OpenStack Developer Mailing List Digest December 3 – 9

The content below is taken from the original (OpenStack Developer Mailing List Digest December 3 – 9), to continue reading please visit the site. Remember to respect the Author & Copyright.

Updates:

Creating a New IRC Meeting Room [9]

Neutron Trunk port feature

Ocata Bugsmash Day [3]

PTG Travel Support Program [5][6]

Finish test job transition to Ubuntu Xenial [12]

 

[1] http://bit.ly/2gEmgix

[2] http://bit.ly/2hm3Cdu

[3] http://bit.ly/2gEpItx

[4] http://bit.ly/2hm3Y3S

[5] http://bit.ly/2gEqjva

[6] http://bit.ly/2hm3QBC

[7] http://bit.ly/2gEviMx

[8] http://bit.ly/2hm5X8c

[9] http://bit.ly/2gEn3ji

[10] http://bit.ly/2hm57Zi

[11] http://bit.ly/2gEsPBE

[12] http://bit.ly/2gAMU9a

[13] http://bit.ly/2gEAOP1


A Strava add-on will now let you write stories about your rides

The content below is taken from the original (A Strava add-on will now let you write stories about your rides), to continue reading please visit the site. Remember to respect the Author & Copyright.

Strava Storyteller is a new add-on that will let you write stories about your rides, including maps and videos.

Your club data will now be accessible through your mobile too.


10 essential PowerShell security scripts for Windows administrators

The content below is taken from the original (10 essential PowerShell security scripts for Windows administrators), to continue reading please visit the site. Remember to respect the Author & Copyright.

PowerShell is an enormous addition to the Windows toolbox that gives Windows admins the ability to automate all sorts of tasks, such as rotating logs, deploying patches, and managing users. Whether it’s specific Windows administration jobs or security-related tasks such as managing certificates and looking for attack activity, there is a way to do it in PowerShell.

Speaking of security, there’s a good chance someone has already created a PowerShell script or a module to handle the job. Microsoft hosts a gallery of community-contributed scripts that handle a variety of security chores, such as penetration testing, certificate management, and network forensics, to name a few.


An AWS user’s take on AWS vs. Microsoft Azure and Google Cloud Platform

The content below is taken from the original (An AWS user’s take on AWS vs. Microsoft Azure and Google Cloud Platform), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you’re new to the field, you will want to choose the platform that will help you get started with cloud computing. As a longtime AWS user, I believe that this is an excellent platform for a future cloud user. But there are also valid reasons for being familiar with all of the leading cloud providers. This post is about AWS vs Microsoft Azure and Google Cloud, with a focus on the following categories: compute, analytics, storage, network, and pricing.


AWS vs Microsoft Azure and Google Cloud Platform

First, let’s say a few words about each of the platforms:

  • Amazon Web Services. Launched in 2006, AWS has a bit of a head start on the other platforms. With constant innovation and improvement over the years, the platform now has more than 70 services with a wide range of coverage. AWS servers are available in 14 geographical regions. The company’s market share is growing steadily, reaching 31% in the second quarter of 2016.
  • Microsoft Azure. Running since 2010, Microsoft Azure is a complex system that provides support for many different services, programming languages, and frameworks. It has 67 services and data centers in 30 different geographical regions. It held 11% of the market as of Q2 2016.
  • Google Cloud Platform. Introduced in 2011, Google Cloud Platform is the youngest of the three. Designed to meet the needs of Google Search and YouTube, it became available to everyone as part of the Google for Work package. It has more than 50 services and 6 global data centers, with another 8 announced for 2017. With only 5% market share and quite aggressive expansion, Google’s moment is yet to come.


Now that we know who we are dealing with, let’s start with our comparison:

Compute

Computing is a fundamental process for your entire business. The advantage of cloud computing is that you have a powerful and expandable computing force at your disposal that is ready when you need it.

The central AWS computing service is Elastic Compute Cloud (EC2). EC2 has become a synonym for scalable computing on demand. Depending on the industry, additions such as AWS Elastic Beanstalk or EC2 Container Service can significantly reduce your costs. At the moment, AWS supports 7 different instance families and 38 instance types, and it offers both regional and zone support.

The heart of Microsoft Azure computing is Virtual Machines and Virtual Machine Scale Sets. Windows client apps can be deployed with the RemoteApp service. Azure offers 4 instance families and 33 instance types, and you can place instances in different regions. Zone support is not provided.

Google Cloud Platform uses Compute Engine for running computing processes. One disadvantage is that its pricing is less flexible compared to AWS and Azure. It supports most of the main services you would need, such as container deployment, scalability, and web and mobile app processing. Google Cloud supports 4 instance families and 18 different instance types, and provides both regional and zone support.

AWS is the clear front runner when it comes to compute power, not just because it offers you the most learning resources, but also because it provides the best learning platform.

Analytics

Cloud computing platforms provide quite a lot of useful data about your business. All you need to do is make the proper analysis.

In the field of data analytics, AWS has made an entry into big data and machine learning. If you don’t need extensive data analysis, you can use its QuickSight service, which will help you discover patterns and draw correct conclusions from the data you’re receiving.

Similarly, Azure has taken steps toward big data and machine learning, but it doesn’t have a specific offering in these areas.

Google Cloud Platform, however, has the most advanced offering for big data analysis, machine learning, and artificial intelligence.

If you’re looking for a high level of data analytics, Google Cloud Platform is probably the best choice. However, if you just want to keep track of your daily business, AWS will serve you just fine.


Storage

Storage is an important pillar of cloud computing because it enables us to allocate all sorts of information (needed for our business) in an online location.

The AWS Simple Storage Service, known as S3, is pretty much the industry standard. As a result, you will find a wealth of documentation, case studies, webinars, sample code, libraries, and tutorials to consult, as well as forum discussions in which AWS engineers participate. It’s also good to know that S3 is object storage, and you can use Glacier as the archiving service.

Azure and Google Cloud Platform both offer quite reliable and robust storage, but you won’t find anywhere near as much documentation and information about them as you will with AWS. They also have working and archive storage tiers and various additional services, but they can’t outperform AWS.

Here, AWS’s deep resources for new users make it the clear champion in this category.

Network

It may come in handy to have your network in the cloud: you can keep a VPN in an isolated space for your team only, a great feature that adds value to your cloud system.

The AWS offering here is quite good. You can use Virtual Private Cloud (VPC) to create your VPN, define your network topology, and create subnets, route tables, private IP address ranges, and network gateways. On top of that, you can use Route 53 as your DNS web service.
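
As a rough sketch of how programmable this is, the snippet below uses the boto3 Python SDK to create a VPC, a subnet, and an internet gateway. The region and CIDR ranges are illustrative placeholders, not recommendations, and the code assumes AWS credentials are already configured.

```python
# Minimal sketch: carve out a VPC with one subnet using boto3.
# Region and CIDR blocks are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the VPC that will isolate our network.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Carve out one subnet inside the VPC's address range.
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")

# Attach an internet gateway so the subnet can reach the outside world.
igw = ec2.create_internet_gateway()
ec2.attach_internet_gateway(
    InternetGatewayId=igw["InternetGateway"]["InternetGatewayId"],
    VpcId=vpc_id,
)
```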

Microsoft Azure also has a solid private networking offer. Its Virtual Network (VNET) allows you to set up your VPN, have a public IP if you want, and use hybrid cloud, firewall, and DNS features.

Google Cloud Platform’s offering is not as extensive. It has the Cloud Virtual Network, and supports subnets, public IPs, firewall protection, and DNS.

The networking category winner is AWS because it has the most reliable DNS provider.

Pricing

At the end of the day, everyone wants to know: “So, how much is that going to cost me?” Because prices for each provider are formed according to your needs and requirements, we can’t quote exact costs here. However, we can tell you about the pricing models that each provider is using.

AWS uses three payment models:

  • On demand: You pay only for the resources and services you use
  • Reserved: Book a quantity of resources upfront for a 1- or 3-year term and pay based on utilization
  • Spot: Bid against other users for unused capacity at a discount

Please note that AWS rounds charges up to the full hour of use.

Azure pricing is a bit more flexible in its metering: it charges per minute, rounded up, with discounts available for upfront commitments. Its pricing models overall, though, aren’t as flexible as the other platforms’.

GCP pricing is similar to Azure’s: it also charges per minute, rounded up in 10-minute increments. In addition to on-demand charging, GCP offers sustained use discounting, which reduces the price of on-demand instances that run for a larger percentage of the month.
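
To see how these rounding rules play out, here is a minimal Python sketch that bills the same 61-minute workload under each model as described above. The hourly rate is an invented placeholder, not a published price.

```python
import math

RATE_PER_HOUR = 0.10  # illustrative placeholder price, not a real quote
run_minutes = 61      # one hour and one minute of actual use

# AWS: usage is rounded up to the full hour.
aws_hours = math.ceil(run_minutes / 60)          # -> 2 hours billed
aws_cost = aws_hours * RATE_PER_HOUR

# Azure: billed per minute of use.
azure_cost = run_minutes * (RATE_PER_HOUR / 60)  # -> 61 minutes billed

# GCP: billed per minute, rounded up in 10-minute increments.
gcp_minutes = math.ceil(run_minutes / 10) * 10   # -> 70 minutes billed
gcp_cost = gcp_minutes * (RATE_PER_HOUR / 60)

print(f"AWS:   {aws_cost:.4f}")    # 0.2000
print(f"Azure: {azure_cost:.4f}")  # 0.1017
print(f"GCP:   {gcp_cost:.4f}")    # 0.1167
```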

Pricing models are a bit tricky. Each platform offers a pricing calculator that can help you estimate costs. If you are considering AWS, I would suggest that you get in touch with a local APN (AWS Partner Network) company, which can help you estimate your monthly costs.

 


RISC OS Interview – Rob Sprowson

The content below is taken from the original (RISC OS Interview – Rob Sprowson), to continue reading please visit the site. Remember to respect the Author & Copyright.

We continue our series of interviews with people in the RISC OS world. In this interview, we catch up with Elesar’s Rob Sprowson.

If you have any other questions, feel free to join in the discussion.

If you have any suggestions for people you would like us to interview, or would like to be interviewed yourself, just let us know….

Would you like to introduce yourself?
Could-have-been basketball player, still not tall enough.

How long have you been using RISC OS?
That’s patchy: starting with an Acorn Electron which my sister and I eventually broke through overuse, then a big gap until picking up a second-hand BBC Micro from the local newspaper in the mid-1990s, then in parallel RISC OS from 1997ish – that would make either 33 or 19 years, depending on which you count.
Oh dear, now I can’t claim I’m 21 any more either.

What other systems do you use?
Mostly Windows, because of the specialist CAD software and other electronics design tools I need to use daily. I have some VMs saved with Linux and FreeBSD, but they’re mostly for testing things or recompiling NetSurf. I don’t really know what I’m doing, but as they’re VMs it doesn’t matter too much if I destroy something through careless typing.

What is your current RISC OS setup?
Singular? Nothing’s that simple. For email I use a Risc PC (well, more specifically my monitors are stacked vertically and I’m too lazy to remove the Risc PC holding the whole pile up – those cases are built like brick bunkers).
For development, a Titanium of course: it’s nice to do an experimental OS rebuild in 1 minute or less, as I don’t like tea and have trouble finding other things to do that take the ‘time it takes to boil a kettle’.
Then there are piles and cupboards and boxes of other things of other vintages which get dragged out for compatibility testing – erm, more than 15 if you include Raspberry Pis, though some of them are on loan rather than machines I myself own.

Do you attend any of the shows and what do you think of them?
This year I got wheeled out on behalf of ROOL for Wakefield and the South West show. Shows are great to hear what normal users think and what they can’t do but would like to, being too deeply buried in the inner workings of something makes it very difficult to see that.
Some of the shows could be freshened up a bit rather than repeating the ‘tables & chairs’ format every year, to attract a larger audience – the show organisers should visit similar trade shows or enthusiast conventions to steal ideas to improve the presentation of RISC OS ones.

What do you use RISC OS for in 2016 and what do you like most about it?
I like that the OS doesn’t get in my way. If I want to save something in the root directory of my hard disc, there’s no patronising error box popping up asking me to confirm that. I used to work with someone who had a book on usability called "Don’t make me think", and that seems a good mantra to work by.

What is your favourite feature/killer program in RISC OS?
Obligatory plug for Pluto here: Pluto Pluto Pluto. Oh, did I mention Pluto?

What would you most like to see in RISC OS in the future?
The bounty scheme that ROOL runs seems to have a good selection of sensible "big ticket" items in it, so I’d go with that, since Ben/Steve/Andrew know their onions.
Reasonably frequently someone will ask on their forum "is feature X available" when there’s a bounty for X already open, but you never see the total going up so I guess they’re a source of hot air rather than stumping up just a tenner to help make something happen. The world runs on these shiny money tokens in our pockets, so people shouldn’t get too upset if you ask someone to do something for nothing and nothing happens.

Can you tell us about what you are working on in the RISC OS market at the moment?
There are a couple of CloudFS enhancements in the immediate pipeline, but it tends to get busy at Elesar, which is distracting, because some of the protocols to talk to the servers are eye-wateringly complicated and you really need to be ‘in the zone’ to work on them.

Any surprises you can’t talk about, or dates to tease us with?
There are 3 hardware projects and 3 software projects on the RISC OS side of the Elesar hob. I tend to come up with ideas faster than they can be implemented, so sometimes things get culled because they’re superseded, or because during the derisking stage it becomes apparent that by the time they’re finished they’d no longer be commercially viable.

Apart from iconbar (obviously) what are your favourite websites?
Iconbar who?

Santa Claus is a regular iconbar reader. Any not-so-subtle hints you would like to drop him for presents this year (assuming you have been very good)?
A time machine, and a whole cod, to go back in time and slap some people with. You know who you are…I’m coming for you.

Do you have any New Year’s Resolutions for 2017?
No, I don’t believe in that mumbo jumbo. Only humans attach significance to January 1st; we’re just orbiting the sun same as the previous day.

Any questions we forgot to ask you?
How many mouse buttons I’ve worn out? 2 I think, but fortunately the micro switches are easy to replace and good for another 1 million clicks!

Elesar website



Top 10 Ways to Speed Up Old Technology

The content below is taken from the original (Top 10 Ways to Speed Up Old Technology), to continue reading please visit the site. Remember to respect the Author & Copyright.

Even if you are building a brand new computer, odds are you have some old gear around the house you’d like to get as much life out of as possible. From phones to old laptops to old TVs, here are some tips to speed up and clean up your older tech.



How to add & use Pickit Free Images add-in to Microsoft Office

The content below is taken from the original (How to add & use Pickit Free Images add-in to Microsoft Office), to continue reading please visit the site. Remember to respect the Author & Copyright.

Presentations should be illustrative, not exhaustive, and it is images that make a presentation illustrative. They help us in many ways; for instance, an image can emphasize a point without any ambiguity. A new add-in for Microsoft Office – Pickit – is designed just for this purpose.

Pickit makes it convenient for Microsoft Office customers to tell their stories by leveraging specially curated photos. The add-in is designed to work in Office apps such as OneNote 2016 or later, PowerPoint 2016, and Word 2016. The Pickit plugin is also compatible with the Mac and online versions of the Office applications.

Pickit Free Images add-in for Office

If you have Office 365 installed on your system, launch the PowerPoint application and hit the ‘Insert’ tab.


Next, navigate to ‘Store’ and look for ‘Pickit’ add-in, and select it.

Now, you have authentic visuals from the world’s leading image makers, right at your fingertips and in the task pane.

Once downloaded, the Pickit icon will appear as a button in the PowerPoint and Word ribbons.

Just carry out a keyword search or select a category to find images you are looking for.

All images are legal and free to use. No license or additional cost involved.

Pickit appears to be a perfect option for presentations, as it offers a quick and easy way to bring your work to life without leaving your presentation.

When you are not sure what to search for, just browse Pickit’s professionally curated collections. There’s a new image collection, “Talk Like a Rosling,” which features inspired content from statistician and presenter Hans Rosling and the latest project from his team at Gapminder—Dollar Street.

You can download the Pickit add-in from the Office Store in the Office apps or the web. For more information or to add the Pickit add-in, visit office.com.




“Dear Boss, I want to attend the OpenStack Summit”

The content below is taken from the original (“Dear Boss, I want to attend the OpenStack Summit”), to continue reading please visit the site. Remember to respect the Author & Copyright.

Want to attend the OpenStack Summit Boston but need help with the right words for getting your trip approved? While we won’t write the whole thing for you, here’s a template to get you going. It’s up to you to decide how the Summit will help your team, but with free workshops and trainings, technical sessions, strategy talks and the opportunity to meet thousands of like-minded Stackers, we don’t think you’ll have a hard time finding an answer.

 

Dear [Boss],

All I want for the holidays is to attend the OpenStack Summit in Boston, May 8-11, 2017. The OpenStack Summit is the largest open source conference in North America, and the only one where I can get free OpenStack training, learn how to contribute code upstream to the project, and meet with other users to learn how they’ve been using OpenStack in production. The Summit is an opportunity for me to bring back knowledge about [Why you want to attend! What are you hoping to learn? What would benefit your team?] and share it with our team, while helping us get to know similar OpenStack-minded teams around the world (think 60+ countries and nearly 1,200 companies represented).

If I register before mid-March, I get early bird pricing–$600 USD for 4 days (plus an optional day of training). Early registration also allows me to RSVP for trainings and workshops as soon as they open (they always sell out!), or sign up to take the Certified OpenStack Administrator exam onsite.

At the OpenStack Summit Austin last year, over 7,800 attendees heard case studies from Superusers like AT&T and China Mobile, learned how teams are using containers and container orchestration like Kubernetes with OpenStack, and gave feedback to Project Teams about user needs for the upcoming software release. You can browse past Summit content at openstack.org/videos to see a sample of the conference talks.

The OpenStack Summit is the opportunity for me to expand my OpenStack knowledge, network and skills. Thanks for considering my request.

[Your Name]


For God’s sake, stop trying to make Microsoft Bob a thing. It’s over

The content below is taken from the original (For God’s sake, stop trying to make Microsoft Bob a thing. It’s over), to continue reading please visit the site. Remember to respect the Author & Copyright.

Vid Microsoft has entered the virtual reality race, announcing a new headset called Evo in collaboration with Intel.

The headset will have the same advanced features of current high-end products including the Oculus Rift, HTC Vive and soon-to-be-launched Sulon Q, but will work with mid-range laptops, the company said.

Up to now, Microsoft has focused its VR efforts on augmented reality (AR) and its Hololens glasses that add digital elements on a screen that you look through to the real world. The Evo will go full VR, covering your eyes and then reflecting an augmented reality back to you.

Critically, the Evo will allow for “inside-out” spatial awareness, meaning that sensors will be built into the headset to allow you to walk around a physical space with the headset, rather than requiring that external sensors be set up within your room to define a space.

That inside-out technology is what Sulon Q hopes will help it gain a first-mover advantage on the market when it launches early next year, while both the Rift and Vive are furiously working on their own versions.

Promotional videos for Microsoft’s new Evo also show the headset as being wireless – again, something that Sulon Q is pushing as a unique advantage to its system (it has a full Windows 10 computer built into the headset), and something that both Oculus and Vive are working on.

At the moment, high-end headsets have to be physically connected by a wire to a high-spec computer. The Evo, on the other hand, will be wirelessly paired with a computer to achieve, well, this:

Youtube Video

Hm, what does this remind us of? Oh yes, that’s right. Microsoft Bob. Which it tried to make a success in 1995. And failed. It even has the little dog in the corner, too. Maybe 2017 will be kinder.

Youtube Video

Back to 2016: if Microsoft announced a new VR headset that wasn’t wireless and didn’t have inside-out tracking, it would have been laughed out of the room. The big question is when will it launch?

And on that Microsoft is being wildly vague. It says its Hololens should be available “in the first half of 2017,” and it says it has already shared the specs for PCs that will power its new headset, with those PCs available “next year.” It says developer kits will be made available to developers at the Game Developers Conference in San Francisco in February.

And it announced that the hardware developer 3Glasses will “bring the Windows 10 experience to their S1 device in the first half of 2017” – but that’s not the same as saying Microsoft Evo headsets will be available by then.

Incidentally the minimum specs for the new headsets are:

  • Intel Mobile Core i5 dual-core
  • Intel HD Graphics 620 (GT2) or equivalent
  • 8GB RAM
  • HDMI 1.4 or 2
  • 100GB drive (preferably solid state)
  • Bluetooth 4.0

Taking all the announcements together, it looks as though Microsoft is aiming at a Q3 or Q4 2017 launch of its VR headset – a timeline that is likely to give Oculus, Vive and Sulon a few months’ head start, but probably not enough of one to steal the market.

Where Microsoft and Intel really could win, however, is if they do manage to create a good VR system that requires a less powerful machine to run. That would pull down the price tag for the whole system while placing it above the current best offering on the market – the PlayStation VR – in terms of quality. ®




Free ebook: Containerized Docker Applications Lifecycle with Microsoft Tools and Platform

The content below is taken from the original (Free ebook: Containerized Docker Applications Lifecycle with Microsoft Tools and Platform), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft is offering a new free ebook titled Containerized Docker Applications Lifecycle with Microsoft Tools and Platform, by Cesar de la Torre. Read more at VMblog.com.


Bluetooth 5 is out: Now will home IoT take off?

The content below is taken from the original (Bluetooth 5 is out: Now will home IoT take off?), to continue reading please visit the site. Remember to respect the Author & Copyright.

Bluetooth is aiming straight for the internet of things as the fifth version of the wireless protocol arrives with twice as much speed for low-power applications.

Bluetooth Low Energy (BLE), which gains the most from the new Bluetooth 5 specification, can now go as fast as 2Mbps (megabits per second) and typically can cover a whole house or a floor of a building, the Bluetooth Special Interest Group (SIG) said Wednesday. Those features could help make it the go-to network for smart homes and some enterprise sites.

The home IoT field is pretty open right now because most people haven’t started buying things like connected thermostats and door locks, ABI Research analyst Avi Greengart said. Bluetooth starts out with an advantage over its competition because it’s built into most smartphones and tablets, he said. Alternatives like ZigBee and Z-Wave often aren’t.

“It’s easy to predict that within two to three years, pretty much every phone will have Bluetooth 5,” Greengart said. “Sometimes ubiquity is the most important part of a standard.”

As the new protocol rolls out to phones, users should be able to control Bluetooth 5-equipped devices without going through a hub.

Bluetooth is in a gradual transition between two flavors of the protocol. The “classic” type is what’s been linking cellphones to cars and mice to PCs for years. BLE, a variant that uses less power, can work in small, battery-powered devices that are designed to operate for a long time without human interaction.

BLE devices now outnumber classic Bluetooth products and most chips include both modes, said Steve Hegenderfer, director of developer programs at the Bluetooth SIG.

With Bluetooth 5, BLE matches the speed of the older system, and in time, manufacturers are likely to shift to the low-power version, he said.

Range has quadrupled in Bluetooth 5, so users shouldn’t have to worry about getting closer to their smart devices in order to control them. Also, things like home security systems – one of the most common starting points for smart-home systems – will be able to talk to other Bluetooth 5 devices around the house, Parks Associates analyst Tom Kerber said.

Another enhancement in the new version will help enterprises use Bluetooth beacons for location. BLE has a mechanism for devices to broadcast information about what they are and what they can do so other gear can coordinate with them. Until now, those messages could only contain 31 bytes of information.

Now they can be eight times that size, making it easier to share information like the location and condition of enterprise assets, such as medical devices in hospitals. Google’s Physical Web concept, intended to let users easily interact with objects, is based on BLE beacons.
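
To get a feel for that limit, here is a small Python sketch that packs advertising data in the standard length-type-value layout used by BLE advertisements and checks it against the old 31-byte ceiling and the new, eight-times-larger one. The field contents are invented for illustration.

```python
# Sketch: pack BLE advertising data as length-type-value fields and
# check it against the legacy (31-byte) and extended (8x) limits.
LEGACY_LIMIT = 31
EXTENDED_LIMIT = 31 * 8  # "eight times that size", per Bluetooth 5

def pack_ad(fields):
    """Each field becomes: [1-byte length][1-byte AD type][value bytes]."""
    payload = b""
    for ad_type, value in fields:
        payload += bytes([len(value) + 1, ad_type]) + value
    return payload

# Made-up example: flags, a shortened name, and a blob of asset data.
adv = pack_ad([
    (0x01, b"\x06"),                # Flags: LE General Discoverable
    (0x08, b"asset-42"),            # Shortened local name
    (0xFF, b"location=ward-3;ok"),  # Manufacturer-specific data
])

print(len(adv), "bytes")
print("fits legacy advertisement:", len(adv) <= LEGACY_LIMIT)
print("fits extended advertisement:", len(adv) <= EXTENDED_LIMIT)
```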

Bluetooth still needs to fill in a few pieces of the puzzle, ABI’s Greengart said.

The new, longer range is an improvement, but a mesh would be better, he said. In a mesh configuration, which is available in competing networks like ZigBee and Thread, each device only needs to connect with the one closest to it. That takes less power, and it’s better than relying on each device’s range to cover a home, because walls and other obstacles can keep signals from reaching their full range, he said. The Bluetooth SIG is at work on a mesh capability now.

Consumers are also waiting for a high-fidelity audio connection to wireless headphones, a need that’s getting more urgent as phone makers phase out physical jacks, Greengart said. As with mesh, it’s coming from Bluetooth but not here yet.

Although Bluetooth 5 makes strides that could help drive IoT adoption, the field is still open, he said. “There’s room for almost any solution to succeed, including Wi-Fi.”



Best Practices for Domain Controller VMs in Azure

The content below is taken from the original (Best Practices for Domain Controller VMs in Azure), to continue reading please visit the site. Remember to respect the Author & Copyright.


This post will explain the best practices and support policies for deploying domain controllers (DCs) as virtual machines in Microsoft Azure.

What About Azure AD Domain Services?

In the not too distant past, if you wanted to run an application in the cloud with domain membership and consistent usernames and passwords, then you had no choice – you had to deploy one or more (preferably 2 or more) domain controllers as virtual machines in the cloud. Azure Active Directory (AD) didn’t offer domain membership, and couldn’t offer the same type of username/password authentication and authorization that you get with Active Directory Domain Services.

 

 

However, things have changed … slightly. Azure AD has recently added Domain Services as a generally available and supported feature. But be careful; Azure AD Domain Services might not be what you think it is!

Azure AD Domain Services allows you to deploy a domain-dependent application in the cloud without the additional cost of virtual machines that are functioning as domain controllers. However, Azure AD Domain Services is not another domain controller in your existing domain – in fact, it is not even your existing domain. Using Azure AD Connect you can clone your domain into Azure AD Domain Services. This means that your Organizational Units (OUs), group policies, groups, and so on can live on in the cloud, but in a different domain that is a clone of your on-premises domain.

Stretching an Active Directory domain to Azure virtual machines [Image Credit: Aidan Finn]

If you want your on-premises AD forest to be truly extended into the cloud, then today, the best option is to continue to use virtual machines running the Active Directory Domain Services role. I do suspect that this will eventually change (I hope that AD goes the way of Exchange). My rule of thumb is this: if I want a hybrid cloud with cross-site authentication and authorization, then I will run domain controllers in the cloud.

Backup

Running DCs as virtual machines in Azure is safe, as long as you follow some rules. If a domain controller runs an OS older than Windows Server 2012 (WS2012), you should never copy its virtual hard disks or restore it from backup. Azure supports the VM-GenerationID feature introduced in WS2012, so domain controllers running WS2012 or later can safely be restored from backup.

There is a bit of a “gotcha” with this VM-GenerationID feature. The normal way to shut down virtual machines in Azure is from the portal or PowerShell, but doing so deallocates the virtual machine and resets the VM-GenerationID, which is undesirable for a domain controller. Always shut down domain controllers using the shutdown command in the guest OS instead.

IP Configuration

You should never configure the IP settings of an Azure virtual machine from inside the guest OS. A new domain controller will complain about having a DHCP configuration – let it complain, because no harm will come of it if you follow the correct procedures.

Edit the settings of the NIC of each virtual domain controller in the Azure Portal. Set the NIC to use a static IP address and record this IP address. Your new DC(s) will be the DNS servers of your network; open the settings of the virtual network (VNet) and configure the DNS server settings to use the IP addresses of your new domain controllers.

Note that if you are adding a new domain controller to an existing on-premises domain, then you will need a site-to-site network connection and you should temporarily configure the VNet to use the IP address of one of your on-premises DCs as a DNS server; this will allow your new cloud-based DC to find the domain so that it can join it.


Domain Controller Files

I rarely pay attention to anything in the wizard when promoting a new domain controller; it’s all next-next-next, and I doubt I’m unique. However, there is one very important screen that you must not overlook.

Azure implements write caching on the OS disk of virtual machines. This can cause problems for databases such as AD, leading to corruption issues such as a USN rollback. You must add a data disk with caching disabled to the virtual machine and use this new volume to store the Active Directory database, logs, and SYSVOL.

There is no additional cost for this if you use standard storage disks; standard storage is billed based on data stored, not the overall size of the deployed disks. Note that Azure Backup instance charges are based on the size of the disks, but you shouldn’t be storing so much data that you exceed the 50GB-500GB price band and incur additional instance charges.

Active Directory Topology

If you work in a large enterprise, then you’ve probably already realized that it would be a good idea to define an AD topology for your new site (Azure). However, many of you work in the small-to-midsized enterprise world, so you’ve never had to do much in AD Sites and Services.

You should define the following in AD Sites and Services for each region that you deploy DCs into: a site representing the Azure region, the virtual network’s IP subnets associated with that site, and site links connecting it to your other sites.

You can perform some advanced engineering of AD replication to reduce outbound data transfer costs. Be careful because some advanced AD engineering can have unintended consequences!

Read-Only Domain Controllers

RODCs are supported in Azure. You can choose to deploy RODCs in Azure if you need to restrict what secrets are stored in the cloud; you can filter which attributes are available in the cloud if you wish. Most Windows roles work well with RODCs, but make sure that your applications will work well and not become overly dependent on site-to-site network links.

Global Catalog Servers

Every DC in a single-domain forest should be a global catalog server; this does not incur any additional replication traffic (outbound data transfer) costs.

However, multi-domain forests use universal groups and these require careful placement and usage of global catalog (GC) servers. You should place at least one GC server in Azure if you require the multi-domain forest to continue authenticating users if the site-to-site link fails – a GC is required to expand universal group membership, and a DC must verify that the user is not in a universal group with a DENY permission.

Note that the placement, or lack of placement, of GCs will impact replication traffic if you have stretched a multi-domain AD forest to the cloud.

ADFS and Azure AD Connect

One of the risks of using ADFS to integrate your AD forest with Azure AD is that all of your cloud services will be unavailable if Azure AD cannot talk to your ADFS cluster. The simplest solution to this is to move ADFS (and some domain controllers) from on-premises to Azure, effectively putting your critical component next door to the service that requires reliable connectivity.


I have also opted to deploy Azure AD Connect in an Azure virtual machine. The benefit is that in a disaster recovery scenario, my connection to Azure AD is already running in the cloud. The downside is latency: it can take up to 15 minutes (with the most frequent setting on an AD site link) for an on-premises AD site to replicate to a site in Azure, and then up to 30 minutes (the default and most frequent replication option in Azure AD Connect) for changes to appear in Azure AD – roughly 45 minutes in the worst case – though you can manually trigger both inter-site replication in AD and a sync in Azure AD Connect.

 

The post Best Practices for Domain Controller VMs in Azure appeared first on Petri.


OpenFog Consortium: It’s Been a Very Good Year

The content below is taken from the original (OpenFog Consortium: It’s Been a Very Good Year), to continue reading please visit the site. Remember to respect the Author & Copyright.

Fog computing is gaining traction across industries and academia, and across the world.  In just one year, the OpenFog Consortium has grown from six founding members to 51 members in 14 countries—and still counting! But it’s not just this flood of interest that is impressive—it’s the work our members are doing together to accelerate fog […]


Google and Slack deepen partnership in the face of Microsoft Teams

The content below is taken from the original (Google and Slack deepen partnership in the face of Microsoft Teams), to continue reading please visit the site. Remember to respect the Author & Copyright.

Slack and Google have vastly deepened their partnership roughly a month after Microsoft announced its competitor to the popular enterprise chat service.

Wednesday saw the announcement of several new features aimed at making G Suite, Google’s set of productivity software and services, more useful to people who use Slack. The functionality resulting from the partnership will make it easier to share and work on files stored in Google Drive using Slack.

Slack and Google were early partners in the lifecycle of the chat service, which gives business users a set of rooms where they can discuss work, share files, and more. Microsoft recently announced Teams, a similar service integrated into Office 365 that’s currently in beta.

A representative for Slack said via email that Microsoft’s introduction of a competitive product that is expected to become generally available next year didn’t have an impact on the company’s decision to deepen its integration with Google.

But one way or another, the partnership has clear benefits for both companies. Slack becomes more useful for G Suite users, and Google gets to make its productivity suite a more attractive offering to those organizations that want to use Slack.

In a thoroughly modern turn, Google is building a Drive Bot, which will inform users about changes to a file and let them approve, reject, and resolve comments in Slack rather than opening Google Docs. It goes along with Slack’s continuing embrace of bots as a key part of the chat service’s vision of productivity.

When users share a file from Drive in Slack, the chat service will check the sharing permissions on the file and make sure that it’s set up so that everyone in a channel can access what’s being shared. If the settings don’t match, users will be asked to modify them.
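
Neither company has said how the check is implemented, but conceptually it amounts to comparing the set of people who can read the file with the set of people in the channel. A hypothetical Python sketch, with all names and data invented:

```python
# Hypothetical sketch of the permission check described above:
# compare who can read a Drive file with who is in the Slack channel.
def missing_access(file_readers, channel_members):
    """Return channel members who cannot read the shared file."""
    return set(channel_members) - set(file_readers)

# Invented example data standing in for API responses.
file_readers = {"ana@example.com", "bo@example.com"}
channel_members = {"ana@example.com", "bo@example.com", "cy@example.com"}

blocked = missing_access(file_readers, channel_members)
if blocked:
    # In the real integration, Slack would prompt the sharer to widen
    # the file's sharing settings at this point.
    print("Ask to grant access to:", ", ".join(sorted(blocked)))
```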

To go along with that, users will be able to associate Team Drive folders with particular Slack channels. That means the Marketing folder for members of the marketing team inside a company can also be linked to the team’s Slack channel.

When users upload files to a Slack channel with a linked Team Drive, the files will be backed up inside Google’s cloud storage service. When Team Drive files get changed, users will be notified using Slack.

In addition, Google and Slack are working together to give users access to previews of Google Docs that they share in Slack, so it’s possible for people to see inside a file at a glance without having to open it.

It’s not yet clear when all of this functionality will be making its way into Slack, however. The chat service startup will let users sign up to be notified when the updates arrive.



Codemade Is a Big Collection of Open-Source Electronics Projects

The content below is taken from the original (Codemade Is a Big Collection of Open-Source Electronics Projects), to continue reading please visit the site. Remember to respect the Author & Copyright.

When it comes to tracking down DIY electronics project ideas, you’ve got a lot of solid web sites out there. Codemade is a web app that gathers a bunch of those sources together.



UK vinyl sales made more money than music downloads last week

The content below is taken from the original (UK vinyl sales made more money than music downloads last week), to continue reading please visit the site. Remember to respect the Author & Copyright.

Digital music might be the future, but legacy formats like vinyl aren’t going away any time soon. New figures from the Entertainment Retailers Association (ERA) have shown that more money was spent on vinyl records than digital music downloads in the UK last week, highlighting a significant shift in how consumers are choosing to buy their music.

Figures show that during week 48 of 2016, consumers spent £2.4 million on vinyl, while downloads took £2.1 million. Compare that to the same period last year when £1.2 million was spent on records, with digital downloads bringing in £4.4 million. The ERA puts the surge in sales down to recent shopping events like Black Friday and the popularity of the format as a Christmas gift. It’s also helped by the fact that Sainsbury’s and Tesco now stock records in many of their branches.

It’s welcome news for vinyl lovers and the music industry in general, but digital music is also going from strength to strength. Instead of buying music to keep, Brits are increasingly turning to streaming services like Spotify to get their music fix. Last weekend, The Weeknd broke streaming records on Spotify after his new album was streamed 40 million times on day one and 223 million times in its first week.

It’s also worth considering that vinyl albums are often a lot more expensive than downloads. BBC News reports that last week’s biggest-selling vinyl was Kate Bush’s triple-disc live album Before The Dawn, which costs around £52. The same album is £13 on Amazon. Downloaded albums are still more popular, though: last week saw 295,000 digital downloads versus 120,000 vinyl album sales.
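
To put those numbers side by side: £2.4 million across 120,000 vinyl albums implies an average of about £20 per record, while £2.1 million across 295,000 downloads works out to roughly £7 per album, so vinyl’s revenue lead rests on far fewer, far pricier sales.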

Recent research suggests that some people don’t even buy vinyl to listen to, with 7 percent of collectors admitting they don’t own a record player. It’s believed that some buy records to help support artists they like, while others may use the sleeves to decorate their home.

Via: BBC News


Say goodbye to MS-DOS command prompt

The content below is taken from the original (Say goodbye to MS-DOS command prompt), to continue reading please visit the site. Remember to respect the Author & Copyright.

My very first technology article, back in 1987, was about MS-DOS 3.30. Almost 30 years later, I’m still writing, but the last bit of MS-DOS, cmd.exe — the command prompt — is on its way out the door.

It’s quite possible that you have been using Microsoft Windows for years — decades, even — without realizing that there’s a direct line to Microsoft’s earliest operating system or that an MS-DOS underpinning has carried over from one Windows version to another — less extensive with every revision, but still there nonetheless. Now we’re about to say goodbye to all of that.

Interestingly, though, there was not always an MS-DOS from Microsoft, and it wasn’t even dubbed that at birth. The history is worth reviewing now that the end is nigh.

Back in 1980, the ruling PC operating system was Digital Research’s CP/M for the Z80 processor. At the same time, Tim Paterson created the Quick and Dirty Operating System (QDOS). This was a CP/M clone with a better file system for the hot new processor of the day, the 8086. At the time, no one much cared.

Until, that is, IBM decided to build an 8086-based PC. For this new gadget, IBM needed to settle on programming languages and an operating system. It could get the languages from a small independent software vendor called Microsoft, but where could it get an operating system?

The obvious answer, which a 25-year-old Bill Gates seconded, was to go straight to the source: CP/M’s creator and Digital Research founder, Gary Kildall. What happened next depends on whom you believe. But whether Kildall was really out flying for fun when IBM came by to strike a deal for CP/M for the x86 or not, he didn’t meet with IBM, and they didn’t strike a deal.

So IBM went back to Microsoft and asked it for help in finding an operating system. It just so happened that Paul Allen, Microsoft’s other co-founder, knew about QDOS. Microsoft subsequently bought QDOS for approximately $50,000 in 1981. Then, in short order, IBM made it one of the PC’s operating systems, Microsoft renamed QDOS to MS-DOS, and, crucially, it got IBM to agree that Microsoft could sell MS-DOS to other PC makers. That concession was the foundation on which Microsoft would build its empire.

Late last month, in Windows 10 Preview Build 14971, the command prompt was put out to pasture. Dona Sarkar, head of the Windows Insider Program, wrote, “PowerShell is now the defacto command shell from File Explorer. It replaces Command Prompt (aka, cmd.exe).”

That “defacto” suggests that it’s not all over for the command prompt. And it’s true that you can still opt out of the default by opening Settings > Personalization > Taskbar, and turning “Replace Command Prompt with Windows PowerShell in the menu when I right-click the Start button or press Windows key+X” to “Off.”

But you might as well wave bye-bye to the old command prompt. Build 14971 isn’t just any beta. It’s the foundation for the Redstone 2 upgrade, a.k.a. Windows 10 SP2. This is the future of Windows 10, and it won’t include this oldest of Microsoft software relics.

PowerShell, which just turned 10, was always going to be DOS’s replacement. It consists of a command-line shell and a .Net Framework-based scripting language. PowerShell was added to give server administrators fine control over Windows Server. Over time, it has become a powerful system management tool for both individual Windows workstations and servers. Command.com and its NT-twin brother, cmd.exe, were on their way out.

They had a good run. A good way to understand how they held out for so long is to look at DOS as a house under constant renovation.

First, all there was was the basic structure, the log cabin, if you will, of Microsoft operating systems. That log cabin was given a coat of paint, which is what Windows 1.0 amounted to — MS-DOS all the way, with a thin veneer of a GUI. Over time, Microsoft completely changed the façade in ways that made the old log cabin completely unrecognizable.

With Windows NT in 1993, Windows started replacing the studs and joists as well. Over the years, Microsoft replaced more and more of MS-DOS’s braces and joints with more modern and reliable materials using improved construction methods.

Today, after decades, the last pieces of the antique structure are finally being removed. All good things must come to an end. It’s way past time. Many security problems in Windows trace back to its reliance on long-antiquated software supports.

Still, it’s been fun knowing you, MS-DOS. While you certainly annoyed the heck out of me at times, you were also very useful back in your day. I know many programmers and system administrators who got their start with you on IBM PCs and clones. So, goodbye and farewell.

While few users even bothered to look at you these days, you helped launch the PC revolution. You won’t be forgotten.

This story, “Say goodbye to MS-DOS command prompt,” was originally published by Computerworld.


Announcing our new online training series

The content below is taken from the original (Announcing our new online training series), to continue reading please visit the site. Remember to respect the Author & Copyright.

At the end of this week, with our final Picademy of 2016 taking place in Texas, we will have trained over 540 educators in the US and the UK this year, something of which we’re immensely proud. Our free face-to-face training has proved hugely popular: on average, we receive three eligible applications for each available place! However, this model of delivery is not without its limitations: after seeing our Picademy attendees getting excited on Twitter, we often get questions like: “Why haven’t you run a Picademy near me yet? When are you coming to train us?”

[Cartoon: a would-be learner marooned at a desk on a tiny desert island with a keyboard, monitor and Raspberry Pi, while a trainer in a Pi T-shirt waits under a palm tree and a shark circles.]

We grew frustrated at having to tell people that we didn’t have plans to provide Picademy in their region in the foreseeable future, so we decided to find a way to reach educators around the world with a more accessible training format.

We’re delighted to announce a new way for people to learn about digital making from Raspberry Pi: two free online CPD training courses, available anywhere in the world. The courses will run alongside our face-to-face training offerings (Picademy, Skycademy, and Code Club Teacher Training), and are facilitated by FutureLearn, a leading platform for online educational training. This new free training supports our commitment to President Obama’s Computer Science For All initiative, and we’re particularly pleased to be able to announce it just as Computer Science Education Week is getting underway. Here’s the lowdown on what you can expect:

Course 1: Teaching Physical Computing with Raspberry Pi and Python

[Image: a Raspberry Pi with a "traffic lights" add-on board of red, yellow and green LEDs, surrounded by electronics components, a mouse and a keyboard.]

This four-week course will introduce you to physical computing, showing you how easy it is to create a system that responds to and controls the physical world, using computer programs running on the Raspberry Pi. You’ll apply your new-found knowledge to a series of challenges, including controlling an LED with Python, using a button press to control a circuit, and making a button and LED game.
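
To give a flavour of those challenges, here is a minimal Python sketch in the spirit of the course material, using the gpiozero library that ships with Raspbian. The GPIO pin numbers are just an example wiring, not something the course prescribes.

```python
# Button-controlled LED: press to light, release to switch off.
# GPIO pin numbers are an example wiring, not prescribed by the course.
from signal import pause

from gpiozero import LED, Button

led = LED(17)       # LED (with resistor) on GPIO 17
button = Button(2)  # push button on GPIO 2

button.when_pressed = led.on
button.when_released = led.off

pause()  # keep the program alive, waiting for button events
```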

If you’re a teacher, you’ll also have the chance to develop ideas for using Raspberry Pi and Python in your classroom, and to connect with a network of other educators.

Course 2: Teaching Programming in Primary Schools

[Image: two primary-school pupils programming in Scratch on laptops in a classroom, with a teacher helping one of them.]

This four-week course will provide a comprehensive introduction to programming, and is designed for primary or K-5 teachers who are not subject specialists. Over four weeks, we’ll introduce you to key programming concepts. You’ll have the chance to apply your understanding of them through projects, both unplugged and on a computer, using Scratch as the programming language. Discover common mistakes and pitfalls, and develop strategies to fix them.

Registration opens today, with the courses themselves starting mid-February 2017. We hope they will inspire a new army of enthusiastic makers around the world!

Visit our online training page on FutureLearn.

The post Announcing our new online training series appeared first on Raspberry Pi.



Qarnot’s Home Heating Servers Now Plugged into Data Centers

The content below is taken from the original (Qarnot’s Home Heating Servers Now Plugged into Data Centers), to continue reading please visit the site. Remember to respect the Author & Copyright.