Have you moved into a public cloud lately? The first step is to choose the size of the machine instance from a standard machine configuration that has enough vCPUs and enough memory. Of course, cloud providers offer custom machine instances, so you can pick the exact right amount of vCPUs and memory.
But whether it’s a standard or a custom machine instance, enterprises simply guess at the correct size, using on-premises systems as a guide. It’s a logical approach, but it’s not realistic. You rarely run the same workloads on the same server types in the clouds. Moreover, most applications will undergo some refactoring before they end up in the cloud. It’s apples and oranges.
As a result, many enterprises overestimate the resources they need, so they waste money. Some underestimate the resources they need and, thus, suffer performance and stability problems.
Cloud providers will tell you that their standard machine instances let cloud users select the best configurations for their workloads. Clearly, that’s not true. What the public cloud providers should do is build mechanisms that automatically configure the machine for the exact right amount of resources for the workload at hand: autosizing. If a platform is running a workload, it should be able to automatically profile that workload and configure that machine for the workload’s exact needs.
Yes, cloud providers already offer autoscaling and autoprovisioning, and that’s great. But they don’t address machine sizing.
The cloud providers should be able to offer autosizing of machine instances, with a little work. We already have infrastructure as code, where the applications themselves dynamically configure the resources they need. The same concept should be applied to machine instances, so users don’t have to guess. After all, they’re not the cloud infrastructure experts — the providers are.
Microsoft has published its own distribution of FreeBSD 10.3 in order to make the OS available and supported in Azure.
Jason Anderson, principal PM manager at Microsoft’s Open Source Technology Center, says Redmond decided to cook its own distribution because “In order to ensure our customers have an enterprise SLA for their FreeBSD VMs running in Azure, we took on the work of building, testing, releasing and maintaining the image”.
Microsoft did so “to remove that burden” from the FreeBSD Foundation, which relies on community contributions.
Redmond is not keeping its work on FreeBSD to itself: Anderson says “the majority of the investments we make at the kernel level to enable network and storage performance were up-streamed into the FreeBSD 10.3 release, so anyone who downloads a FreeBSD 10.3 image from the FreeBSD Foundation will get those investments from Microsoft built in to the OS.”
Code will flow both ways: Anderson says “… our intent is to stay current and make available the latest releases shortly after they are released by the FreeBSD Release Engineering team. We are continuing to make investments to further tune performance on storage, as well as adding new Hyper-V features – stay tuned for more information on this!”
Microsoft says it will support its distribution when run in Azure.
Redmond’s rationale for the release is that plenty of software vendors use FreeBSD as the OS for software appliances. That reasoning was behind Microsoft’s 2012 decision to ensure FreeBSD could run as a guest OS under Hyper-V. In your own bit barns, your guest OSes are your own problem. Microsoft clearly decided it needed something more predictable for Azure, although it has in the past allowed custom FreeBSDs to run as cloudy VMs.
Of course Microsoft has also allowed Linux on Azure VMs for years, so news of the FreeBSD effort feels like an effort to ensure the platforms cloud users want are available rather than a startling embrace of open source to rank with Azure’s don’t-call-it-a-Linux-for-switches or the announcement of SQL Server for Linux.
But it’s still just a little surprising to see Microsoft wade into development of FreeBSD: this is not your father’s Microsoft.
One last thing: when Microsoft announced it would ensure FreeBSD runs on Hyper-V, NetApp was one of its collaborators. NetApp knows FreeBSD inside out, because Data ONTAP is built on it. But NetApp is absent from the vendors listed in Microsoft’s announcement of its FreeBSD efforts. Which might put the kybosh on our imagined cloud-spanning software-defined NetApp rigs. ®
We are happy to announce the availability of the Microsoft Azure service resiliency guidance page. We created this page because our customers have asked us for clear guidance on what to do when they experience a service disruption. Since a service disruption can be localized (perhaps an issue on a specific host or even code within a virtual machine) or more general (impacting any number of machines within a region) it is important to understand both the scope of the impact and what you can do if you are experiencing such an event. In general, it is good practice to check to see if the service disruption you are experiencing is localized to a subset of your application, all of your subscription, or the greater region as a whole (the last of which you can easily check by looking at the Azure Status Dashboard). In some cases, you may decide you want to initiate your own failover (either within a region, or to a different region, depending on the scope of impact you are experiencing). To help you understand the impact of a failover, and to help you with your options, we have published a list (by service) of failover options that you can consult and consider as part of your disaster recovery processes.
We have started with some of our most fundamental services: Virtual Networks, Virtual Machines, Storage (blobs, queues, tables and files), Key Vault, Cloud Services and SQL Databases. Each has its own article so that you can search for them easily, though to help (since we know there are a lot of services available), we have also created an index page to help you find the guidance that’s most suited to your service usage. You can find the current list on the Microsoft Azure service resiliency guidance page.
As always, we are interested in your feedback on this page (and the service guidance pages), as well as what we can do to improve them for you and what other resiliency materials you would find helpful. If you have any comments or questions, feel free to leave them in the comments section below or you can email us on the resiliency team at [email protected].
Believe it or not, today’s Tech World isn’t the first time Lenovo has shown off a pair of smart sneakers. Just about this time last year, the company revealed a pair of kicks with the unique ability to determine and display their wearer’s mood. How the concept wearable actually worked and why anyone might possibly want such a thing weren’t entirely clear, but hey, look, a happy face.
It’s not likely to get as much notice as the new Project Tango handset Lenovo showed off at today’s press event, but the company’s latest take on connected sneakers does appear to be a fair bit more subdued than the product it showed off last year.
We’re still awaiting specifics, but the smart shoes seem to have the sort of fitness data collection you would expect from such a wearable, tracking users’ distance, calories, etc. There’s also some gaming functionality built into the product – motion tracking, perhaps? – along with LEDs embedded into the soles.
Lenovo CEO Yuanqing Yang presented a pair to Intel CEO Brian Krzanich, “so [he] can walk in the cloud.” [Pause for laughter.]
Hiri is the latest startup trying to fix email. Specifically, the Dublin-based company is targeting workplace email with an array of features that aim to nudge users to change their email behaviour for the better. For it isn’t email that is necessarily broken but the way we all use and abuse it.
Starting with the premise that thoughtless and un-targeted emails fill a very high percentage of your work email inbox, Hiri’s headline feature is the ability for recipients to rate each email they receive, which serves as the basis for your own email score or email analytics.
You’re given a weekly score based on feedback received relating to clarity, brevity, and tone, and the software’s rating of your overall email behaviour.
The idea, the startup’s CEO and co-founder Kevin Kavanagh told me during a call last week, is to get employees thinking about how they currently use email and to begin to change their behaviour for the better. The weekly score also ensures employees don’t return to bad email habits.
Citing research from the UK’s Loughborough University that found the average employee spends 2.5 hours a day on their work email and checks it 96 times a day, which equates to roughly every five minutes of the working day, Kavanagh says that most employees would benefit greatly from checking email less.
That’s because not only is email a time sink in itself, but the time it takes to recover from having your flow interrupted also adds up. This is probably most extreme for tasks where intense concentration and getting ‘in the zone’ is paramount, such as coding.
Hiri’s second and blunter tool prevents you checking email too often. You are made to wait 30 minutes between inbox visits.
However, my favourite feature of Hiri is the way it asks you to explicitly separate emails you send into two groups. Ones that are just an FYI, and ones that require further action. It’s a very simple idea but one that I think could be incredibly effective.
An unspoken rule of the workplace is that people routinely loop colleagues and superiors into emails so that if the shit hits the fan they can always share or evade responsibility. That’s created an explosion in email, making it easy to miss those that are task-related. Hiri’s simple categorisation feature, which actually renames the CC field, aims to fix this.
Emails you send with Hiri can be labeled as requiring an action (which automatically creates a task for the main recipient(s)), as a question, or as an FYI.
You can also drag and drop any email you receive into your task list. And when composing an email you are asked to write a subject line last. The thinking here is that writing a subject line is easier and makes more sense after you’ve actually written what you wanted to say in the body of the email.
Of course, the startup’s biggest challenge is going to be penetrating the enterprise, since Hiri needs to be installed right across the workplace for it to be truly useful. That’s something Kavanagh concedes but says the company is tackling on three fronts.
Firstly, Hiri is essentially an email client, compatible with MS Exchange and MS Office 365. In other words, it doesn’t replace a company’s existing email infrastructure.
Secondly, the startup has developed a neat Outlook plugin that measures the amount of time that employees spend on email. The idea is that a company can install this before adopting Hiri to see how bad the problem is first. The plugin also captures the number of times that employees switch from other applications to check their mail.
Lastly, the modestly funded Hiri has some decent backers who no doubt can help open doors in the enterprise. They include a Global Director of Facebook and an EMEA Director of LinkedIn. And most recently, the startup picked up a $1 million seed round from Delta Partners, ACT Venture Capital, and Enterprise Ireland. This brings total funding to $1.6 million.
Back in October 2015, I wrote a blog post discussing our investments regarding FreeBSD running on Hyper-V as a virtual machine. We have done a tremendous amount of work over the past couple of years to make FreeBSD a 1st class VM guest on Hyper-V, enabling performant networking and storage capabilities that, for the first time, made it possible to run production FreeBSD workloads in Hyper-V environments. Completing this work made it possible for Microsoft to declare official support for FreeBSD as a guest on Hyper-V, meaning customers could call Microsoft Support if needed.
One of our primary reasons for making these investments in FreeBSD on Hyper-V was to enable FreeBSD VMs to run in Azure, as Hyper-V is the virtualization platform for Azure. You may be wondering, “Why is it so important for FreeBSD to run in Azure?” Many top-tier virtual appliance vendors base their products on the FreeBSD operating system. Over the past 2 years, we’ve worked closely with Citrix Systems, Array Networks, Stormshield, Gemalto and Netgate to bring their virtual appliances to the Azure Marketplace, and we’re continuing to work with a long list of others for future offerings. However, if you wanted to run your own FreeBSD image in Azure, your only option so far was to bring a custom image from outside of Azure.
Today, I’m excited to announce the availability of FreeBSD 10.3 as a ready-made VM image available directly from the Azure Marketplace. This means that not only can you quickly bring up a FreeBSD VM in Azure, but also that in the event you need technical support, Microsoft support engineers can assist.
Here’s how easy it is to get up and going through the Azure portal. Simply click +New in the left pane (or the marketplace tile on your dashboard), type “FreeBSD 10.3” in the search text box, and you’re there.
As the above screenshot illustrates, Microsoft is the publisher of the FreeBSD image in the marketplace rather than the FreeBSD Foundation. The FreeBSD Foundation is supported by donations from the FreeBSD community, including companies that build their solutions on FreeBSD. They are not a solution provider or an ISV with a support organization, but rather rely on a very active community that supports one another. In order to ensure our customers have an enterprise SLA for their FreeBSD VMs running in Azure, we took on the work of building, testing, releasing and maintaining the image in order to remove that burden from the Foundation. We will continue to partner closely with the Foundation as we make further investments in FreeBSD on Hyper-V and in Azure.
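If you prefer scripting to clicking through the portal, you can also locate the image with the Azure PowerShell module. The snippet below is only a sketch: the region is arbitrary, and the publisher and offer names are assumptions to verify against what the cmdlets actually return, not values taken from this announcement.

# Log in and pick a region to query (any region will do)
Login-AzureRmAccount
$location = "West US"

# List image publishers and look for the Open Source Technology Center publisher
# (the "*OSTC*" filter and the "MicrosoftOSTC"/"FreeBSD" names below are assumptions)
Get-AzureRmVMImagePublisher -Location $location | Where-Object { $_.PublisherName -like "*OSTC*" }

# Once you have confirmed the publisher, list its offers and the FreeBSD SKUs
Get-AzureRmVMImageOffer -Location $location -PublisherName "MicrosoftOSTC"
Get-AzureRmVMImageSku -Location $location -PublisherName "MicrosoftOSTC" -Offer "FreeBSD"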
What’s Different About the FreeBSD 10.3 Image from Microsoft?
The majority of the investments we make at the kernel level to enable network and storage performance were up-streamed into the FreeBSD 10.3 release, so anyone who downloads a FreeBSD 10.3 image from the FreeBSD Foundation will get those investments from Microsoft built in to the OS. There are some exceptions where we included some important fixes that weren’t complete in time to make the FreeBSD 10.3 release – you can get the details of those additional commits here.
In addition, we have added the Azure VM Guest Agent, which is responsible for communication between the FreeBSD VM and the Azure Fabric for operations such as provisioning the VM on first use (user name, password, hostname, etc) as well as enabling functionality for selective VM Extensions.
What About Older Versions of FreeBSD?
While official support for FreeBSD on Hyper-V starts at 10.3, we do provide selective ports of some drivers all the way back to 8.4. The FreeBSD on Hyper-V TechNet article lists the feature support we have on older versions. Having said that, it’s definitely possible to bring your own FreeBSD VM image from an older version, with the provided ports and the Azure VM Agent installed, into Azure for your use; however, your mileage may vary in terms of performance and stability. For example, our measured networking throughput on a 10Gb network on FreeBSD 10.1 was 2Gbps. With 10.3, we’ve been able to achieve over 9Gbps in testing.
What’s Next?
As for future versions of FreeBSD, our intent is to stay current and make available the latest releases shortly after they are released by the FreeBSD Release Engineering team. We are continuing to make investments to further tune performance on storage, as well as adding new Hyper-V features – stay tuned for more information on this!
Microsoft is getting closer to releasing its big Windows 10 update this summer, but Windows Insider beta testers with the latest build have a new element to try out today. That’s because LastPass has officially released its first browser extension for Edge (after it leaked out temporarily a week ago), saying it’s the first password manager extension on the platform. Support for extensions is necessary if Edge will try to snag users from the Chrome or Firefox browsers they’re used to, and after AdBlock, password management is a big one.
Using a password manager makes it easy to create and access unique passwords for all of your accounts and avoid a Zuckerberg-type situation or password reset emails from Netflix. According to LastPass, the Edge extension should have all the usual features users expect, with the ability to autofill login information, generate random passwords, and check their vault for duplicates. If you’re not in the test program, you’ll have to wait a little longer for extensions to arrive on Edge, but password managers like LastPass, 1Password and more are widely available across other browsers and mobile platforms if you want to try them out now.
Why is your USB drive so slow? If your drive is formatted in FAT32 or exFAT (the latter of which can handle larger capacity drives), you have your answer.
USB drive vendors tend to format their drives at the factory with FAT32/exFAT because every device that can read USB mass storage can read and write to these well-known formats. That includes, but is not limited to: Windows PCs, cell phones, car radios, Linux, and OS X/iOS devices.
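If you decide a different file system suits your drive better, Windows can reformat it straight from PowerShell. This is only a sketch and the drive letter is a placeholder: reformatting wipes the drive, so confirm the letter first.

# Find the removable drive's letter before doing anything destructive
Get-Volume | Where-Object { $_.DriveType -eq "Removable" }

# Reformat the stick as NTFS -- replace E with your drive letter; this erases all data on it
Format-Volume -DriveLetter E -FileSystem NTFS -NewFileSystemLabel "USBDRIVE"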
Demand for security information and event management (SIEM) technology is high, but that doesn’t mean businesses are running these products and services smoothly.
Hewlett Packard Enterprise on Tuesday stepped up its efforts to develop a brand-new computer architecture by inviting open-source developers to collaborate on the futuristic device it calls "The Machine."
Originally announced in 2014, The Machine promises a number of radical innovations, including a core design focus on memory rather than processors. It will also use light instead of electricity to connect memory and processing power efficiently, HPE says.
A finished product won’t be ready for years still, but HPE wants to get open-source developers involved early in making software for it. Toward that end, it has released four developer tools.
It’s been a tumultuous past year for Hewlett Packard Enterprise but this week the company is unveiling a series of new offerings intended to solidify its standing in the private and hybrid cloud computing market.
Key themes for HPE’s new cloud products are bundling software to make it more easily consumable, packaging that software with optional hardware to create an all-in-one cloud and being able to manage not only new, cloud-native workloads and technologies – such as application containers – but legacy and traditional workloads too.
Box just inked one of its biggest deals in Asia so far as it focuses on international growth. Fujitsu, one of Japan’s largest IT services providers, announced today that it has struck a strategic partnership with the cloud-storage company and will integrate Box into its enterprise software.
Fujitsu will first start using Box to store and manage files sent on communication tools used by its 160,000 employees around the world. The company says the internal use of Box’s services will help it develop new enterprise software, including customer-relationship and enterprise-content management solutions, that it plans to release by March 2017 and market throughout Asia.
Fujitsu will also integrate Box into MetaArc, its new cloud platform, next year. MetaArc includes third-party services (like Box storage), as well as infrastructure and application hosting services. Customer data uploaded to Box will be stored at Fujitsu data centers in Japan. This will help Box appeal to businesses that don’t want to store their data overseas and complements the company’s new plan to offer cloud data centers, called Box Zones, in Ireland, Germany, Singapore, and Japan.
Other partnerships Box has struck to expand internationally include an agreement with IBM that will let Box store data in IBM’s cloud data centers, which are located in 16 countries.
Legal professionals are by their nature a skeptical and cautious lot, but the sharp rise in cloud-based applications being used by enterprises and law firms, as well as recent high-profile law firm security breaches, has left many legal professionals wary of entering cloud engagements.
“The buck stops with the lawyer,” says Michael R. Overly, a partner and intellectual property lawyer focusing on technology at Foley & Lardner LLP in Los Angeles. “You’re trusting the [cloud provider] with how they manage security,” and yet their contract language excuses them from almost all responsibility if a security or confidentiality breach occurs, he says. “One can’t simply go to clients or the state bar association and say the third party caused a breach, so it’s really not our responsibility.”
In this article, I’ll explain how you can customize network routing for Azure virtual machines on, from, and to a virtual network.
How Routing Works by Default
In a normal deployment of virtual machines, Azure uses a number of system routes to direct network traffic between virtual machines, on-premises networks, and the Internet. The following situations are managed by these system routes:
Traffic between VMs in the same subnet.
Traffic between VMs in different subnets in the same virtual network.
Traffic from VMs to the Internet.
Traffic between virtual machines over a VNet-to-VNet VPN.
Traffic from virtual machines to your on-premises network via a gateway (site-to-site VPN or ExpressRoute).
Every subnet in a virtual network is associated with a route table that enables the flow of data. This table can contain three kinds of system route rules:
Local VNet Rule: Every subnet has this rule, which informs virtual machines that there is no hop (gateway) required to reach machines in the same virtual network.
On-Premises Rule: A gateway enables connectivity to networks outside of a virtual network, such as other virtual networks or your on-premises network(s). Local networks define those destinations, so consider local networks as your method for defining this kind of rule.
Internet Rule: All traffic that is destined for the Internet is managed by this rule by default.
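If you want to see which routes actually apply to a given virtual machine, the ARM PowerShell module can list the effective routes on its network interface. A minimal sketch follows; the resource group and NIC names are placeholders, and the VM behind the NIC needs to be running for the query to succeed.

Get-AzureRmEffectiveRouteTable -ResourceGroupName "WebRG" -NetworkInterfaceName "web01-nic"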
The Need to Customize Routing
A lot of deployments never require routing customization, but there are scenarios where you might want to adjust the default flow of traffic. The following image depicts a simple design where a virtual network has two subnets. One of these subnets is the frontend, where web services will run in virtual machines. The second subnet is the backend, where more sensitive application and data services will run in virtual machines.
Those who have deployed or secured multi-tier web services will realize that there’s no added security in this design. By default, all traffic can flow from the web servers in the frontend to the application and data services in the backend via the default local VNet system rule; there is no filtering.
Default system rules with a multi-tier web application in Azure (Image Credit: Microsoft)
One approach might be to enable filtering using Network Security Groups in the Azure fabric. However, Network Security Groups are just a classic, basic firewall port-filtering system; there is no application-layer inspection, filtering, load balancing, and so on, which we expect in a modern design.
Azure does allow for the use of third-party network appliances, which are available via the Azure Marketplace. Some of these appliances are familiar brands and many are not. You can deploy an appliance on the virtual network, between the frontend and backend subnets using multiple virtual network cards, but this accomplishes nothing without overriding the system route and forcing traffic to route via the appliance.
The following diagram shows how a user defined route can be created in an Azure routing table to redirect traffic between the frontend and backend subnets via the appliance. Now the appliance can see all traffic and control it as required.
Note that you’d probably still use network security groups to enforce this routing using the Azure fabric.
A user defined route forces traffic through an Azure virtual appliance (Image Credit: Microsoft)
Another scenario is forced tunnelling, where you might allow the frontend subnet to route to the Internet as normal, but you require the backend subnet(s) to route via an on-premises network.
User defined routes forcing traffic via the on-premises network (Image Credit: Microsoft)
User Defined Routes
You can create a route table and associate it with a subnet in a virtual network. You can then create user defined routes based on three criteria:
The destination CIDR: The address prefix, such as 10.10.1.0/24, that represents the destination network whose routing you want to manage.
Nexthop type: This tells Azure what kind of device will be the next hop for traffic matching this rule.
Nexthop value: This is the IP address of the device specified by the Nexthop type, for example the virtual appliance’s internal IP address.
Note that a route table can be associated with multiple subnets, but a subnet can be associated with only one route table.
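To make this concrete, here is a rough sketch of how the frontend/backend appliance scenario above might be wired up with the ARM PowerShell module. Every name, address prefix, and IP address below is a made-up example rather than a value from this article, so substitute your own.

# Create a route table (resource group, location and names are example values)
$rt = New-AzureRmRouteTable -ResourceGroupName "WebRG" -Location "North Europe" -Name "FrontendRoutes"

# Add a user defined route: traffic bound for the backend subnet is sent to the appliance
Add-AzureRmRouteConfig -RouteTable $rt -Name "ToBackendViaAppliance" -AddressPrefix "10.10.2.0/24" -NextHopType VirtualAppliance -NextHopIpAddress "10.10.3.4"
Set-AzureRmRouteTable -RouteTable $rt

# Associate the route table with the frontend subnet and commit the change
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "WebRG" -Name "WebVNet"
Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Frontend" -AddressPrefix "10.10.1.0/24" -RouteTable $rt
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet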
Once you add a route table to a subnet, routing is based on a combination of system routes and user defined routes. If you add ExpressRoute to the mix, then BGP routes will also be propagated to Azure. The following order is used to prioritise routes if more than one route is found for traffic:
User-defined route
BGP route (if ExpressRoute is used)
System route
Azure makes routing pretty simple. Now if only Azure could end the decades-old cross-Atlantic debate on the correct pronunciation of route and routing (rowt and rowting in the USA, and root and rooting in Europe).
You have a ton of different options for single board computers that can run the likes of Linux or Android. While a board like the Raspberry Pi might be the most popular, it’s certainly not the only one. Over on HackerBoards, they have a massive chart comparing all 81 different boards.
It’s been a long wait, but our latest single board computer for review is finally here! The BBC micro:bit, given free to every seventh-grade British child, has landed at Hackaday courtesy of a friend in the world of education. It’s been a year of false starts and delays for the project, but schools started receiving shipments just before the Easter holidays, pupils should begin lessons with them any time now, and you might even be able to buy one for yourself by the time this article goes to press.
It’s a rather odd proposition, to give an ARM based single board computer to coder-newbie children in the hope that they might learn something about how computers work, after all if you are used to other similar boards you might expect the learning curve involved to be rather steep. But the aim has been to position it as more of a toy than the kind of development board we might be used to, so it bears some investigation to see how much of a success that has been.
Opening the package, the micro:bit kit is rather minimalist: the board itself, a short USB lead, a battery box and a pair of AAA cells, and an instruction leaflet. Everything is child-sized; the micro:bit is a curved-corner PCB about 50mm by 40mm. The top of the board has a 5 by 5 square LED matrix and a pair of tactile switches, while the bottom has the surface-mount processor and other components, the micro-USB and power connectors, and a reset button. Along the bottom edge of the board is a multi-way card-edge connector for the I/O lines with an ENIG finish. On the card-edge connector several contacts are brought out to wide pads for crocodile clips, with through-plated holes to take 4mm banana plugs; these are the ground and 3V power lines, and 3 of the I/O lines.
It is obvious when compared to other single board computers that this one has been designed with the pocket of a 12-year-old in mind. It’s a robust 1.6mm thick board that is devoid of pins and spiky connectors, and on which care has obviously been taken to ensure as low a profile as possible.
In hardware terms it has an ARM Cortex M0 processor from Nordic Semiconductor, a compass, accelerometer, Bluetooth Low Energy and USB as well as the previously mentioned switches, LEDs, and GPIOs.
To use the device, you have the choice of connecting it to your computer via USB, or to your phone or tablet via Bluetooth Low Energy. Sadly none of our devices support BLE so for this review we’ll be taking the former approach.
All programming is performed through a selection of web-based environments, with code editing and compilation performed online and the resulting binary file arriving as a download before being placed on the micro:bit by the user through the filesystem. Since the micro:bit is also an mbed under the hood we’d expect it to be programmable using the mbed toolchain, however that is beyond the scope of this review.
The development environments are all accessible through the micro:bit website, on which no login is required for writing code. On clicking the “Create code” button you are presented with a choice of four, Code Kingdoms JavaScript, Microsoft Block Editor, Microsoft Touch Develop, and Python. The micro:bit leaflet says you need a PC running Windows 7 or later or a Mac running OS X 10.6 or later, however we encountered no problems using Chromium on a Linux desktop. Each of the different environments has its own flavour and audience, so it’s worth considering them all in turn.
First up is Code Kingdoms JavaScript. This is not what you might expect as a JavaScript editor; instead it’s a drag-and-drop visual coding environment which creates JavaScript blocks. On the left are a series of menus containing the available code blocks, in the middle is the coding area, and on the right a software micro:bit emulator. At the bottom on the left are buttons to run your code in the emulator, save it with your other scripts, or compile and download it to be placed on the micro:bit.
In use, the Code Kingdoms editor is straightforward and intuitive; the code for a simple compass you can see in our screenshot was very quick to assemble as a first effort. Unfortunately, though, in our browser at least it was extremely slow, at times almost to the point of being unusable. In particular, when you wish to remove a code block it starts up an animation of its waste bin opening, which slows the browser to a crawl. It is not a good sign when you load a web page and hear your processor fan spin up.
Following the Code Kingdoms editor is Microsoft’s Block Editor. This is a drag-and-drop visual editor in the same vein as the Code Kingdoms editor, except that there is no pretence of building a more traditional coding language and it is a much faster and smoother experience. The interface is broadly similar in layout to the Code Kingdoms editor, except for the compile and run commands, which are at the top, above the coding window.
In our screenshot you’ll see a very simple environmental monitor designed to display readings from the micro:bit’s various sensors. Yet again this was a simple and intuitive piece of software to assemble for someone using the environment for the first time.
The third environment is another one from Microsoft, their Touch Develop editor. This is different from the other editors in that it is designed especially for use in touch environments on tablets and phones, so we tested it on an Android phone.
While the Touch Develop editor follows the same idea as the previous two of building code by selecting blocks from menus, it creates something a lot closer to text code, and requires the user to manually enter, for example, function parameters. We found its help system to be a little difficult on this front; it’s doubtless a useful editor if you know its intricacies, but there is quite a learning curve for a first-time user.
The Touch Develop team have made as good a job of putting a development environment onto a phone screen as they could, and it is very usable; however, due to the limited screen space it is still a little awkward and crowded. With luck this should be less of an issue for tablet owners.
It is worth pointing out that this editor can be stored as an offline bookmark allowing it to be used without an Internet connection, however it is not clear how any code written in this way might be compiled.
The final editor choice for the micro:bit is Python, in fact a micro:bit build of MicroPython. This editor lacks the software micro:bit emulator, but is much more like the kind of software environment that Hackaday readers will be used to. The main window is a straight text editor ready to type your Python into, and there is no menu of predefined code blocks. Instead there is a comprehensive introduction, tutorial, and documentation of the various micro:bit Python libraries, and once you are armed with those you can step right in and start writing code.
In use if you are happy with Python it is very straightforward. If your code generates any errors they are displayed scrolling across the micro:bit’s LED matrix which can be rather tedious, however at least the errors we generated were informative and led us straight to the points in our compass code which had gone wrong.
Looking at the libraries available in this editor it becomes clear that Python is the most powerful way to control your micro:bit. As well as the simple functions available in the other editors it offers libraries for I2C, SPI, UART, Neopixels and more. It’s immediately obvious that this is where the micro:bit’s “Wow!” hacks are most likely to be created.
Having looked at all the editors, our choices would be Python as the most powerful coding environment for experienced coders, and the Microsoft Block editor as the most useful drag-and-drop environment for beginners. The Code Kingdoms editor is nice but glacially slow, and the Touch Develop editor is a bit fiddly. It’s worth mentioning that all the editors have an option to save code locally, this produces an LZMA-compressed file with raw code in a JSON structure.
Of course, though some of us may benefit from it, this board is not made for Hackaday readers but for children. If it gets the recipe right, in a decade’s time it will be cited by a generation of new graduates as the machine that got them into software, but has it hit the mark? Since the children in question are only now receiving their first lessons it’s a bit early to tell, but the teacher who lent us this micro:bit for the review tells us there are only two minor gripes. Not having an on-off switch, they go through batteries at a phenomenal rate, and since their failed programs show no LEDs, the children think they’ve killed it when their software doesn’t work. The first the kids may well fix themselves by learning to unplug the battery packs, and perhaps the micro:bit people can fix the second with a software update. If these are the worst things that can be said about it, though, there can’t be too much wrong with it.
The Start Menu was the most anticipated feature to return in Windows 10. The Windows 10 Start Menu is very adaptive and customizable, but what if you want to fix a particular Start Menu layout for yourself as well as for other users of the computer? This post discusses how to export, import, and fix a particular Start Menu layout on Windows 10. Fixing a layout comes with a lot of benefits: it ensures uniformity and can also prevent anyone from distorting your fixed Start Menu layout.
Export a Start Menu Layout
Open the ‘System32’ folder located in the ‘Windows’ directory. Now click on ‘File’, then click on ‘Open Windows PowerShell as Administrator’.
Now you need to run the following commands for exporting the Start Menu layout:
Export-StartLayout -Path <path><file name>.xml
Example: Export-StartLayout -Path C:\layout.xml
The layout will be exported to an XML file and will be saved at the specified path.
We will use this file again while importing this start menu layout, so you may preserve the file for future use.
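Assuming you exported to C:\layout.xml as in the example above, a quick sanity check is to confirm the file exists and glance at the start of the XML before moving on:

Test-Path C:\layout.xml
Get-Content C:\layout.xml -TotalCount 5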
Import a Start Menu Layout
We will import a Start Menu layout using the Group Policy Editor (gpedit). After importing the layout it will be fixed; that is, you will not be able to change the layout by moving the tiles around. But you can easily undo the change and make the Start Menu customizable again by following the steps given below.
Press ‘Win + R’ on your keyboard, then type ‘gpedit.msc’ and hit Enter.
Once the Group Policy Editor is up and running, navigate to ‘User Configuration’ then to ‘Administrative Templates’ and then to ‘Start Menu and Taskbar’.
Now locate ‘Start Layout’ in the right pane and open the setting.
Click the ‘Enabled’ radio button and then, in the Start Layout File textbox, type in the path to the file that we exported earlier. (C:\layout.xml)
Click ‘Apply’ and close everything. Sign Out of your account and then Sign In again.
Now you will not be able to edit start menu layout as it will be fixed and will not allow any changes. You can make the start menu editable once again by disabling the ‘Start Layout’ setting that we’ve enabled in Step 4.
To apply these changes to all users on a computer, you need to repeat all the steps but in Step 2, navigate to ‘Computer Configuration’ instead of ‘User Configuration’.
If you want to update the fixed Start Menu layout, you just need to update the XML file that we exported earlier. You can replace it with another XML file but make sure the file name and path remains the same.
If you have any questions or if you are not able to understand any of the steps, feel free to comment down below.
Yorkshire based e-tailer Electric Radiators Direct have announced the release of the all new Haverland Smartwave – an intelligent electric WiFi radiator range which uses “advanced motion sensor technology” in order to learn your weekly routine to control your heating and conserve energy. We asked them how the system learns and operates and they told us…
The system learns using an infrared sensor. In the first week, it monitors activity in the room to learn your “schedule”. The next week, it will switch the radiators on half an hour before the times you arrived in the room last week. So, if you walked into your living room at 8am on Monday morning last week, this week the radiator will switch on at 7.30am to get the room nice and warm for when you come down for breakfast. The infrared sensor keeps learning as long as the radiator is in learning mode. That means each week it will switch the heating on in accordance with your movements in the previous week.
The SmartWave includes several features designed to account for the real-life unpredictability of our routines. If you come into your room earlier than expected – for instance you’ve got to catch an early train to work in a different office to normal – the infrared sensor will alert the radiator to your presence and it will switch on immediately, so you won’t be shivering over your cornflakes for long. If you don’t enter the room when expected – for instance you’re staying away for work for a few days – the radiators will sense your absence using the infrared sensor, and will switch down to economy mode after a set amount of time. Then, if you still haven’t returned after a period of hours, the radiators will switch to anti-freeze mode – switched off, unless the temperature drops so low that they reactivate to prevent your pipes freezing over. You can toggle the amount of time the radiator takes to switch itself off by setting the radiator to prioritise either “comfort” or “economy”.
Another way to adapt the SmartWave to your real-life movements is to switch between its different modes. The mode used in the above examples is the “self-learning mode”. Once you have used self-learning mode to create a programme, you can switch to manual mode to use the same heating programme every week. If you purchase the SmartWave with a SmartBox, you can tweak this heating programme over the internet using an easy to use heating app. This is the best way to get your preferred comfort/energy saving balance.
Another option is to use the SmartWave on sensor mode. When used on sensor mode, the SmartWave operates whenever it senses your movement – and switches off whenever you leave the room. This means that rooms won’t get heated up in advance of your return, but you won’t waste any energy. A useful option if you’re going away, or are not sure how often you’ll be in the house on a busy day.
The radiators can be controlled over the internet if purchased with a SmartBox. The SmartBox plugs into your router and then allows you to switch radiators on or off, tweak the programming and adjust temperatures over the net using our specially made heating app. The heating app allows you to control all your radiators from one place – you just have to click between the radiators to tweak their individual programming / temperatures. You can even add additional properties to your app and control its radiators from the same place – useful if, for instance, you own multiple holiday lettings. At present this app has not been designed to be integrated with other home automation system, but it does allow you to control all your SmartWaves from one place.
The Haverland self-learning electric radiators start at around £260 for a 450W unit and go up to £440 for a 1,700W heater.
One of the common uses for a Raspberry Pi is a low-cost information display, powering something like a magic mirror or an animated GIF photo frame. FullPageOS is a Raspberry Pi operating system that makes that process a little simpler.
FullPageOS is set up to boot into a full-screen Chromium window on boot. This means if you’re using your Pi to power an information display, you won’t need to go through the process of disabling screen savers, editing display size, and forcing full-screen mode on your own. All you need to do is install FullPageOS on an SD card, then edit a TXT file to include your Wi-Fi network info and the URL you want it to load up. This is a pretty niche little distribution for the Pi, but it should make those dashboards and other HUDs much quicker to set up.
If you are working at a medium or large company, then you are probably a member of several teams. Different companies divide teams differently: sometimes by product, sometimes by customer or discipline. Frequently, teams do not set up shared spaces or use team communication tools. Too often email becomes the communication tool, and sending documents back and forth is considered collaboration. Microsoft has been building new tools for teams to work better together. When planning training for Office 365, consider focusing on teamwork instead of individual productivity.
Tools like Yammer, Office 365 Groups, OneDrive for Business, OneNote, and Office 2016 have all been built with a focus on teamwork. Sharing photos, videos, status updates, or other information typically gets done with email. Unfortunately, this means email becomes a file server and is overloaded with big attachments. IT staff will set a policy that will delete old emails, and this means critical information can get lost and work will need to be redone. Avoid losing work to expiring emails and stop bogging down email servers; instead use OneDrive for Business and OneNote to hold your important information.
Yammer Groups via Microsoft
Many companies believe they are underutilizing the Office 365 subscription they are paying for, but they do not know how to get their employees to change their habits. First, focus training on the team tools people will be using every day instead of mentioning supporting technology like OneDrive for Business or SharePoint. Most people do not care about the names or technical details concerning their work tools; they are just interested in getting their job done. While SharePoint is an amazing technology, it can be confusing to explain and cumbersome to train people on.
Luckily there are products like Office 365 Groups and Delve, which make finding, saving, and creating content on SharePoint and OneDrive for Business easy. Yes, some training on OneDrive for Business and SharePoint is important, but it should not be the focus or the lead. Instead, spend time on workflows that drive teamwork while reducing email, such as Yammer Groups, or on how to share documents so you can co-author them simultaneously.
These new cloud-connected tools enable truly up-to-date project planning and communication. Work tends to follow the path of least resistance; rarely do people take the more difficult path to end up at the same result. There is value in teamwork, however, and working in a team may call for different tools than working alone. This means that although people think of Office as a productivity tool for individuals, it becomes a tool for teams when OneDrive for Business or SharePoint is facilitating the sharing in the background.
New tools can frequently be so daunting that people do not know the first question to ask. When the IT staff sends out an email wondering why people are not using SharePoint, it may turn into an Emperor’s New Clothes situation. To drive adoption of new tools, let the focus be on the new timesaving workflows and teamwork instead of the tools themselves. Showing teams how to work together with the cloud will be more effective than a mass email telling people to use SharePoint.
The Internet of Things is no good without a way to act on the data it generates. A new partnership between two of the biggest IoT players promises to put smart collection and advanced analysis of data right where it’s needed.
IBM and Cisco Systems have worked out how to run components of IBM’s Watson IoT analytics on Cisco edge devices. This will bring more intelligence closer to where the action is, helping enterprises run things like factories and oil rigs more efficiently.
More than 35 teams competed in the first OpenStack app hackathon, in Taipei in March 2016. The winning team was flown to the Austin Summit and featured in a keynote presentation.
Soon you’ll have even more options to log onto Windows 10 quickly and securely. Microsoft just announced that it’s opening up the Windows Hello Companion Device Framework to other companies, which means their devices will let you hop into Windows just as easily as Microsoft’s Band. On stage at Computex today, a Microsoft representative used the Nymi band, an authentication wearable for the workplace, to log into her computer. You can also expect to see things like ID cards, phones and potentially other wearables working together with Windows Hello.
We’ve already seen Windows Hello-compatible facial recognition cameras from Tobii, but today’s news goes even further. Microsoft says the Windows Hello framework supports enterprise-grade two-factor authentication, so perhaps it’s something your employer will eventually support.
In this post I’ll show how you can install and upgrade the Azure PowerShell modules. I’ll also show you how you can log into Azure and select a subscription to work with.
Service Management in Classic Azure
If you are still working with classic Azure, then these are the instructions to follow. There are two ways to install the Azure PowerShell module. The first is to use the Web Platform Installer. This GUI-based tool will download and install the necessary components. Yes, it gives you a UI, but it is a slower method for installing the module.
Installing the Azure PowerShell module using the Web Platform Installer (Image Credit: Aidan Finn)
The second method is to use PowerShell. You can run this quick one-liner, using an elevated PowerShell prompt, to download and install the latest version of the PowerShell module:
Install-Module Azure
Microsoft updates Azure quite frequently, and this has an impact on PowerShell. You should check for module updates regularly, and if you notice strange behaviour, checking for an update is a good first step.
The Install-Module Azure cmdlet is the quickest way to download the latest version of the module.
Tip: Microsoft notes that sometimes a PC will report that the Azure PowerShell cmdlets cannot be found after a new installation. In my experience, rebooting the PC after a new installation refreshes the module search paths and resolves this.
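To see whether a newer build of the module has been published before reinstalling, the PowerShellGet cmdlets can compare the local and gallery versions. A minimal sketch, assuming PowerShell 5 or a machine with PowerShellGet installed:

# What is installed locally
Get-Module -ListAvailable -Name Azure | Select-Object Name, Version

# What is currently published in the PowerShell Gallery
Find-Module -Name Azure | Select-Object Name, Version

# Pull down the newer version if there is one
Update-Module -Name Azure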
Once you are ready, launch the Azure console. You can log into Azure V1 or Service Management using Add-AzureAccount. A pop-up window will appear; sign in to Azure using your credentials.
If you have access to more than one Azure subscription (you should check just in case), then run the following cmdlet. This is also a good way to verify that:
You are signed in OK; sometimes your cached credentials get messed up and you have to clean things up with Remove-AzureAccount.
That you have access to the Azure cmdlets.
Get-AzureSubscription
Note the Current value of the returned subscription(s); one subscription will have this set as “True”; this is the subscription that your cmdlets are currently targeting. If you need to switch Azure subscriptions, then identify the subscription ID of the subscription and run the following cmdlet:
Select-AzureSubscription -SubscriptionId <ID>
I prefer to use the SubscriptionID flag instead of the SubscriptionName because subscriptions can have identical names, but they cannot have identical GUIDs.
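If you only know the subscription’s display name, a quick way to look up its ID before switching is something like the following; the subscription name here is just a placeholder.

$sub = Get-AzureSubscription | Where-Object { $_.SubscriptionName -eq "My Production Subscription" }
Select-AzureSubscription -SubscriptionId $sub.SubscriptionId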
Now you can start working with your subscription using Azure V1 cmdlets.
Azure Resource Manager (ARM)
Managing Azure via ARM requires a different ARM module. If you do an online search, you will find lots of outdated instructions on how to install this module and log in. Microsoft simplified the process after general availability, and it’s now similar to what you do with Azure V1.
You can install the ARM module using the following from an elevated PowerShell prompt. This is also how you will check for updates and upgrade the module:
Install-Module AzureRM
Installing the Azure ARM PowerShell module (Image Credit: Aidan Finn)
ARM and Service Management are separate engines so logging into one system doesn’t carry over to the other. Log in by running:
Login-AzureRmAccount
If you are familiar with Service Management cmdlets, then you can probably guess their ARM equivalent. You stick an RM after Azure. So Get-AzureSubscription becomes:
Get-AzureRmSubscription
And by using that method, Select-AzureSubscription becomes:
Select-AzureRmSubscription -SubscriptionId <ID>
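Putting those pieces together, a minimal ARM session looks something like this; the subscription ID below is a placeholder GUID, not a real value.

Login-AzureRmAccount
# Property names can vary slightly between module versions
Get-AzureRmSubscription | Select-Object SubscriptionName, SubscriptionId
Select-AzureRmSubscription -SubscriptionId "11111111-2222-3333-4444-555555555555"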
Using Azure Active Directory
If you are working with Azure AD, then you’ll have a little bit more work to do.