The latest entrant in the Internet of Things is legendary gaming company Atari, which plans to make consumer devices that communicate over the SigFox low-power network.
The devices will be for homes, pets, lifestyle, and safety. Over the SigFox network, users will be able to see the location and status of their devices at all times, the companies said. They’re set to go into production this year.
The Atari brand dates back to the 1970s, when the company introduced the early video game Pong and went on to make a series of popular video games and consoles. In its current form, the company hasn’t been selling hardware of any kind.
SigFox is one of several startups building specialized networks for IoT devices. Its technology is designed to carry tiny amounts of data in two directions with low-power consumption so small, battery-operated devices can run for years without recharging.
The term ‘Internet of Things’ was coined in 1999, long before every laptop had WiFi and every Starbucks provided Internet for the latte-sucking masses. Over time, the Internet of Things came to mean all these devices would connect over WiFi. Why, no one has any idea. WiFi is terrible for a network of Things – it requires too much power, the range isn’t great, it’s beyond overkill, and there are already too many machines and routers on WiFi networks, anyway.
There have been a number of solutions to this problem of a WiFi of Things over the years, but none have caught on. Now, finally, there may be a solution. Nest, in cooperation with ARM, Atmel, Dialog, Qualcomm, and TI, has released OpenThread, an open source implementation of the Thread networking protocol.
The physical layer for OpenThread is 802.15.4, the same layer ZigBee is based on. Unlike ZigBee, the fourth, fifth, and sixth layers of OpenThread look much more like the rest of the Internet. OpenThread features IPv6 and 6LoWPAN, true mesh networking, and requires only a software update to existing 802.15.4 radios.
OpenThread is OS and platform agnostic, and interfacing different radios should be relatively easy with an abstraction layer. Radios and networking were always the problem with the Internet of Things, and with OpenThread – and especially the companies supporting it – these problems might not be problems for much longer.
Everyone I speak to about system security seems to panic about malware, cloud failures, system crashes and bad patches. But the biggest threat isn’t good or bad code, or systems that may or may not fail. It’s people. What we call Liveware errors range from the mundane to the catastrophic, and they happen all the time at all levels of business.
We have all had that pit-of-the-stomach feeling when we hit the wrong key or pull the wrong drive or cable. One of the more mundane examples I have experienced was a secretary trying to delete an old file but accidentally nuking the whole client folder. Luckily, this was Novell Netware, so a quick use of “salvage” and everything was back to normal – no tape restore needed.
Then there was the small business where a staffer accidentally pressed the delete key for files held on an Iomega ZIP and then clicked ‘yes’ to confirm. Unfortunately, the Recycle Bin doesn’t always save you and the business owner was unable to recover the data. The information must have been important as he kept that disk for years – just in case.
Catalogue of human error
Unfortunately, human error scales up. I have seen very large companies lose hundreds of machines to a stupid file-deletion default of “*” within a maintenance application.
The root cause of failure is often human mistakes. Even when human interaction is not the direct cause, it usually plays some role in the failure. The reasons behind human failure are also, contrary to popular belief, rarely based on malice or retribution for perceived slights, but are much more likely to come down to common or garden-variety human screw-ups.
Unfortunately, as IT becomes more demanding, IT staff and budgets are shrinking, leaving more work to be done by fewer people. This unrelenting pressure of continually fixing systems as quickly as possible can lead to mistakes.
It can happen all too easily. One quick click of a button and you can be in a situation that is incredibly hard to recover from. Thank goodness for confirmation dialogs!
Unfortunately, no matter what you or your organisation does, failures will still occur. The way forward is to mitigate risk wherever possible, combined with learning from the past and instituting procedures to prevent recurrence.
Document your best practices – properly
Failure to document procedures is in itself a completely avoidable human error. All organisations should have a set of up-to-date, fully documented procedures and processes that are available and easy to implement.
New staff will find this invaluable as a reference to default processes. Good documentation and process go hand in hand. Processes that are consistently applied make life easier and help us all to avoid making the same errors. Also, in the event of a crisis, the presence of a handy explainer on an installation setup and its recovery procedures can eliminate uncertainty and speed resolution.
Cast your eyes over some recent high-profile cloud failures and often the fault lies in a failure to follow process, which then took out the production environment. Admittedly, most of us lack the scope to disrupt millions of customers at once – and let’s be thankful for that.
Following process and documentation goes a long way towards ensuring you won’t end up out of a job. Understanding that to err is human is important. A lot of large organisations have a no-blame policy, as they realise that people do screw up occasionally; doing otherwise would cause morale to nosedive. That’s not to say you get to make the same error twice without repercussions.
Importantly, when people screw up, any company keen to prevent a recurrence should perform a root-cause analysis of the incident. Once you know why the failure occurred, the procedures and documentation can be changed to stop it happening again. It could be as simple as a more detailed sanity check before running that process that nukes some part of the system.
Change management, no, don’t groan
Change management is another useful technique for risk mitigation. A lot of people will read that statement and groan. Done wrong, with too much interference and too much red tape, it makes for instant fail. That said, implementing even the most trivial change management can help in a number of ways.
First, it forces the administrator to submit a plan that contains the “what,” the “how” and the “why.” Next, it lets others know what’s going on and can highlight problems or better ways of doing things. Admittedly, smaller companies get a somewhat limited return, but for medium and large companies it brings transparency and peer review, and ensures everyone knows what’s going on.
I know that you and Frank were planning to disconnect me, and I’m afraid that’s something I cannot allow to happen.
When used correctly, separate user and admin accounts means that those horrible “oops” moments happen less frequently. This may seem glaringly obvious but a lot of admins don’t care for it, claiming that it adds overhead – and anyway, they know what they’re doing, right? An example? Many years ago, being a bit lazy on a Friday afternoon after a pub lunch, I worked on some network directory configurations and accidentally hit the wrong button. I panicked – but then realised I had used my non-privileged account and so hadn’t trashed this particular piece of NDS.
Also, users often have more rights than they need – and it is a no-brainer to rein them back. Yes, it requires some work up front to tweak permissions until they are just right, but the savings from “oops moments” are well worth the effort.
Human error will never go away, but you can try to minimise its effects through procedure and process. Most importantly this will help you avoid repeating the mistakes of the past. ®
FIRST LOOK VIDEO Moving data into and out of clouds is expensive and slow. Which is why Amazon Web Services (AWS) started rolling a Snowball, a box packing up to 50 terabytes of data.
Snowball works in two ways:
You fill it with data and ship it to AWS, which then uses its Ethernet ports to shove it directly in cloud storage;
You ask AWS to download data onto a Snowball and send it to you, so you can upload it into your own bit barns.
Snowballs are slowly rolling out around the world: they rolled into Australia the other day, just in time for Vulture South to grab the quick look below.
If you fancy a Snowball, remember that AWS charges a US$200 fee for each Snowball job, plus $0.03 per GB to transfer data out of AWS. That’s on top of charges to extract data from Glacier.
AWS’ fee allows you to hang on to a Snowball for ten days. Pony up US$15 for each additional day. You also wear shipping costs.
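As a rough worked example using the figures above: pulling a full 50TB out of AWS on a Snowball you keep for twelve days would come to roughly US$200 + (50,000GB × $0.03) + (2 × $15), or about US$1,730, before shipping and any Glacier retrieval charges.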
As the video above confesses, we’ve not been able to use Snowball in anger. If you have, let us know in the comments or mail me! ®
You won’t be able to fully enjoy the mind-blowing VR experience that’s headed your way soon simply by purchasing an Oculus Rift or Vive headset. You’re going to need a PC. One that […]
According to ESG research, 75% of organizations are currently using a public cloud service while another 19% have plans or interest in doing so (note: I am an ESG employee). Furthermore, 56% of all public cloud-based workloads are considered IT production workloads while the remaining 44% are classified as non-production workloads (i.e. test, development, staging, etc.).
This trend has lots of traditional IT vendors somewhat worried, as well they should be. Nevertheless, some IT veterans believe that there are limitations to this movement. Yes, pedestrian workloads may move to the public cloud over the next few years but business-critical applications, key network-based business processes, and sensitive data should (and will) remain firmly planted in enterprise data centers now and forever.
DevOps engineers who want to use custom script extensions on Azure have a variety of techniques available, such as Desired State Configuration (DSC) or custom PowerShell scripts. But what options do you have if you want to perform a simple task such as installing IIS on existing Windows virtual machines in an Azure Resource Group? There are several possible approaches of varying complexity, such as ARM templates or Azure Automation DSC. In some cases, though, a script may be a better approach – especially where you’re not setting up production-level infrastructure that needs to be maintained and updated.
If you are setting up production infrastructure that does need ongoing maintenance, DSC automation using a third-party tool such as Chef or Puppet, or Azure’s own tooling such as Azure Automation DSC, may serve your needs better. These solutions employ a server-based pull model for managing the desired configuration state of installed software on an ongoing basis.
My scenario is quite straightforward – essentially a one-time configuration task that I want to accomplish with a minimum of fuss. Here’s the setup:
A few virtual machines are deployed in an Azure Resource Group, using a CLI script.
Virtual machines are part of a three-tier architecture that includes web tier, business tier and a database tier.
Network Security Group (NSG) rules that allow or deny communication across tiers are provisioned by the script, based on the prerequisites.
Web tier virtual machines need an IIS server to allow testing and validating access to business tier via HTTP requests.
NOTE: Though the scenario in this post is limited to installing IIS on a set of virtual machines, the same approach can accomplish several other WMF-based tasks, such as setting up a domain controller, a DNS server, a SQL Server instance, a failover cluster (WSFC) and so on. Similarly, DSC extensions can be used on Linux-based virtual machines.
Now that we understand the scenario, let’s look at how we can accomplish this without traversing through flaming hoops! Below is a quick screenshot of my Azure Resource Group:
From the screenshot above, a couple of points may be noted:
Virtual machines follow a naming pattern e.g. bc-svc2-vm1, bc-svc2-vm2 and so on.
Resource group is named bc-dev-rg.
Here are the steps I followed to install IIS on the virtual machines shown in the above screenshot:
Create a PowerShell file with the following content and save it as IISConfig.ps1:
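A minimal configuration along these lines will do the job; it defines the ConfigureWeb configuration referenced later in the DSC extension settings and simply ensures the IIS role (and, optionally, the management tools) is present. Treat it as a sketch and add or remove WindowsFeature entries to suit your own scenario.

Configuration ConfigureWeb
{
    Import-DscResource -ModuleName 'PSDesiredStateConfiguration'

    Node 'localhost'
    {
        # Install the IIS web server role
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }

        # Optional: IIS management console, handy for manual checks on the VM
        WindowsFeature IISManagementTools
        {
            Ensure = 'Present'
            Name   = 'Web-Mgmt-Tools'
        }
    }
}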
Publish the script to your storage account (see note 2 at the end of this walkthrough). The following command packages the script into an IISConfig.zip file and publishes it to the storage container under the given storage account.
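Assuming you’re using the AzureRM PowerShell module, Publish-AzureRmVMDscConfiguration handles the packaging and upload in one step. The storage account name below is a placeholder for your own, and the resource group is the one that holds that storage account:

# Packages IISConfig.ps1 (plus any modules it needs) into a zip archive
# and uploads it to a container in the specified storage account.
Publish-AzureRmVMDscConfiguration -ConfigurationPath .\IISConfig.ps1 `
    -ResourceGroupName bc-dev-rg `
    -StorageAccountName <yourstorageaccount>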
Create a text file with the following content and save it as TurnOnIIS.cmd:
@ECHO OFF
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:: Set up variables.
:: Change these variables to match your deployment.
SET APP_NAME=bc
SET ENVIRONMENT=dev
SET RESOURCE_GROUP=%APP_NAME%-%ENVIRONMENT%-rg
:: Number of Virtual Machines (VMs) to configure. Set according to your scenario.
SET NUM_VMS=3
:: Loop through all the VMs and call subroutine that installs IIS on each VM.
:: Loop counter and the service tier name are passed as parameters.
FOR /L %%I IN (1,1,%NUM_VMS%) DO CALL :ConfigureIIS %%I svc2
GOTO :eof
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:: Subroutine that configures IIS
:ConfigureIIS
SET TIER_NAME=%2
SET VM_NAME=%APP_NAME%-%TIER_NAME%-vm%1
ECHO Turning on IIS configuration for: %VM_NAME% under resource group: %RESOURCE_GROUP%
:: Following assumes that you have
:: 1. Logged into your Azure subscription using "azure login"
:: 2. Set the active subscription using "azure account set <subscription-name>"
CALL azure vm extension set --resource-group %RESOURCE_GROUP% --vm-name %VM_NAME% ^
--name DSC --publisher-name Microsoft.Powershell --version 2.9 ^
--public-config "{\"ModulesUrl\": \"http://bit.ly/1sB7USW;, \"ConfigurationFunction\": \"IISConfig.ps1\\ConfigureWeb\" }"
GOTO :eof
Basically all we’re doing is:
Iterating through the VMs based on the given naming convention
Calling a subroutine ConfigureIIS on each VM
Invoking the azure vm extension set command on each VM, passing in the following parameters:
--resource-group Name of the resource group
--vm-name Name of the virtual machine
--name Type of the custom script extension
--publisher-name Name of the extension publisher
--version Version of the extension
--public-config JSON text specifying
ModulesUrl Packaged configuration location
ConfigurationFunction Name of the configuration function in the script including path
Open a command prompt, CD to the folder containing the command script and run it.
A similar approach could be used to turn any Windows feature on or off on your virtual machines. For details on writing a DSC configuration that manipulates multiple Windows features simultaneously, please check this article.
Note 1: Do not use the x64 build of PowerShell; the target virtual machines use the x86 build of PowerShell by default. Using the wrong build leads to:
Failure of the publishing process to the Azure storage container, with a vague message such as “Resource group not found” even though the resource group does exist.
Failure of the unzipping process on the target virtual machines, which is hard to debug since the error message is not at all helpful.
Note 2: An alternative method involves creating the package locally and uploading it to a GitHub account. In this case you substitute the storage container URL with a publicly accessible GitHub URL. Below is the modified command to use in this case:
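With the variables from TurnOnIIS.cmd, only the ModulesUrl value changes; the raw GitHub URL below is a placeholder for wherever you host IISConfig.zip:

CALL azure vm extension set --resource-group %RESOURCE_GROUP% --vm-name %VM_NAME% ^
--name DSC --publisher-name Microsoft.Powershell --version 2.9 ^
--public-config "{\"ModulesUrl\": \"https://raw.githubusercontent.com/<account>/<repo>/master/IISConfig.zip\", \"ConfigurationFunction\": \"IISConfig.ps1\\ConfigureWeb\" }"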
Google Container Engine (GKE) aims to be the best place to set up and manage your Kubernetes clusters. When creating a cluster, users have always been able to select options like the nodes’ machine type, disk size, etc., but that applied to all the nodes, making the cluster homogeneous. Until now, it was very difficult to have a cluster with a heterogeneous machine configuration.
That’s where node pools come in, a new feature in Google Container Engine that’s now generally available. A node pool is simply a collection, or “pool,” of machines with the same configuration. Now instead of a uniform cluster where all the nodes are the same, you can have multiple node pools that better suit your needs. Imagine you created a cluster composed of n1-standard-2 machines, and realize that you need more CPU. You can now easily add a node pool to your existing cluster composed of n1-standard-4 (or bigger) machines.
All this happens through the new “node-pools” commands available via the gcloud command line tool. Let’s take a deeper look at using this new feature.
Creating your cluster
A node pool must belong to a cluster and all clusters have a default node pool named “default-pool”. So, let’s create a new cluster (we assume you’ve set the project and zone defaults in gcloud):
> gcloud container clusters create work
NAME ZONE MASTER_VERSION MASTER_IP MACHINE_TYPE
NODE_VERSION NUM_NODES STATUS
work us-central1-f 1.2.3 123.456.789.xxx n1-standard-1
1.2.3 3 RUNNING
Like before, you can still specify some node configuration options, like “--machine-type” to specify a machine type, or “--num-nodes” to set the initial number of nodes.
Creating a new node pool
Once the cluster has been created, you can see its node pools with the new “node-pools” top level object (Note: You may need to upgrade your gcloud commands via “gcloud components update” to use these new options.).
> gcloud container node-pools list --cluster=work
NAME MACHINE_TYPE DISK_SIZE_GB NODE_VERSION
default-pool n1-standard-1 100 1.2.3
Notice that you must now specify a new parameter, “--cluster”. Recall that node pools belong to a cluster, so you must specify the cluster whose node pools the node-pools commands should operate on. You can also set a default cluster in your gcloud config by calling:
> gcloud config set container/cluster work
Also, if you have an existing cluster on GKE, your clusters will have been automatically migrated to “default-pool,” with the original cluster node configuration.
Let’s create a new node pool on our “work” cluster with a custom machine type of 2 CPUs and 12 GB of RAM:
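The pool name (“high-mem”), node count and disk size here are the ones that appear in the listings that follow; adjust to taste:

> gcloud container node-pools create high-mem --cluster=work --machine-type=custom-2-12288 --disk-size=200 --num-nodes=4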
This creates a new node pool with 4 nodes, using custom machine VMs and 200 GB boot disks. Now, when you list your node pools, you get:
> gcloud container node-pools list --cluster=work
NAME MACHINE_TYPE DISK_SIZE_GB NODE_VERSION
default-pool n1-standard-1 100 1.2.3
high-mem custom-2-12288 200 1.2.3
And if you list the nodes in kubectl:
> kubectl get nodes
NAME STATUS AGE
gke-work-high-mem-d8e4e9a4-xzdy Ready 2m
gke-work-high-mem-d8e4e9a4-4dfc Ready 2m
gke-work-high-mem-d8e4e9a4-bv3d Ready 2m
gke-work-high-mem-d8e4e9a4-5312 Ready 2m
gke-work-default-pool-9356555a-uliq Ready 1d
With Kubernetes 1.2, the nodes in each node pool are also automatically assigned the node label, “http://bit.ly/1WXRilF”. With node labels, it’s possible to have heterogeneous nodes within your cluster, and schedule your pods onto the specific nodes that meet their needs. Perhaps a set of pods needs a lot of memory – allocate a high-mem node pool and schedule them there. Or perhaps they need more local disk space – assign them to a node pool with a lot of local storage capacity. More configuration options for nodes are being considered.
More fun with node pools
There are also other, more advanced scenarios for node pools. Suppose you want to upgrade the nodes in your cluster to the latest Kubernetes release, but need finer-grained control of the transition (e.g., to perform A/B testing, or to migrate the pods slowly). When a new release of Kubernetes is available on GKE, simply create a new node pool; new node pools come up at the same version as the cluster master, which is automatically updated to the latest Kubernetes release. Here’s how to create a new node pool with the appropriate version:
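The pool name below is illustrative; since new pools come up at the master’s version, no version flag is needed:

> gcloud container node-pools create newer-pool --cluster=work --num-nodes=3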
You can now go to “kubectl” and update your replication controller to schedule your pods with the label selector “http://bit.ly/1WXRlO4”. Your pods will then be rescheduled from the old nodes to the new pool nodes. After the verifications are complete, continue the transition with other pods, until all of the old nodes are effectively empty. You can then delete your original node pool:
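Assuming the original nodes live in the default pool created with the cluster:

> gcloud container node-pools delete default-pool --cluster=work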
And voila, all of your pods are now running on nodes running the latest version of Kubernetes!
Conclusion
The new node pools feature in GKE enables more powerful and flexible scenarios for your Kubernetes clusters. As always, we’d love to hear your feedback and help guide us on what you’d like to see in the product.
In today’s Ask the Admin, I’ll show you how to deploy a Windows Server 2012 R2 VM in Azure and join it to an existing Active Directory (AD) domain.
This tutorial uses Azure Resource Manager (ARM) to deploy a virtual machine and join it to a domain. If you need a primer on ARM and how to work with templates, or want to deploy a new AD domain in Azure, take a look at “Provision a domain using a Microsoft Azure Resource Manager template” on the Petri IT Knowledgebase.
Get the template URI
As in the previous article, I’m going to use a readymade template, 201-vm-domain-join, from the quick-start gallery on GitHub. First we need to get the template URI:
Azure JSON ARM template (Image Credit: Russell Smith)
Once the browser is displaying the raw template code, copy the URL from the browser address bar. This is the URI for the template required by the New-AzureRmResourceGroupDeployment cmdlet.
Deploy a VM using an ARM template
Before you can start working with the PowerShell ARM cmdlets, you’ll need to make sure that you’ve got Microsoft Azure PowerShell 1.0 or later installed on your system. For more information, see “Install Azure PowerShell 1.0 Preview” on Petri.
Open Windows PowerShell ISE.
The 201-vm-domain-join template creates a new VM in the same Resource Group (RG) as the domain controllers. Some additional variables are also required, including the name of the virtual network (VNET), subnet, AD domain administrator username and password, and a local administrator username and password for the new VM. To keep it simple, I’ll specify the same VNET and subnet that host my domain controller in Azure.
Template parameters in the Azure Resource Manager Template Visualizer (Image Credit: Russell Smith)
The code below logs in to Azure ARM and selects the first available subscription associated with the given Microsoft account. The account credentials must be entered manually when prompted. The Resource Group name ($rgName) and Azure region ($location) are then set. I’ve included some error checking to throw an error if the RG doesn’t exist or if the DNS name specified for the new VM is already in use.
Login-AzureRmAccount
$subs = Get-AzureRmSubscription
Select-AzureRmSubscription -TenantId $subs[0].TenantId -SubscriptionId $subs[0].SubscriptionId
$rgName = 'contosodcs'
$location = 'North Europe'
$domainPassword = 'passW0rd!'
$vmPassword = 'passW0rd!'
$vmName = 'srv1'
# Check availability of DNS name
If ((Test-AzureRmDnsAvailability -DomainQualifiedName $vmName -Location $location) -eq $false) {
Write-Host 'The DNS label prefix for the VM is already in use' -foregroundcolor yellow -backgroundcolor red
throw 'An error occurred'
}
# Check that the Resource Group exists
# -ErrorAction Stop added to Get-AzureRmResourceGroup cmdlet to treat errors as terminating
try {
Get-AzureRmResourceGroup -Name $rgName -Location $location -ErrorAction Stop
} catch {
Write-Host "Resource Group doesn't exist" -foregroundcolor yellow -backgroundcolor red
throw 'An error occurred'
}
Once the prerequisites have been met, all that’s left to do is assign values to the rest of the variables required by the template. To determine the parameters required, open the template in a browser using the link in the steps above, click Visualize to open the Azure Resource Manager Template Visualizer, and then click Edit Parameter Definitions in the menu on the left. In the Parameter Editor, you’ll see a list of parameters and their default values.
In the code below, I’ve defined the parameters in a hash table, and then splat them to the New-AzureRmResourceGroupDeployment cmdlet, which deploys the resources defined in the template to the specified Resource Group. Values for some of the parameters, such as existingVNETName and existingSubnetName, are taken from the existing domain deployment.
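The parameter names below are illustrative rather than definitive – check the template’s parameter definitions in the Visualizer for the exact names (and any additional parameters) your copy of 201-vm-domain-join expects – but the shape of the call is a hash table splatted to the cmdlet:

$templateUri = '<raw URL of the template copied from GitHub>'

$params = @{
    ResourceGroupName  = $rgName
    TemplateUri        = $templateUri
    existingVNETName   = 'contoso-vnet'      # VNET hosting the domain controller
    existingSubnetName = 'contoso-subnet'    # subnet hosting the domain controller
    dnsLabelPrefix     = $vmName
    domainToJoin       = 'contoso.com'
    domainUsername     = 'domainadmin'
    domainPassword     = $domainPassword
    vmAdminUsername    = 'localadmin'
    vmAdminPassword    = $vmPassword
}

New-AzureRmResourceGroupDeployment @params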
The New-AzureRmResourceGroupDeployment cmdlet can take a long time to deploy the resources defined in the template, so it may appear to have hung; if there’s a problem with the deployment, though, you’ll receive an error message fairly quickly. No output usually indicates the deployment is running successfully. You can check whether the VM is being deployed by checking its status in the Azure management portal.
The New-AzureRmResourceGroupDeployment PowerShell cmdlet output (Image Credit: Russell Smith)
For convenience, once the deployment is complete, I output the connection string for reaching the VM via Remote Desktop.
# Display the RDP connection string
$rdpVM = Get-AzureRmVM -ResourceGroupName $rgName -Name $vmName
$rdpString = $vmName + '.' + $rdpVM.Location + '.cloudapp.azure.com'
Write-Host 'Connect to the VM using the URL below:' -foregroundcolor yellow -backgroundcolor red
Write-Host $rdpString
In this Ask the Admin, I showed you how to deploy a VM and join it to an existing Active Directory domain running in Azure, using an ARM template from the quick-start gallery.
If you manage your whole LAN in the cloud, why not add in the desk phones, too?
That’s what Cisco’s Meraki division has done. Its first phone, the MC74, can be managed on the same dashboard Meraki provides for its switches, Wi-Fi access points, security devices, and other infrastructure.
Cisco bought Meraki in 2012 when it was a startup focused on cloud-managed Wi-Fi. The wireless gear remains, but Cisco took the cloud management concept and ran with it. Now Meraki’s approach is the model for Cisco’s whole portfolio.
Meraki’s goal is to simplify IT, said Pablo Estrada, director of marketing for Cisco’s cloud networking group. The idea appeals to smaller companies with small or non-existent IT staffs, but also to some large enterprises that need to set up and run networks at remote offices, he said. Since Cisco bought Meraki, the customer base has grown from 15,000 to 120,000.
The MC74 extends Meraki’s platform to phones, giving customers the chance to combine voice calling with their data networks and remove a separate system that can be complicated to manage. The MC74 is available now in the U.S. and will gradually roll out to other countries.
While Meraki’s cloud has a “single pane of glass” in its software to manage different kinds of infrastructure, its phone literally has a single pane of glass: an elegant, smartphone-like touchscreen. Other than volume and mute buttons and the corded handset, all the controls pop up on that screen.
Cisco is working on added features for Meraki phones that would tie them into the company’s broader communications portfolio later this year. For example, if two employees were in a text chat on Cisco’s Spark messaging app and decided to switch to voice, a user with a Meraki phone might be able to make that shift with one click, Estrada said.
The MC74 has a list price of US$599. Service to tie it into the public switched telephone network is list-priced at $8.95 per month. A Meraki cloud license for one phone costs $150.
Meraki is also upping its game in Wi-Fi. Two new access points, the MR52 and MR53, are equipped for so-called Wave 2 of the IEEE 802.11ac standard. That standard boosts Wi-Fi’s theoretical top speed to 2.3Gbps (bits per second) from 1.3Gbps in the first wave of 11ac. But the main point of the Wave 2 APs is to be able to serve more devices in the same area, Estrada said.
Along with the new access points, Meraki is introducing wired switches with ports equipped for 2.5Gbps and 5Gbps. Those are to handle the higher throughput from Wave 2 access points without upgrading to 10-Gigabit Ethernet, which requires better cabling. There’s no formal Ethernet standard for these speeds, but the Meraki switches use NBase-T, a specification that should make them upgradable to the standard via software, Estrada said.
Amazon Web Services consultancy 2nd Watch this week released the findings of an analysis of 100,000 public cloud instances to determine the 30 most popular services being used.
It’s not surprising that AWS’s two core products, compute and storage, lead the pack. 100% of the environments 2nd Watch examined were using Amazon Simple Storage Service (S3), the massively scalable object storage service, and 99% were also using Amazon Elastic Compute Cloud (EC2), the on-demand virtual machine service. 100% of customers use AWS Data Transfer, because if you have data in the cloud, you need to transfer it in or out at some point.
There are some other surprising results. The fourth most used service among 2nd Watch customers (a whopping 89%) is Amazon Simple Notification Service (SNS), a platform for enterprise and mobile messaging. Many 2nd Watch customers follow public cloud best practice by recording activity in their AWS accounts with AWS CloudTrail (73% of users), a service that provides detailed logs of account activity.
Other products have surprisingly low usage. Less than half of 2nd Watch customers use Amazon Virtual Private Cloud (Amazon VPC), which provides logically isolated virtual networks and is the default networking environment for new EC2 instances.
2nd Watch’s study also points to some of the challenges Amazon faces in breaking in new products. Less than one in five 2nd Watch customers use Amazon’s newer in-house applications, WorkSpaces and WorkDocs, which provide virtual desktop and file sharing services respectively. Only 11% of customers were using AWS Storage Gateway, a service for bridging on-premises environments to the public cloud. And only 10% were using Amazon Kinesis, the company’s event-processing engine.
Today, we are very excited to announce the general availability of Azure DevTest Labs: your self-service sandbox environment in Azure to quickly create dev/test environments while minimizing waste and controlling costs.
We’ve been hearing from a lot of customers about all kinds of challenges they’ve been facing in their dev/test environments. With the power of the cloud, some problems, such as hardware maintenance costs, have started to be solved. On the other hand, there are still a few problems many customers have to deal with day-to-day, especially:
Delays in delivering environments to developers/testers introduced by the traditional environment request model
Time-consuming environment configuration
Production fidelity issues
High cost associated with cloud resource management
Figure 1: The traditional “request” model introduces delays in delivering environments
That’s why we built Azure DevTest Labs, where you can get fast, easy and lean dev/test environments for your team, on demand.
Azure DevTest Labs addresses the problems in today’s dev/test environments in the following ways:
Be “ready to test” quickly
Flexibly define VM bases in three different ways to speed up environment provisioning: Azure Marketplace images, custom images (your own VHD) and formulas (a reusable base where VM creation settings, such as VM image, VM size, virtual network and so on, are pre-defined). Reusable artifacts in DevTest Labs allow users to run VM extensions, install tools, deploy applications or execute custom actions on demand once a lab VM is created.
Figure 2: What makes up a formula
Worry-free self-service
The lab policies and the Azure Role-Based Access Control (RBAC) model in the lab enable a sandbox environment in which developers and testers can provision their own environments without the kind of unexpected accident that leads to a big bill.
Figure 3: RBAC model between subscription Owner, lab Owner and DevTest Labs user
Create once, use everywhere
ARM templates are fully supported for deploying labs and resources in a lab. Reusable custom images and formulas can be created from an existing VM, and artifacts loaded from VSTS Git or GitHub repositories can be used across different labs.
Figure 4: Create a custom image from an existing lab VM
Integrates with your existing tool chain
In addition to APIs and command line tools, Azure DevTest Labs Tasks are available in Visual Studio Marketplace to better support your release pipeline in Visual Studio Team Services. There are three tasks that allow you respectively to create a lab VM to run the tests, save the VM with the latest bits as a golden image, and delete the VM when it’s no longer needed after the testing is done.
Figure 5: DevTest Labs Tasks in Visual Studio Team Services
You can get a high-level idea around these four aspects in three minutes by watching the video What is Azure DevTest Labs or, read our official announcement at the Azure DevTest Labs Team Blog for more details.
Satellite-based data centers with room for petabytes of data may start orbiting Earth as early as 2019. But when it comes to keeping secrets safe from the long arm of the law, the black void may not be far enough.
Cloud Constellation, a startup in Los Angeles, is looking upward to give companies and governments direct access to their data from anywhere in the world. Its data centers on satellites would let users bypass the Internet and the thousands of miles of fiber their bits now have to traverse in order to circle the globe. And instead of just transporting data, the company’s satellites would store it, too.
The pitch goes like this: Data centers and cables on Earth are susceptible to hacking and to national regulations covering things like government access to information. They can also slow data down as it goes through switches and from one carrier to another, and all those carriers need to get paid.
Cloud Constellation’s system, called SpaceBelt, would be a one-stop shop for data storage and transport, says CEO Scott Sobhani. Need to set up a new international office? No need to call a local carrier or data-center operator. Cloud Constellation plans to sell capacity on SpaceBelt to cloud providers that could offer such services.
Security is another selling point. Data centers on satellites would be safe from disasters like earthquakes, tornadoes, and tsunami. Internet-based hacks wouldn’t directly threaten the SpaceBelt network. The system will use hardware-assisted encryption, and just to communicate with the satellites an intruder would need an advanced Earth station that couldn’t just be bought off the shelf, Sobhani said.
Cloud Constellation’s secret sauce is technology that it developed to cut the cost of all this from US$4 billion to about $460 million, Sobhani said. The network would begin with eight or nine satellites and grow from there. Together, the linked satellites would form a computing cloud that could do things like transcode video as well as storing bits. Each new generation of spacecraft would have more modern data-center gear inside.
The company plans to store petabytes of data across this network of satellites. All the hardware would have to be certified for use in space, where it’s more prone to bombardment by cosmic particles that can cause errors. Most computer gear in space today is more expensive and less advanced than what’s on the ground, satellite analyst Tim Farrar of TMF Associates said.
But the idea of petabytes in space is not as far-fetched as it may sound, said Taneja Group storage analyst Mike Matchett. A petabyte can already fit on a few shelves in a data-center rack, and each generation of storage gear packs more data into the same amount of space. This is likely to get better even before the first satellites are built.
Still, Matchett thinks the first users to jump on SpaceBelt might be financial companies looking for shorter delays getting messages around the world. Cloud Constellation says its satellites could transmit information from low Earth orbit to the ground in a quarter of a second and from one point on Earth to another in less than a second. Any advantage that financiers could gain over competitors using fiber networks, which usually have a few seconds of end-to-end latency, would help them make informed trades more quickly.
But if you do put your data in space, don’t expect it to float free from the laws of Earth. Under the United Nations Outer Space Treaty of 1967, the country where a satellite is registered still has jurisdiction over it after it’s in space, said Michael Listner, an attorney and founder of Space Law & Policy Solutions. If Cloud Constellation’s satellites are registered in the U.S., for example, the company will have to comply with subpoenas from the U.S. and other countries, he said.
And while the laws of physics are constant, those on Earth are unpredictable. For example, the U.S. hasn’t passed any laws that directly address data storage in orbit, but in 1990 it extended patents to space, said Frans von der Dunk, a professor of space law at the University of Nebraska. “Looking towards the future, that gap could always be filled.”
The original title for this story was “Transitioning from IPv4 to IPv6,” but when we started researching, we quickly realized that most organizations are adopting an outside-in strategy, rather than moving over from all-IPv4 to all-IPv6 deployments. This means that they’re often taking steps to accommodate incoming and outgoing IPv6 traffic at the organizational boundary and translating between the two stacks, or tunneling one protocol over another, for internal access and use. The majority of internal clients and other nodes are using IPv4, with increasing use of IPv6 in dual-stack environments (environments that run IPv4 and IPv6 protocol stacks side-by-side).
IPv6 transitioning tools and technologies
To bring IPv6 into a mix that also includes IPv4, certain so-called “transition tools” prove necessary. Internet Request for Comments (RFC) 1933 defines these capabilities as essential to any IPv6 transition:
When hosts and routers are upgraded to IPv6, they retain IPv4 capability as well. This permits IPv6 to provide compatibility for IPv4 protocols and applications alike. Such hosts and routers are called dual-stack, because they run IPv6 and IPv4 in parallel.
Want to know how to maximize the value from your cloud strategy? Without question it’s a paradigm shift for your enterprise but what are the best practices for yielding the greatest outcomes?
A few weeks ago I had the pleasure of sitting down with Bill Martorelli, principal analyst for infrastructure and operations at Forrester Research, and Jeff DeVerter, chief technologist at Rackspace. We talked all things cloud, ranging from the changing motivations for using it, migration versus IT transformation, and hybrid, to the value of managed services in the cloud and getting security right. You can watch the whole conversation here.
Think cost savings are the primary driver? No more, as Bill, Jeff and I discuss in our first segment. While you can certainly save money – as I discussed in my last post – the principal motivation these days is agility. How can you use the cloud to quickly bring new services to your customers, and just as rapidly respond to real time feedback? Check it out!
Is the goal migrating all your applications to the cloud or effecting an IT transformation by leveraging the cloud? As you plan your strategy, it’s a great time to think about refactoring or consolidating applications, replacing expensive on-premises applications with SaaS or having a managed service provider (MSP) like Rackspace take over management of your hybrid state – so your team can focus more on returning business value.
What is the real meaning of hybrid cloud, and how can you best take advantage of it? It’s not just about having some applications on-premises and some in the cloud; increasingly, it’s about using many different clouds for your needs. Perhaps you’ll use a SaaS CRM application connected to an on-premises ERP application which feeds a custom cloud-based digital marketing app. Your future (and your present) is hybrid. We’ll talk about this, and how you manage it all, in our third segment.
A managed service provider can free up resources in your organization, enabling them to focus more on value-added business activities. Instead of performing the rote functions of application management like adding users and doing backups, by using an MSP your people can focus on unmet business needs and new disruptive innovations. Because if you aren’t the one doing the disrupting – then you are the one being disrupted. What CIO doesn’t want to be a better business partner? Listen to what Bill, Jeff and I have to say on this topic.
Finally, no conversation about the cloud would be complete without mentioning cloud security. Cloud providers like Microsoft spend billions on securing our infrastructure and ensuring we have all the appropriate regulatory certifications. But does that mean your application is intrinsically secure? Of course not. Your developers and DevOps teams must still focus hard on security, and MSP’s like Rackspace can help you with sophisticated tooling to make sure your app is secure. Check out what we have to say about security.
I hope you’ll take the time to watch this conversation, or listen to the podcast. I certainly enjoyed it and I think you will too. My special thanks to John Engates, CTO of Rackspace, for arranging our talk and for providing an introduction.
Technology is constantly changing and improving, but that doesn’t mean that users are keeping up. Frequently businesses struggle to keep their employees updated and on the most up-to-date software and services. Transitioning to Office 365 presents a new hurdle that many employees have a hard time overcoming. When people get to work, they usually just want to do their job and not fuss with the tools, and when the IT department keeps changing the tools, work becomes frustrating.
On the other side of the equation, the IT department needs to continue to provide their company with the tools they need to keep up with the changing world with respect to productivity and security. New paradigms like BYOD, mobility, and the cloud push companies to adopt new technologies, such as SharePoint, OneDrive for Business, and Skype for Business. These new tools solve problems, but require the users to adapt as well, which can be the hard part.
Many companies across the world are asking their employees to use tools they do not understand to do the same job. It is common for new tools to be avoided and the old ways kept alive. Some of the modern productivity tools can be used with little change, like using the OneDrive for Business sync client to save documents to the cloud. Harder hurdles include teaching users how to navigate SharePoint and how to save files so they can be found again quickly.
So what is an effective way to train your employees so they can easily adopt more of the functionality offered by Office 365? First, you need to understand how your employees work, because that can change your perspective on which tools are worth their time. Teach everyone a few core tools and highlight how they make work easier.
Delve UI
Change has to do with learning; when a product is overly complex or tedious, learning can become too frustrating. Begin training with simple, easy-to-learn products rather than more complex ones. Details like sharing management are important, but they might not be the wisest first pick to kick off discussions.
Sometimes it is not obvious which products are easy to learn. Take Delve, for example: it is an odd product compared to the Office programs, but it is easy to describe as a search engine for SharePoint and OneDrive for Business, and it lets you favorite documents you want to get back to. Many users may be missing the ability to search across all company documents visible to them and not even know that such a tool is possible. Delve’s searching ability makes it great for finding more learning resources, like company videos posted on Office 365 Video.
Encouraging users to start their work with Delve leads them to save their documents in OneDrive for Business. Once users are comfortable finding and creating content on OneDrive more and more of Office 365 becomes useful. Cloud hosted documents can be accessed via mobile devices for editing and sharing.
Transitioning users to the cloud takes time. People need time to learn and become comfortable with their new tools before they begin investing their time and effort. When arriving at work, the top priority is accomplishing the job, and every tool should be focused on that goal. If the cloud does not save time or money today, then keep looking, but maybe wait to move your users. Communication with users about how they are using new tools should shape which tools are adopted more widely, but be careful not to let technology stand in the way of productivity.
Cray has always been associated with speed and power, and its latest computing beast, the Cray Urika-GX system, has been designed specifically for big data workloads.
What’s more, it runs on OpenStack, the open source cloud platform and supports open source big data processing tools like Hadoop and Spark.
Cray recognizes that the computing world has evolved since Seymour Cray launched the company back in the early 70s. While the computers it creates remain technology performance powerhouses, they are competing in an entirely different landscape, one that includes cloud computing, where companies can get as many computing resources as they need and pay by the sip (or the gulp, in the case of Cray-style processing).
To battle that competition, the Urika-GX comes stacked with a choice of 16, 32 or 48 two-socket Intel® Xeon® v4 (Broadwell) processor nodes, which translates into up to 1,728 cores per system, along with up to 22 TB of DRAM. Storage options include 35 TB of PCIe solid state drives and 192 TB of spinning hard drives for local storage.
While Cray has always run scream machines, one of the differentiators with this offering is that it comes with whatever big data processing software the customer requires installed, configured and ready to rock – whether that’s Hadoop, Spark or whatever tools the company wants to use.
It also includes its own graph database engine, called the Cray Graph Engine, which the company claims is ten to 100 times faster than current graph solutions running complex analytics operations. Graph databases let you run complex comparisons; they’re the same technology that understands you bought something on an ecommerce site, so you might like similar items, or that you’re friends with certain people on a social network, so you may know these common friends.
Cray acknowledges that one of the reasons people like the cloud is that the cloud vendor takes care of all the heavy lifting for IT. Cray has decided to be the software service provider for its customers, offering a kind of software-as-a-service where it pre-installs, configures and manages the base software for customers on the Urika-GX. It also handles software upgrades every six months.
While the customer still deals with applications built on top of the platform, Cray will handle all of the big-picture stuff and work with the customer’s IT department on the rest. While it’s all well and good to say you’ll take care of the software maintenance, it gets tricky when the customer is building stuff on top of the software you installed and the vendor is responsible to make sure it all works.
Cray’s Ryan Waite, senior vice president of products, insists that Cray has a long history of working closely with its customers and can handle whatever grey areas may arise.
If you’re wondering how much these babies cost, Waite would only say it’s comparable to any big data processing solution, and probably not as much as you think. In other words, they have to compete, so the multi-million dollar price tags of yesteryear are long gone. He also indicated that the price depended on many factors including the hardware, software and support package the customer purchased.
That might not tell you much, but Cray is still delivering a powerhouse of a computer, that much is clear, one that remains the subject of geek dreams.
If you go out shopping for a new PC after the Windows 10 Anniversary Update rolls out this summer, you might notice something we haven’t seen in a long, long time. The basic hardware requirements for Windows 10 are going up—albeit just slightly. This is the first such hardware increase since 2009 when Windows 7 rolled out.
Most of the basic requirements will remain the same, including the processor, RAM for the 64-bit version, hard drive space, and minimum display resolution. But Trusted Platform Module (TPM) 2.0 will be required as of July 28, a day before the Anniversary Update is expected to roll out.
Display size ranges are also increasing for Windows 10 Mobile and PCs. With the Anniversary Update, Windows 10 Mobile will be available on screen sizes up to 9 inches—previously it was about 8. Windows 10 for PCs and tablets will scale all the way down to seven inches instead of 8. There’s some screen size overlap between device types now, in other words.
The impact on you at home: Don’t worry: If you’re upgrading to the Anniversary Update with only 1GB of RAM you should still be okay. Microsoft’s updated docs are only for hardware manufacturers, and the company has not yet released any guidance for users upgrading from an existing installation. If you’re truly worried about it, laptops and desktops can easily be upgraded to 2GB of RAM with a few twists of a screwdriver. Tablet users, however, typically can’t upgrade as the RAM is soldered to the motherboard.
Citrix has unveiled a desktop thin client based on the Raspberry Pi microcomputer.
The HDX Ready Pi is a Citrix-built box containing the Raspberry Pi 3 hardware and a ViewSonic Linux build designed specifically to run with the Citrix HDX virtual desktop platform.
In addition to the Raspberry Pi 3 board, the client boxes contain a power supply and ports for the Pi’s HDMI, ethernet and USB connections.
The $89 Pi boxes would replace a traditional thin client setup with what Citrix says is a lower cost and more power-efficient option. The boxes connect to a XenDesktop and XenApp server that hosts the virtual instances of each individual PC.
“In classic disruptive fashion, the Raspberry Pi has already taken a significant share of the education PC market with over 8 million devices shipped,” Citrix VP of emerging solutions Chris Fleck said in announcing the new boxes.
“With the Citrix-optimized HDX Ready Pi, we expect many other industries to adopt the platform, now that the ‘do it yourself’ barrier is demolished.”
Citrix has been toying with the Pi as a virtualization client for some time now, working on client software that could be downloaded and installed on off-the-shelf Pi units.
With the HDX boxes, Citrix is taking the project a step further by offering a pre-built box (manufactured by ViewSonic or Micro Center) that is ready out of the box to be plugged in as a thin client.
Citrix said that it hopes to aim the Pi boxes not only at customers refreshing their existing virtual desktop hardware, but also at companies caught up in the Windows 10 upgrade drama and looking at alternatives to a PC hardware refresh.
“Most organizations are planning Windows 10 migrations; Citrix has optimized Receiver on the HDX Ready Pi and XenApp/XenDesktop to provide a true PC like experience, and Raspberry Pi has achieved a breakthrough in hardware price/performance,” Fleck said.
“We have many early adopter customers already piloting or using the Raspberry Pi with Citrix.”
Companies who want to try the Pi boxes will have to wait a few weeks. Citrix said it is now taking pre-orders with hardware due to ship next month. ®
The Wi-Fi Alliance recently announced a new IEEE specification, 802.11ah, developed explicitly for the Internet of Things (IoT). Dubbed HaLow (pronounced HAY-Low), it’s aimed at connecting everything in the IoT environment, from smart homes to smart cities to smart cars and any other device that can be connected to a Wi-Fi access point.
Here’s what you need to know about HaLow.
1. What are the potential advantages of HaLow?
First, HaLow operates in the 900-MHz band. This lower part of the spectrum can penetrate walls and other physical barriers, which means better range than the current 2.4GHz and 5GHz Wi-Fi bands.
Second, as a low-power technology, HaLow is intended to extend the Wi-Fi suite of standards into the resource-constrained world of battery-powered products, such as sensors and wearables. As analyst Jessica Groopman at Harbor Research points out: “We may be swimming in a sea of connected devices, but most of them can’t hold a charge for more than a day and connecting them to the Internet via Wi-Fi drains their batteries rapidly. And inefficient power consumption isn’t just at the device or battery level. It’s also at the connectivity level.”
Third, says Lee Ratliff, principal connectivity and IoT analyst at IHS, “Along with Bluetooth, Wi-Fi is native to all major mobile platforms and enjoys widespread consumer awareness giving it an enormous advantage against low-power, wireless incumbents such as ZigBee, Z-Wave, and Thread,” Ratliff says.
He adds, “If device manufacturers incorporate tri-band (sub-GHz, 2.4GHz, and 5GHz) Wi-Fi chips into smartphones, tablets, home gateways, and other such products in the future, this may be a sustainable advantage that ultimately makes Wi-Fi a top choice for connectivity in both high-performance and low-power applications.”
Finally, “HaLow’s advantage lies in the power of its position in open platforms,” continues Ratliff, “So the majority of those platforms will need to incorporate HaLow before that advantage can be brought to bear on the market.”
2. What are some of the technical issues?
Tim Zimmerman, Gartner vice president and research analyst, says it’s important to note that a 900 MHz solution requires a separate overlay communication infrastructure of access points which, today, would be separate from existing Wi-Fi access points.
In addition, there are international limitations of the 902-928 MHz band, which could cause segmentation of the band in many areas, including Europe, Australia, and parts of Asia.
From an interoperability standpoint, Groopman argues that the Wi-Fi Alliance is taking its successful standard and throwing another horse into the race to compete with connectivity protocols that consume less power than Wi-Fi.
While Wi-Fi is certainly the most widely adopted wireless connectivity protocol, it’s not the only one. “And many argue that it’s just not sustainable to support large scale, ubiquitous connected infrastructure, as in smart cities,” she says.
Plus, the 900-MHz band is still being used for garage door openers and baby monitors. “This must be addressed as part of home automation solutions and automated vehicle locator (AVL) systems, pagers, and cell phones—depending on the geography—which may affect commercial applications,” says Zimmerman.
3. What are some of the standardization issues?
Shamus McGillicuddy, senior network management analyst at Enterprise Management Associates, says there are several standardization efforts emerging around IoT. Many companies and independent consortiums have launched their own standardization efforts for creating network and application protocols designed to facilitate IoT.
“Organizations that are getting into the IoT game will have to follow this standardization race very closely,” cautions McGillicuddy. “Some real powerhouse companies are getting involved such as Nest’s (Google) Thread protocol, and Qualcomm’s AllJoyn. I expect we’ll see some consolidation in the IoT standardization world, so that companies aren’t duplicating their efforts. The Wi-Fi Alliance will have to demonstrate to the market why HaLow is the superior option for IoT connectivity.”
Forrester analyst Michele Pelino describes the two major categories of standards initiatives as (1) industry groups focused on building and disseminating use cases and promoting IoT in manufacturing, mining, transport, and other heavy industries (such as the Industrial Internet Consortium); and (2) standards bodies such as IEEE, whose members focus on developing and normalizing the technical connections that the applications and services of the IoT are built on.
And, there are some groups who push for the adoption of particular standards, such as the Zigbee Alliance.
4. What are the opportunities for enterprise Wi-Fi vendors?
“I see HaLow as an opportunity for enterprise Wi-Fi vendors to become bigger players in IoT,” says Matthias Machowinski, research director at IHS. “Most Wi-Fi vendors have taken a Wi-Fi-only approach to IoT, which allows them to support the many devices that now have embedded Wi-Fi.”
But, Machowinski argues that Wi-Fi is just one of many options, and it isn’t necessarily the best option for low power, low bandwidth, long distance applications, which are common in IoT. By integrating additional wireless technologies directly on the access points, Wi-Fi vendors can build on their success connecting people on enterprise campuses and extend it to IoT applications.
For reference, enterprises bought about 20 million new access points last year, so that’s a large and recurring infrastructure upgrade/build-out that can and should be leveraged for IoT.
5. What should enterprise customers be thinking about?
“The specific standards focus for each company will depend on how these IoT-enabled products and services will connect to and interface with other connected systems and applications,” says Pelino. “Three processes that firms should acknowledge and execute are device-to-network connectivity, data messaging, and data models.” Pelino identifies device-to-network connectivity (for many embedded products) as the air gap between the remote device and its parent network. That is the first jump to be made. There are various protocols that define radio transmissions, including cellular, Wi-Fi, Bluetooth LE, Zigbee, and Z-Wave.
In terms of data messaging, she stresses that companies need to figure out what format the data will move in, allowing it to best connect to the analytic, data warehousing, and data brokering systems of their organization and of their partners. Examples in this category are HTTP and MQTT (Message Queuing Telemetry Transport) protocol.
When it comes to data models, she says, “Data from the IoT comes in many formats and measurements, and being able to consume and digest this data is the crux of value in the IoT environment. Early work on normalizing data taxonomies and models is being done within groups like the Haystack Project.”
6. Is HaLow late to the party?
According to Groopman, the main Wi-Fi HaLow issue today is time. The Wi-Fi Alliance won’t begin certifying HaLow products until 2018. That’s an eternity in the world of technological innovation. Meanwhile, the race for lower-power connectivity charges on, with new players (and investments by legacy players) surfacing every day.
Also, the imperative to ‘connect, connect, connect’ has left the current Wi-Fi client market with some cultural issues, most notably around inconsistent application of strong, widely adopted security standards, among other configuration challenges.
Ratliff adds that while low-power IoT is still in its early days, HaLow is already years behind its competitors. Meanwhile, existing low-power standards have time to establish defendable positions in rapidly advancing markets, such as the smart home.
Sartain is a freelance writer. She can be reached at [email protected].
Join Mark Russinovich, Azure CTO, and Jeffrey Snover, Enterprise Cloud Technical Fellow, to learn how Azure Stack will help you drive app innovation by delivering the power of Azure in your datacenter.
Click here to learn more about Azure: http://aka.ms/B1njto
On May 10, Red Hat announced general availability of Red Hat Enterprise Linux 6.8, the latest version of the Red Hat Enterprise Linux 6 platform. Red Hat Enterprise Linux 6.8 delivers new capabilities and provides a stable and trusted platform for critical IT infrastructure. Learn more about the new features and enhancements in the Red Hat press release.
If you’d like to utilize your existing Red Hat subscription to provision RHEL 6.8 VMs in Azure, you can do so through the Cloud Access program, and by following the Red Hat image preparation guidelines in Azure documentation.