How to Give Friends Emergency Access to Your Online Accounts

The content below is taken from the original ( How to Give Friends Emergency Access to Your Online Accounts), to continue reading please visit the site. Remember to respect the Author & Copyright.

As the year winds down, now is a great time to get your digital life in order. From organizing your online photos to refreshing your accounts with new, secure passwords to finally cleaning up your browser bookmarks, there’s a lot for you to tackle before 2019 hits.

Read more…

How to view, save and clear Command Prompt command History in Windows

The content below is taken from the original ( How to view, save and clear Command Prompt command History in Windows), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Command Prompt is nothing but a black-and-white command-line utility that comes out of the box on Windows 10/8/7. But for those who know its true potential, it is a great replacement for many of the users’ third-party […]
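
For reference, per-session command history in the Command Prompt is handled by the built-in doskey utility; a minimal sketch:

:: Show the current session's command history (F7 shows it as a pop-up list)
doskey /history

:: Save the history to a text file
doskey /history > history.txt

:: Clear the history by reinstalling a fresh copy of Doskey (Alt+F7 does the same)
doskey /reinstall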

This post How to view, save and clear Command Prompt command History in Windows is from TheWindowsClub.com.

‘World’s First Digital Dining Plate’ Is a Feast for the Eyes

The content below is taken from the original ( ‘World’s First Digital Dining Plate’ Is a Feast for the Eyes), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you watch Top Chef or any cooking shows, you know that somehow, food just tastes better when it’s presented beautifully. But who has the time and the skills to prepare a restaurant-worthy gourmet […]

The post ‘World’s First Digital Dining Plate’ Is a Feast for the Eyes appeared first on Geek.com.

10 questions to ask when selecting enterprise IoT solutions

The content below is taken from the original ( 10 questions to ask when selecting enterprise IoT solutions), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today’s internet of things market includes countless consumer gadgets such as routers, internet-connected video cameras and smart TVs, as well as innumerable enterprise IoT devices for business, scientific and industrial applications.

To read this article in full, please click here

Intel unveils a groundbreaking way to make 3D chips

The content below is taken from the original ( Intel unveils a groundbreaking way to make 3D chips), to continue reading please visit the site. Remember to respect the Author & Copyright.

As it’s getting more difficult to cram transistors next to each other in chips, and we near the end of Moore’s Law, the only choice is to go vertical. Literally. That’s the essence of 3D chip design, and it’s the crux of a major Intel announcement…

Hybrid Cloud: Microsoft Azure vs Amazon AWS

The content below is taken from the original ( Hybrid Cloud: Microsoft Azure vs Amazon AWS), to continue reading please visit the site. Remember to respect the Author & Copyright.


While there are a number of players in the cloud computing marketplace, it’s perfectly clear that the battle of the titans is between Microsoft Azure and Amazon AWS. Rising from a humble add-on to Amazon’s e-commerce business, AWS has become the undisputed leader in the cloud market. Until recently, Amazon’s primary focus has been on public cloud services and moving workloads from on-premise to the cloud — not the hybrid cloud. However, recent announcements at re:Invent have shown that they are now taking hybrid cloud solutions much more seriously.

Currently in the number two spot, Microsoft has approached the cloud from a different angle. A longstanding provider of enterprise software solutions, Microsoft entered the cloud market with Azure after Amazon had already developed a substantial lead. However, it didn’t take long for them to realize that it was going to be a long time before businesses moved completely to the cloud, so they quickly made the hybrid cloud their primary focus. Driven in large part by the success of Azure, Microsoft has recently become the world’s most valuable company, surpassing Apple. Let’s take a closer look at the differences in the hybrid cloud solutions offered by Amazon AWS and Microsoft Azure.

On-premise solutions

Certainly, a major attribute of the hybrid cloud is its integration with on-premise infrastructure. Here, Microsoft offers a clear hybrid cloud advantage, as its roots are in on-premise enterprise software like Windows Server, SQL Server and Exchange. Over the past few years, Microsoft has been adding hybrid cloud integration features to most of its enterprise server products, including Windows Server 2019, SQL Server 2017 and the StorSimple storage solution. Windows Server 2019 offers integration with Azure Active Directory, Azure Backup, Azure File Sync, Azure Site Recovery and the new Storage Migration Service. In addition, Azure Hybrid Benefit for Windows Server allows you to use on-premise licenses to run Windows Server workloads in the cloud. SQL Server 2017 provides hybrid cloud backup as well as Availability Groups that can span from on-premise to the cloud. The StorSimple storage solution can automatically archive inactive primary data from on-premises to the cloud, and it eliminates the need for separate backup processes by using cloud snapshots that provide off-site data protection. In addition, Microsoft’s Azure Stack can provide Azure-like operations for your private cloud infrastructure.

While AWS began purely as a cloud provider, they have recently begun moving toward embracing hybrid cloud solutions. In the past, you were able to back up Windows Server to AWS, but the hybrid capabilities didn’t go much further. Amazon’s recent partnership with VMware provides a very strong, albeit VMware-specific, hybrid cloud solution. The VMware Cloud on AWS platform enables businesses to extend their existing virtualized VMware stack to Amazon’s public cloud infrastructure. It is essentially a native cloud of VMware vSphere on Amazon AWS. Organizations can use the same software to manage VMware Cloud on AWS as they do their private on-site vSphere infrastructure, freely moving workloads between on-premise and the cloud. This gives VMware access to Amazon’s global footprint, and it gives Amazon a much-needed hybrid cloud solution.

As an answer to Microsoft’s Azure Stack, Amazon has also announced AWS Outposts, which brings native AWS services, infrastructure, and operating models to your on-premise data center or co-location facility. AWS Outposts uses AWS-designed hardware and software from either VMware or Amazon to provide cloud-type services locally. AWS Outposts comes in two flavors: VMware Cloud on AWS Outposts, which allows you to use your existing VMware vSphere and management tools, and an AWS-native version that allows you to use the same Amazon APIs and control plane that you use for the AWS cloud. As you would expect, AWS Outposts can connect to the AWS cloud.

Management Tools

Microsoft’s recently released Windows Admin Center provides both local and cloud management capabilities. Formerly known as Project Honolulu, Windows Admin Center is a locally deployed, browser-based Windows Server management tool that consolidates local and remote server management under a single pane of glass. Available as a free download, it is touted by Microsoft as the next evolution beyond Server Manager. It supports several points of integration with the Azure hybrid cloud, including Azure Active Directory, Azure Backup and Azure Site Recovery.

Traditionally, AWS has offered no standalone on-prem management tools of their own. However, VMware Cloud on AWS does enable management of both local and cloud resources using the vSphere management tools. Likewise, the new AWS Outposts allows management of local resources using either the VMware Cloud on AWS Outposts tools or the AWS cloud management tools.

The Silver Lining

Microsoft and Amazon have definitely approached the hybrid cloud from different places. Microsoft provides tighter built-in integration between their on-premise enterprise solutions and the Azure cloud. While Amazon has a head start in pure cloud services, their push into the hybrid cloud is fairly new. That said, the Amazon and VMware partnership with VMware Cloud on AWS provides a strong hybrid offering for organizations that primarily use VMware.

The post Hybrid Cloud: Microsoft Azure vs Amazon AWS appeared first on Petri.

Computers could soon run cold, no heat generated

The content below is taken from the original ( Computers could soon run cold, no heat generated), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s pretty much just simple energy loss that causes heat build-up in electronics. That ostensibly innocuous warming up, though, causes a two-fold problem:

Firstly, the loss of energy, manifested as heat, reduces the machine’s computational power — much of the purposefully created, much-needed power disappears into thin air instead of crunching numbers. And secondly, as data center managers know, to add insult to injury, it costs money to remove all that waste heat.

For both of those reasons (and some others, such as ecological concerns and equipment longevity—the hardware breaks down faster at higher temperatures), there’s an increasing effort underway to build computers in such a way that heat is eliminated — completely. Transistors, superconductors, and chip design are three areas where major conceptual breakthroughs were announced in 2018. They’re significant developments, and consequently it might not be too long before we see the ultimate in efficiency: the cold-running computer.

To read this article in full, please click here

How to tell if Windows 10 license is OEM, Retail or Volume (MAK/KMS)

The content below is taken from the original ( How to tell if Windows 10 license is OEM, Retail or Volume (MAK/KMS)), to continue reading please visit the site. Remember to respect the Author & Copyright.

We all hear about Windows keys all the time. This Product Key activates Windows on your computer so you can use it without any limitation. There are multiple places where you can buy a Windows key. It can […]
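
For reference, the built-in Software Licensing Management Tool (slmgr.vbs) reports the license channel; a minimal sketch:

:: Displays partial license info, including the channel (Retail, OEM, or Volume MAK/KMS)
slmgr /dli

:: Displays more detailed license and activation information
slmgr /dlv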

This post How to tell if Windows 10 license is OEM, Retail or Volume (MAK/KMS) is from TheWindowsClub.com.

How to patch Windows EC2 instances in private subnets Using AWS Systems Manager

The content below is taken from the original ( How to patch Windows EC2 instances in private subnets Using AWS Systems Manager), to continue reading please visit the site. Remember to respect the Author & Copyright.

Patching Windows instances in private subnets can be challenging because those Amazon EC2 instances have no internet connectivity. In this blog post, we explain how to use AWS Systems Manager and Windows Server Update Services (WSUS) to keep those instances updated. We’ll create a new VPC with the proper endpoints, security groups, and network access control lists (ACLs), so the instances in the private subnet will not have connectivity to the internet, but at the same time they will stay up to date with software updates.

Setup

Creating VPC and endpoints

For this exercise we are going to create a new VPC using the Amazon Virtual Private Cloud (Amazon VPC) Launch VPC Wizard. However, before we create the VPC we need to allocate an Elastic IP (EIP) address for the NAT gateway that will be provisioned as part of the VPC creation.

Open the AWS Management Console and navigate to the Amazon VPC console. Choose Elastic IPs in the left navigation pane and allocate a new EIP address.

Next, create the VPC by choosing the VPC with public and private subnets option. Use the default settings to create the VPC, and choose the Elastic IP allocation ID for the EIP that was previously reserved for the NAT gateway.

For more information, see the VPC creation procedure in the documentation. To create the VPC using AWS CloudFormation, see this VPC template.

After the VPC is created, we need to add VPC endpoints for Systems Manager to the private subnet. VPC endpoints enable you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network. See the documentation for setting up VPC endpoints for Systems Manager. In the next steps we will create the following endpoints:

  • ssm – the endpoint for the Systems Manager APIs
  • ec2messages – the endpoint for the Run Command messaging service
  • ec2 – the endpoint used to enumerate attached Amazon EBS volumes
  • ssmmessages – the endpoint for the Session Manager messaging service
  • s3 – the endpoint for Amazon S3, used for logs and documents, and to update the Systems Manager agent

Use this endpoint for the AWS Systems Manager service: com.amazonaws.region.ssm.

Use this endpoint to make calls from SSM Agent to the Systems Manager service: com.amazonaws.region.ec2messages.

If you’re using Systems Manager to create VSS-enabled snapshots, you need to ensure that you have an endpoint to the EC2 service. If you haven’t defined the EC2 endpoint, a call to enumerate attached EBS volumes fails, which causes the Systems Manager command to fail.

This is the endpoint to the EC2 service: com.amazonaws.region.ec2.

The next endpoint is required only if you are connecting to your instances through a secure data channel using Session Manager.

This is the endpoint to the Session Manager messaging service: com.amazonaws.region.ssmmessages.

Finally, create the Amazon S3 gateway endpoint on the VPC. Systems Manager uses this endpoint to upload Amazon S3 output logs, and to update SSM Agent.
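
If you prefer to script this step, the endpoints can also be created with the AWS CLI; a minimal sketch, using the us-west-2 Region from this post’s examples and hypothetical VPC, subnet, security group, and route table IDs:

# Interface endpoints: repeat for ssm, ec2messages, ec2, and ssmmessages
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-west-2.ssm \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0 \
    --private-dns-enabled

# The S3 endpoint is a gateway endpoint and attaches to the private subnet's route table
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.us-west-2.s3 \
    --route-table-ids rtb-0123456789abcdef0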

Security – Creating security groups and network ACLs

Create a security group for the public subnet and another security group for the private subnet. The inbound rules for the public subnet should allow all traffic from the private subnet, plus TCP port 3389 for RDP from a known IP address.

The outbound rules for the public subnet should allow traffic to all destinations.

The inbound rules for the private subnet should allow TCP port 3389 traffic from the public subnet.

This allows the use of a “jump box”/bastion host in the public subnet to connect over Remote Desktop to the Windows instances in the private subnet. This is only required if you want to use Remote Desktop to connect to your instances in the private subnet; customers using AWS Session Manager can skip this step. For more information, see AWS Session Manager. The Windows instance running WSUS in the public subnet could be used for this purpose. The outbound rules for the private subnet should allow all traffic to the public subnet.

For more information on connecting remotely to a Windows instance, see the documentation for connecting to a Windows instance and for authorizing inbound traffic for your Windows instances.
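
For illustration, the RDP rule might be added to the public subnet security group from the AWS CLI like this (hypothetical group ID and source address):

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 3389 \
    --cidr 203.0.113.10/32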

The SSM Agent running on the private subnet needs to connect to Amazon S3 to install the Windows PowerShell module that allows the Agent to scan and patch the instances. For the SSM Agent to be able to connect to Amazon S3, we need to add the S3 prefixes to the private subnet network ACL. The S3 prefixes are Region-specific. To get the S3 prefixes, for example for the us-west-2 AWS Region, we can run the following command from the AWS CLI:

aws ec2 describe-prefix-lists --region us-west-2

Among other things, the output from the CLI command will return the following information for S3:

           "PrefixListName": "com.amazonaws.us-west-2.s3",
            "Cidrs": [
                "54.231.160.0/19",
                "52.218.128.0/17",
                "52.92.32.0/22"
            ],

For more information on S3 prefixes, see the AWS CLI Reference documentation for describe-prefix-lists and the Amazon VPC documentation for gateway VPC endpoints.

Configure HTTPS (port 443) traffic to each of the S3 prefixes on the outbound rules of the private subnet.
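
If you are scripting the setup, each of those outbound entries can be added with the AWS CLI; a minimal sketch with a hypothetical network ACL ID and rule number (repeat for each CIDR returned by describe-prefix-lists):

aws ec2 create-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 \
    --egress \
    --rule-number 110 \
    --protocol tcp \
    --port-range From=443,To=443 \
    --cidr-block 54.231.160.0/19 \
    --rule-action allow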

Install the WSUS server in the public subnet

Install a Windows Server instance in the public subnet and assign the public subnet security group to it. Install and configure a WSUS role to periodically download patch data from Windows Update. Make sure you have at least 150 GB of hard drive space for the WSUS updates folder. For more information on installing and configuring WSUS, see the Microsoft documentation.
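
The Microsoft documentation covers the full procedure; as a minimal PowerShell sketch (the content directory below is a hypothetical choice):

# Install the WSUS role and its management tools
Install-WindowsFeature -Name UpdateServices -IncludeManagementTools

# Run the post-install step to point WSUS at the folder that will hold downloaded updates
& 'C:\Program Files\Update Services\Tools\WsusUtil.exe' postinstall CONTENT_DIR=D:\WSUS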

Install Windows instances in private subnets

Create Windows instances on the private subnets and assign the private subnet security group and the default VPC security group to the instances. It’s very important to assign the default VPC security group to the instances, so that the SSM Agent can communicate with the Systems Manager service. Configure the Windows Update Agent to use the WSUS server on the public subnet. This can be done by editing the registry on the instances or by using a group policy. You can also use State Manager and the new PowerShell DSC to configure the servers. See PowerShell DSC and DSC Registry Resource for more information. Also see the Microsoft documentation on configuring group policy settings for automatic updates and configuring automatic updates using registry settings.

Here is a registry configuration example:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
"WUServer"="http://10.0.0.126:8530"
"WUStatusServer"="http://10.0.0.126:8530"
"AcceptTrustedPublishersCerts"=dword:00000001
"TargetGroup"="Servers"
"TargetGroupEnabled"=dword:00000000
"UpdateServiceUrlAlternate"=""

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
"AUOptions"=dword:00000004
"NoAutoUpdate"=dword:00000000
"UseWUServer"=dword:00000001
"ScheduleInstallDay"=dword:00000000
Keep in mind that the latest WSUS uses HTTP port 8530 or HTTPS port 8531 instead of ports 80 and 443. Also, use the WSUS administration console to make sure that WSUS finished its synchronization with Windows Update before scanning or patching your instances.

Scanning and patching Windows instances in a private subnet

Verifying the SSM Agent is communicating with Systems Manager

The Windows instances in the private subnet should now be visible in Systems Manager. To verify this, open the Systems Manager console, and then navigate to the Managed instances page. The Windows instances should be listed, and their Ping status should be Online.
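
The same check can be run from the AWS CLI; a minimal sketch:

# Lists managed instances with their ping status (should be Online)
aws ssm describe-instance-information \
    --query "InstanceInformationList[*].[InstanceId,PingStatus]"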

Run Command

To scan or patch the Windows instances, we are going to use the AWS-RunPatchBaseline document under AWS Systems Manager Run Command. AWS-RunPatchBaseline applies the default patch baseline for Windows. AWS-DefaultPatchBaseline is the default Windows patch baseline; it includes all critical updates and security updates that have a critical or important severity. WSUS is configured by default to auto-approve the same patch categories and severities. If the default AWS patch baseline is not adequate for your environment, a new patch baseline can be created using AWS Systems Manager Patch Manager. For more information, see Default and Custom Patch Baselines. Make the new patch baseline the default baseline for Windows and configure WSUS auto approval to meet your needs, or use the Patch Group tag to target the new patch baseline to the appropriate instances.

To perform a scan, in the AWS Systems Manager console, on the Run Command page, choose the AWS-RunPatchBaseline document and select the Scan option under Operation. Specify your target, choose whether you want to enable writing to S3, and then run the command. This command will scan for the patches included in the AWS-DefaultPatchBaseline and report compliance. After the command successfully runs, check the results of the scan under Compliance.

To manually patch the Windows instances, run the AWS-RunPatchBaseline document again, but this time choose Install as the operation. This instructs your Windows instances to apply the patches that are missing from the configured patch baseline. The Windows instances go to the WSUS server to download the necessary patches, install them, and then reboot.
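
For reference, the same operations can be started from the AWS CLI; a minimal sketch with a hypothetical instance ID and bucket name (switch Operation to Install to patch instead of scan):

aws ssm send-command \
    --document-name "AWS-RunPatchBaseline" \
    --parameters "Operation=Scan" \
    --instance-ids "i-0123456789abcdef0" \
    --output-s3-bucket-name "my-patch-logs"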

State Manager

To schedule patching to occur on a regular basis, you can use the AWS Systems Manager Maintenance Windows feature or AWS Systems Manager State Manager. If you want complete control over when your instances are rebooted after being patched, we recommend you use Maintenance Windows. If you’re less sensitive to reboots, you can use State Manager and drive patching as a form of desired state configuration. In addition, we just launched a new, easy onboarding experience in the console for patching (look for the Configure patching button in the Patch Manager section of the AWS Systems Manager console). Configure patching makes it easy for customers to set up automated patching.

State Manager can be used for hands-off patching. An association defines the state you want to apply to a set of targets. It includes three components: a document that defines the state, the targets, and a schedule.

An association can be created to, for example, run the AWS-RunPatchBaseline in scan or install mode on a daily basis.

In the AWS Systems Manager console, navigate to the State Manager page, and in the Create association section, search for and then select the AWS-RunPatchBaseline document.

When you choose the Scan option, AWS-RunPatchBaseline determines the patch compliance state of the instance and reports this information back to Patch Manager. Scan does not prompt updates to be installed or instances to be rebooted. Instead, the operation identifies where updates are missing that are approved and applicable to the instance.

When you choose the Install option, AWS-RunPatchBaseline attempts to install the approved and applicable updates that are missing from the instance.

Furthermore, you can specify a cron-style schedule to run your association.
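
If you prefer to script the association, here is a minimal AWS CLI sketch, assuming a hypothetical Patch Group tag value of Servers and a daily scan at 02:00 UTC:

aws ssm create-association \
    --name "AWS-RunPatchBaseline" \
    --parameters "Operation=Scan" \
    --targets "Key=tag:Patch Group,Values=Servers" \
    --schedule-expression "cron(0 2 * * ? *)"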

Systems Manager is not just for patching

Since the SSM Agent is installed on Windows instances, you aren’t limited to just patching. You can take full advantage of Systems Manager and collect tag-based inventory from instances, use Run Command to execute other documents, and so on. See the following blog posts for more information on the many facets of AWS Systems Manager:

We used a private subnet on AWS for this exercise, but the same concept could be used to patch Windows on your private cloud. Systems Manager can manage servers and virtual machines in your hybrid environment. To learn more about AWS Systems Manager Activations and setting up AWS Systems Manager in a hybrid environment, see the following topics in the documentation:

About the Authors

Carlos Santiago is a Sr. Technical Account Manager with more than 23 years of experience in Windows systems management. He is passionate about helping customers move from legacy management systems to the cloud.

Imtiaz (Taz) Sayed is a Principal Technical Account Manager and an engineer at heart. He loves working with customers and enabling them with solutions that accomplish more with less.

Let’s meet in Poland next week

The content below is taken from the original ( Let’s meet in Poland next week), to continue reading please visit the site. Remember to respect the Author & Copyright.

I’m heading back to Europe to run a pitch-off in Wroclaw and Warsaw, Poland. Are you ready?

The Wroclaw event, called In-Ference, is happening on December 17 and you can submit to pitch here. The team will notify you if you have been chosen to pitch. The winner will receive a table at TC Disrupt in San Francisco.

The Warsaw event, here, is on the 19th. You can sign up to pitch here. I’ll notify the folks I’ve chosen to pitch and the winner gets a table as well.

Special thanks to WeWork Labs in Warsaw for supplying some beer and pizza for the event and, as always, special thanks to Dermot Corr and Ahmad Piraiee for putting these things together. See you soon!

Puppet Bolt Agentless Automation for Linux and Windows Server

The content below is taken from the original ( Puppet Bolt Agentless Automation for Linux and Windows Server), to continue reading please visit the site. Remember to respect the Author & Copyright.


Puppet Bolt is an agentless and masterless remote task runner that you can use with your existing PowerShell, Python, and Bash scripts.

Over the last few months on Petri, I’ve been looking at using Puppet to automate Windows Server configuration. While PowerShell Desired State Configuration (DSC) provides similar functionality and is built in to Windows, Puppet is a more mature solution, is widely adopted in enterprise environments, and is cross-platform.

If you missed my articles on using Puppet to manage Windows Server, you can get the links to all seven parts here. There are also some additional articles on setting up open source Puppet and Puppet Enterprise on Red Hat Enterprise Linux here and here on Petri.

Introducing Puppet Bolt

Earlier this month, Puppet Labs announced the availability of Puppet Bolt 1.0, an open source, agentless, cross-platform configuration management solution that aims to make it easier to get started with automation. Puppet Bolt is a remote task runner and supports any language that your nodes can run. You don’t need to know Puppet to work with it. Bolt can run any existing management scripts that you have.

Masterless and Agentless

Unlike Puppet, Bolt uses WinRM (or SSH on Linux) to communicate directly with remote systems, doing away with the need to install agents on managed nodes. Puppet Bolt allows sysadmins to run existing scripts written in Bash, PowerShell, Python, or any language that your nodes can run, and to use more than 5,000 modules from the Puppet Forge.
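
For a flavor of what that looks like in practice, here is a minimal sketch using Bolt 1.x-era flags (hypothetical hosts and credentials; --no-ssl assumes WinRM over HTTP):

# Run an ad-hoc command on a Windows node over WinRM
bolt command run 'Get-Service W32Time' --nodes winrm://web01.example.com \
    --user Administrator --password 'P@ssw0rd' --no-ssl

# Run an existing PowerShell script across several nodes at once
bolt script run ./scripts/cleanup.ps1 \
    --nodes winrm://web01.example.com,winrm://web02.example.com \
    --user Administrator --password 'P@ssw0rd' --no-ssl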

Puppet Bolt is also designed to be serverless (masterless). Puppet doesn’t necessarily require a master server, but it is designed to work best in a client/server architecture. Masterless Puppet also comes with challenges. Without a master server, you need to sync modules to each node, copy any other required files, and trigger a ‘puppet apply’. Plus, sensitive data can’t easily be restricted to exist just where it is needed.

Puppet Bolt brings declarative and imperative configuration management together (Image Credit: PuppetLabs)

Puppet Bolt is designed to get you up and running quickly – no agents, no servers – and solve all the problems with masterless Puppet. Files describing the desired state for each resource on a node (catalogs) contain only the input required. Secrets can be pulled in as needed and modules are copied when catalogs are applied.

Tie Tasks Together with Bolt Plans

Bolt ‘plans’ are used to bring tasks together and are written in the Puppet plan language. Plans look much like Puppet manifests with some additional functionality for running tasks. Bolt can manage complex configurations using classes, parameters, functions, and automatically pull modules from Puppet Forge.

Imperative and Declarative Together

Puppet uses a declarative syntax language (Puppet DSL) for modeling your infrastructure. In other words, rather than specifying a list of technical steps that need to be completed, like in a script, you define what your infrastructure should look like rather than how it should be achieved. Puppet DSL is supported by Bolt Apply, allowing you to add Puppet code to your Bolt plans to enforce a state without a Puppet master server. But because Puppet Bolt is a remote task runner, it can run imperative commands for ad-hoc tasks, giving you the best of both declarative and imperative configuration worlds.

Baseline Configuration without an Agent

Bolt is an important step forward for Puppet because it brings it in line with other popular configuration management solutions, like Ansible and Chef, that don’t need an agent to set up a baseline configuration. Bolt can connect to PuppetDB and Orchestrator, and it is supported for organizations that have licensed Puppet Enterprise.

Watch this space for some hands-on articles on getting started with Puppet Bolt.

The post Puppet Bolt Agentless Automation for Linux and Windows Server appeared first on Petri.

Doom at 25: The FPS that wowed players, gummed up servers, and enraged admins

The content below is taken from the original ( Doom at 25: The FPS that wowed players, gummed up servers, and enraged admins), to continue reading please visit the site. Remember to respect the Author & Copyright.

Who cares about that? Let’s whip out the BFG and blow up some of our coworkers

On December 10, 1993, after a marathon 30-hour coding session, the developers at id Software uploaded the first finished copy of Doom for download, the game that was to redefine the first-person shooter (FPS) genre. Hours later, IT admins wanted id’s guts for garters.…

The first global drone standards have been revealed

The content below is taken from the original ( The first global drone standards have been revealed), to continue reading please visit the site. Remember to respect the Author & Copyright.

As drone use grows, rules and regulations remain in flux and vary among jurisdictions. Last month, for instance, the Federal Aviation Administration granted operators of certain drones approval to fly them in controlled airspace in the US, but the UK…

That night, a forest flew: DroneSeed is planting trees from the air

The content below is taken from the original ( That night, a forest flew: DroneSeed is planting trees from the air), to continue reading please visit the site. Remember to respect the Author & Copyright.

Wildfires are consuming our forests and grasslands faster than we can replace them. It’s a vicious cycle of destruction and inadequate restoration rooted, so to speak, in decades of neglect of the institutions and technologies needed to keep these environments healthy.

DroneSeed is a Seattle-based startup that aims to combat this growing problem with a modern toolkit that scales: drones, artificial intelligence, and biological engineering. And it’s even more complicated than it sounds.

Trees in decline

A bit of background first. The problem of disappearing forests is a complex one, but it boils down to a few major factors: climate change, outdated methods, and shrinking budgets (and as you can imagine, all three are related).

Forest fires are a natural occurrence, of course. And they’re necessary, as you’ve likely read, to sort of clear the deck for new growth to take hold. But climate change, monoculture growth, population increases, lack of controlled burns, and other factors have led to these events taking place not just more often, but more extensively and to more permanent effect.

On average, the U.S. is losing 7 million acres a year. That’s not easy to replace to begin with — and as budgets for the likes of national and state forest upkeep have shrunk continually over the last half century, there have been fewer and fewer resources with which to combat this trend.

The most effective and common reforestation technique for a recently burned woodland is human planters carrying sacks of seedlings and manually selecting and placing them across miles of landscapes. This back-breaking work is rarely done by anyone for more than a year or two, so labor is scarce and turnover is intense.

Even if the labor was available on tap, the trees might not be. Seedlings take time to grow in nurseries and a major wildfire might necessitate the purchase and planting of millions of new trees. It’s impossible for nurseries to anticipate this demand, and the risk associated with growing such numbers on speculation is more than many can afford. One missed guess could put the whole operation underwater.

Meanwhile, if nothing gets planted, invasive weeds move in with a vengeance, claiming huge areas that were once old-growth forests. Lacking the labor and tree inventory to stem this possibility, forest keepers resort to a stopgap measure: use helicopters to drench the area in herbicides to kill weeds, then saturate it with fast-growing cheatgrass or the like. (The alternative to spraying is, again, the manual approach: machetes.)

At least then, in a year, instead of a weedy wasteland, you have a grassy monoculture — not a forest, but it’ll do until the forest gets here.

One final complication: helicopter spraying is a horrendously dangerous profession. These pilots are flying at sub-100-foot elevations, performing high-speed maneuvers so that their sprays reach the very edge of burn zones without crashing head-on into the trees. Some 80 to 100 crashes occur every year in the U.S. alone.

In short, there are more and worse fires and we have fewer resources — and dated ones at that — with which to restore forests after them.

These are facts anyone in forest ecology and logging is familiar with, but perhaps not as well known among technologists. We do tend to stay in areas with cell coverage. But it turns out that a boost from the cloistered knowledge workers of the tech world — specifically those in the Emerald City — may be exactly what the industry and ecosystem require.

Simple idea, complex solution

So what’s the solution to all this? Automation, right?

Automation, especially via robotics, is proverbially suited for jobs that are “dull, dirty, and dangerous.” Restoring a forest is dirty and dangerous to be sure. But dull isn’t quite right. It turns out that the process requires far more intelligence than anyone was willing, it seems, to apply to the problem — with the exception of those planters. That’s changing.

Earlier this year, DroneSeed was awarded the first multi-craft, over-55-pounds unmanned aerial vehicle license ever issued by the FAA. Its custom UAV platforms, equipped with multispectral camera arrays, high-end lidar, six-gallon tanks of herbicide, and proprietary seed dispersal mechanisms, have been hired by several major forest management companies, with government entities eyeing the service as well.

Ryan Warner/DroneSeed

These drones scout a burned area, mapping it down to as high as centimeter accuracy, including objects and plant species, fumigate it efficiently and autonomously, identify where trees would grow best, then deploy painstakingly designed seed-nutrient packages to those locations. It’s cheaper than people, less wasteful and dangerous than helicopters, and smart enough to scale to national forests currently at risk of permanent damage.

I met with the company’s team at their headquarters near Ballard, where complete and half-finished drones sat on top of their cases and the air was thick with capsaicin (we’ll get to that).

The idea for the company began when founder and CEO Grant Canary burned through a few sustainable startup ideas after his last company was acquired, and was told, in his despondency, that he might have to just go plant trees. Canary took his friend’s suggestion literally.

“I started looking into how it’s done today,” he told me. “It’s incredibly outdated. Even at the most sophisticated companies in the world, planters are superheroes that use bags and a shovel to plant trees. They’re being paid to move material over mountainous terrain and be a simple AI and determine where to plant trees where they will grow — microsites. We are now able to do both these functions with drones. This allows those same workers to address much larger areas faster without the caloric wear and tear.”

(Video: Ryan Warner/DroneSeed)

It may not surprise you to hear that investors are not especially hot on forest restoration (I joked that it was a “growth industry,” but really, because of the reasons above, it’s in dire straits).

But investors are interested in automation, machine learning, drones, and especially government contracts. So the pitch took that form. With the money DroneSeed secured, it has built its modestly sized but highly accomplished team and produced the prototype drones with which it has captured several significant contracts before even announcing that it exists.

“We definitely don’t fit the mold or metrics most startups are judged on. The nice thing about not fitting the mold is people double take and then get curious,” Canary said. “Once they see we can actually execute and have been with 3 of the 5 largest timber companies in the U.S. for years, they get excited and really start advocating hard for us.”

The company went through Techstars, and Social Capital helped them get on their feet, with Spero Ventures joining up after the company got some groundwork done.

If things go as DroneSeed hopes, these drones could be deployed all over the world by trained teams, allowing spraying and planting efforts in nurseries and natural forests to take place exponentially faster and more efficiently than they are today. It’s genuine change-the-world-from-your-garage stuff, which is why this article is so long.

Hunter (weed) killers

The job at hand isn’t simple or even straightforward. Every landscape differs from every other, not just in the shape and size of the area to be treated but in the ecology, native species, soil type and acidity, type of fire or logging that cleared it, and so on. So the first and most important task is to gather information.

For this, DroneSeed has a special craft equipped with a sophisticated imaging stack. This first pass is done using waypoints set on satellite imagery.

The information collected at this point is really far more detailed than what’s actually needed. The lidar, for instance, collects spatial information at a resolution much beyond what’s needed to understand the shape of the terrain and major obstacles. It produces a 3D map of the vegetation as well as the terrain, allowing the system to identify stumps, roots, bushes, new trees, erosion, and other important features.

This works hand in hand with the multispectral camera, which collects imagery not just in the visible bands — useful for identifying things — but also in those outside the human range, which allows for in-depth analysis of the soil and plant life.

The resulting map of the area is not just useful for drone navigation, but for the surgical strikes that are necessary to make this kind of drone-based operation worth doing in the first place. No doubt there are researchers who would love to have this data as well.

Ryan Warner/DroneSeed

Now, spraying and planting are very different tasks. The first tends to be done indiscriminately using helicopters, and the second by laborers who burn out after a couple of years — as mentioned above, it’s incredibly difficult work. The challenge in the first case is to improve efficiency and efficacy, while in the second it is to automate something that requires considerable intelligence.

Spraying is in many ways simpler. Identifying invasive plants isn’t easy, exactly, but it can be done with imagery like that which the drones are collecting. Having identified patches of a plant to be eliminated, the drones can calculate a path and expend only as much herbicide as is necessary to kill them, instead of dumping hundreds of gallons indiscriminately on the entire area. It’s cheaper and more environmentally friendly. Naturally, the opposite approach could be used for distributing fertilizer or some other agent.

I’m making it sound easy again. This isn’t a plug-and-play situation — you can’t buy a DJI drone and hit the “weedkiller” option in its control software. A big part of this operation was the creation not only of the drones themselves, but of the infrastructure with which to deploy them.

Conservation convoy

The drones themselves are unique, but not alarmingly so. They’re heavy-duty craft, capable of lifting well over the 57 pounds of payload they carry (the FAA limits them to 115 pounds).

“We buy and gut aircraft, then retrofit them,” Canary explained simply. Their head of hardware would probably like to think there’s a bit more to it than that, but really the problem they’re solving isn’t “make a drone” but “make drones plant trees.” To that end, Canary explained, “the most unique engineering challenge was building a planting module for the drone that functions with the software.” We’ll get to that later.

DroneSeed deploys drones in swarms, which means as many as five drones in the air at once — which in turn means they need two trucks and trailers with their boxes, power supplies, ground stations, and so on. The company’s VP of operations comes from a military background where managing multiple aircraft onsite was part of the job, and she’s brought her rigorous command of multi-aircraft environments to the company.

Ryan Warner/DroneSeed

The drones take off and fly autonomously, but always under direct observation by the crew. If anything goes wrong, they’re there to take over, though of course there are plenty of autonomous behaviors for what to do in case of, say, a lost positioning signal or bird strike.

They fly in patterns calculated ahead of time to be the most efficient, spraying at problem areas when they’re over them, and returning to the ground stations to have power supplies swapped out before returning to the pattern. It’s key to get this process down pat, since efficiency is a major selling point. If a helicopter does it in a day, why shouldn’t a drone swarm? It would be sad if they had to truck the craft back to a hangar and recharge them every hour or two. It also increases logistics costs like gas and lodging if it takes more time and driving.

This means the team involves several people as well as several drones. Qualified pilots and observers are needed, as well as people familiar with the hardware and software who can maintain and troubleshoot on site — usually with no cell signal or other support. Like many other forms of automation, this one brings its own new job opportunities to the table.

AI plays Mother Nature

The actual planting process is deceptively complex.

The idea of loading up a drone with seeds and setting it free on a blasted landscape is easy enough to picture. Hell, it’s been done. There are efforts going back decades to essentially load seeds or seedlings into guns and fire them out into the landscape at speeds high enough to bury them in the dirt: in theory this combines the benefits of manual planting with the scale of carpeting the place with seeds.

But whether it was slapdash placement or the shock of being fired out of a seed gun, this approach never seemed to work.

Forestry researchers have shown the effectiveness of finding the right “microsite” for a seed or seedling; in fact, it’s why manual planting works as well as it does. Trained humans find perfect spots to put seedlings: in the lee of a log; near but not too near the edge of a stream; on the flattest part of a slope, and so on. If you really want a forest to grow, you need optimal placement, perfect conditions, and preventative surgical strikes with pesticides.

Ryan Warner/DroneSeed

Although it’s difficult, it’s also the kind of thing that a machine learning model can become good at. Sorting through messy, complex imagery and finding local minima and maxima is a specialty of today’s ML systems, and the aerial imagery from the drones is rich in relevant data.

The company’s CTO led the creation of an ML model that determines the best locations to put trees at a site — though this task can be highly variable depending on the needs of the forest. A logging company might want a tree every couple of feet, even if that means putting them in sub-optimal conditions — but a few inches to the left or right may make all the difference. On the other hand, national forests may want more sparse deployments or specific species in certain locations to curb erosion or establish sustainable firebreaks.

Once the data has been crunched, the map is loaded into the drones’ hive mind and the convoy goes to the location, where the craft are loaded up with seeds instead of herbicides.

But not just any old seeds! You see, that’s one more wrinkle. If you just throw a sagebrush seed on the ground, even if it’s in the best spot in the world, it could easily be snatched up by an animal, roll or wash down to a nearby crevasse, or simply fail to find the right nutrients in time despite the planter’s best efforts.

That’s why DroneSeed’s Head of Planting and his team have been working on a proprietary seed packet that they were unbelievably reticent to detail.

From what I could gather, they’ve put a ton of work into packaging the seeds into nutrient-packed little pucks held together with a biodegradable fiber. The outside is dusted with capsaicin, the chemical that makes spicy food spicy (and also what makes bear spray do what it does). If they hadn’t told me, I might have guessed, since the workshop area was hazy with it, leading us all to cough and tear up a little. If I were a marmot, I’d learn to avoid these things real fast.

The pucks, or “seed vessels,” can and must be customized for the location and purpose — you have to match the content and acidity of the soil, things like that. DroneSeed will have to make millions of these things, but it doesn’t plan to be the manufacturer.

Finally these pucks are loaded in a special puck-dispenser which, closely coordinating with the drone, spits one out at the exact moment and speed needed to put it within a few centimeters of the microsite.

All these factors should improve the survival rate of seedlings substantially. That means that the company’s methods will not only be more efficient, but more effective. Reforestation is a numbers game played at scale, and even slight improvements — and DroneSeed is promising more than that — are measured in square miles and millions of tons of biomass.

Proof of life

DroneSeed has already signed several big contracts for spraying, and planting is next. Unfortunately, the timing meant they missed this year’s planting season, though by doing a few small sites and showing off the results, they’ll be in pole position for next year.

After demonstrating the effectiveness of the planting technique, the company expects to expand its business substantially. That’s the scaling part — again, not easy, but easier than hiring another couple thousand planters every year.

Ryan Warner/DroneSeed

Ideally the hardware can be assigned to local teams that do the on-site work, producing loci of activity around major forests from which jobs can be deployed at large or small scales. A set of five or six drones does the work of one helicopter, roughly speaking, so depending on the volume requested by a company or forestry organization, you may need dozens on demand.

That’s all yet to be explored, but DroneSeed is confident that the industry will see the writing on the wall when it comes to the old methods, and will identify DroneSeed as a solution that fits the future.

If it sounds like I’m cheerleading for this company, that’s because I am. It’s not often in the world of tech startups that you find a group of people not just attempting to solve a serious problem — it’s common enough to find companies hitting this or that issue — but who have spent the time, gathered the expertise, and really done the dirty, boots-on-the-ground work that needs to happen so it goes from great idea to real company.

That’s what I felt was the case with DroneSeed, and here’s hoping their work pays off — for their sake, sure, but mainly for ours.

E Ink debuts a new electronic drawing technology

The content below is taken from the original ( E Ink debuts a new electronic drawing technology), to continue reading please visit the site. Remember to respect the Author & Copyright.

E Ink — a name synonymous with e-reader screens — just debuted a new writing display technology called JustWrite. The tech offers the company’s familiar monochrome aesthetic — albeit in negative this time, with white on black.

The key here, as with most of E Ink’s technology, is minimal power consumption and low cost, the latter of which it was able to accomplish by dumping the TFT (thin-film transistor) layer. Instead, it’s a thin roll that could be used to paper surfaces like conference rooms and schools, in order to let people write on the walls using a stylus with practically no latency.

“The JustWrite film features one of E Ink’s proprietary electronic inks and offers similar benefits as E Ink’s other product lines: a paper-like experience with a good contrast and reflective display without a backlight,” the company writes. “The JustWrite film is an all plastic display, making it extremely durable and lightweight, with the ability to be affixed and removed easily, enabling writing surfaces in a variety of locations.”

The technology could go head to head with the likes of Sony and reMarkable on drawing tablets, but E Ink appears to be more interested in embedding it in non-traditional surfaces. No word yet on how or when it will come to market, though the company is showing it off in person for the first time this week at an event in Tokyo.

Stove Alarm Keeps The Kitchen Safe

The content below is taken from the original ( Stove Alarm Keeps The Kitchen Safe), to continue reading please visit the site. Remember to respect the Author & Copyright.

Gas cooktops have several benefits: they deliver heat near-instantly and are highly responsive when changing temperature. However, there are risks involved, both from open flames and from the potential of leaving the gas on with the burner unlit. After a couple of close calls, [Bob] developed a simple solution to this safety issue.

The round PCB sits neatly behind the knobs, affixed with double-sided tape.

Most commercial products in this space work by detecting the heat from the cooktop; however, this does not help in the case of an unlit burner being left on. [Bob]’s solution was to develop a small round PCB that sits behind the oven knobs. Magnets are placed on the knobs, which hold a reed switch open when the knob is in the off position. When the knob is turned on, the reed switch closes, powering a small microcontroller which beeps at regular intervals to indicate the burner is on.

It’s a tidy solution to a common problem, which could help many people – especially the elderly or the forgetful. It integrates neatly into existing cooktops without requiring major modification, and [Bob] has made the plans available if you wish to roll your own.

On the other end of the scale, you might want an alarm on your freezer, too.

Salesforce wants to deliver more automated field service using IoT data

The content below is taken from the original ( Salesforce wants to deliver more automated field service using IoT data), to continue reading please visit the site. Remember to respect the Author & Copyright.

Salesforce has been talking about the Internet of Things for some time as a way to empower field service workers. Today, the company announced Field Service Lightning, a new component designed to deliver automated IoT data to service technicians in the field on their mobile devices.

Once you connect sensors in the field to Service Cloud, you can make this information available in an automated fashion to human customer service agents and pull in other data about the customer from Salesforce’s CRM system to give the CSR a more complete picture of the customer.

“Drawing on IoT signals surfaced in the Service Cloud console, agents can gauge whether device failure is imminent, quickly determine the source of the problem (often before the customer is even aware a problem exists) and dispatch the right mobile worker with the right skill set,” Salesforce’s SVP and GM for Salesforce Field Service Lightning Paolo Bergamo wrote in a blog post introducing the new feature.

The field service industry has been talking for years about using IoT data from the field to deliver more proactive service and automate the customer service and repair process. That’s precisely what this new feature is designed to do. Let’s say you have a “smart home” with a heating and cooling system that can transmit data to the company that installed your equipment. With a system like this in place, the sensors could tell your HVAC dealer that a part is ready to break down and automatically start a repair process (that would presumably include calling the customer to tell them about it). When a CSR determines a repair visit is required, the repair technician would receive all the details on their smart phone.

Customer Service Console view. Gif: Salesforce

It also could provide a smoother experience because the repair technician can prepare before he or she leaves for the visit, with the right equipment and parts for the job and a better understanding of what needs to be done before arriving at the customer location. This should theoretically lead to more efficient service calls.

All of this is in line with a vision the field service industry has been talking about for some time that you could sell a subscription to a device like an air conditioning system instead of the device itself. This would mean that the dealer would be responsible for keeping it up and running and having access to data like this could help that vision to become closer to reality.

In reality, most companies are probably not ready to implement a system like this, and most equipment in the field has not been fitted with sensors to deliver this information to the Service Cloud. Still, companies like Salesforce, ServiceNow and ServiceMax (owned by GE) want to release products like this for early adopters and to have something in place as more companies look to put smarter systems in place in the field.

IoT in Action: 4 innovations that are revolutionizing IoT

The content below is taken from the original ( IoT in Action: 4 innovations that are revolutionizing IoT), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Internet of Things (IoT) is reshaping every industry from manufacturing to medicine, and opportunities to transform business are nearly limitless. And while IoT is a complicated endeavor requiring multiple partners, skillsets, and technologies, new innovations are making projects easier to deploy, more secure, and more intelligent than ever.

Below I’ve called out four innovations that are revolutionizing the IoT industry. To learn more about how to take advantage of these innovations, be sure to register for our upcoming IoT in Action Virtual Bootcamp.

IoT in Action virtual event details

1. Artificial intelligence (AI) and cognitive capabilities

Cognitive services and AI used to come with a high price tag. But times have changed, and these capabilities are becoming increasingly accessible.

IoT Hub and Cognitive Services enable you to tailor IoT solutions with advanced intelligence without a team of data scientists. Not only do AI and Cognitive Services make it easier to infuse IoT solutions with capabilities such as image recognition, speech analytics, and intelligent recommendations, but they also help companies act on the data being gathered and realize the true value of IoT. Scenarios are virtually limitless. Companies like UBER are using visual identity verification to increase platform security, and Spektacom is making cricket better with its AI-infused sticker for cricket bats that can deliver insights around batting style.

2. Real-time analytics at the intelligent edge

You need data analytics to make your IoT solution complete, but all the data you need is not where you want it to be—it’s at the edge. One solution is to reproduce a cloud environment locally, but this can be costly and you may end up having to support two solutions, not one.

Now you can extend cloud intelligence and analytics to the edge. Azure IoT Edge optimizes performance between the edge and cloud, reducing latency, so you get real-time data. This secure solution enables edge devices to operate reliably even when they have intermittent cloud connectivity, while also ensuring that only the data you need gets sent to the cloud. And by combining data from the cloud and data from the edge, you get the best of both worlds.
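As an illustration of sending only the data you need, here is a minimal Python sketch of an edge module that filters telemetry locally and forwards only anomalous readings to the cloud. It uses the azure-iot-device SDK; the input and output names and the threshold are assumptions made for the example:

    import json

    from azure.iot.device import IoTHubModuleClient, Message

    TEMP_THRESHOLD = 75.0  # hypothetical alert threshold

    # Inside an IoT Edge container, connection details come from the environment.
    client = IoTHubModuleClient.create_from_edge_environment()

    while True:
        # Block until the next telemetry message arrives on the module input.
        msg = client.receive_message_on_input("telemetry")
        reading = json.loads(msg.data)
        if reading.get("temperature", 0.0) > TEMP_THRESHOLD:
            # Only anomalies leave the device; routine readings stay at the edge.
            client.send_message_to_output(Message(json.dumps(reading)), "upstream")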

3. More secure IoT devices

IoT security continues to evolve, which means it’s never been easier to lock down your IoT solutions. At Microsoft, we continue to build uncompromising security into every product we make. We recently released Azure Sphere, an end-to-end solution for creating highly secure, connected devices using a new class of microcontrollers (MCUs). Azure Sphere powers edge devices by combining three key components: Azure Sphere certified MCUs, the Azure Sphere OS, and the Azure Sphere Security Service.

4. Provisioning IoT quickly at scale

Provisioning IoT manually is time-intensive and can quickly become a showstopper, especially when you’ve got hundreds, thousands, or even millions of devices to configure. Even if manual provisioning is possible now, building in the capability to quickly and securely provision future devices is critical.

Azure IoT Hub features a Device Provisioning Service (DPS) that enables remote provisioning without human intervention. Azure DPS provides the infrastructure needed to provision millions of devices in a secure and scalable way. DPS extends trust from the silicon to the cloud, where it creates registries to enable managed identity services including location, mapping, aging, and retirement. It works in a variety of scenarios, from automatic configuration based on solution-specific needs, to load balancing across multiple hubs, to connecting devices based on geo-location.
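In practice, zero-touch provisioning comes down to a few lines of code on the device. The following is a minimal Python sketch of registering through DPS with a symmetric key, using the azure-iot-device SDK; the ID scope, registration ID, and key are placeholders issued when you create an enrollment:

    from azure.iot.device import ProvisioningDeviceClient

    PROVISIONING_HOST = "global.azure-devices-provisioning.net"
    ID_SCOPE = "0ne00000000"        # hypothetical, from your DPS instance
    REGISTRATION_ID = "device-001"  # hypothetical device identity
    SYMMETRIC_KEY = "<base64-key>"  # hypothetical enrollment key

    client = ProvisioningDeviceClient.create_from_symmetric_key(
        provisioning_host=PROVISIONING_HOST,
        registration_id=REGISTRATION_ID,
        id_scope=ID_SCOPE,
        symmetric_key=SYMMETRIC_KEY,
    )

    # DPS assigns the device to a hub (e.g. by geo-location or load) and
    # returns the assignment, after which the device can connect normally.
    result = client.register()
    print(result.registration_state.assigned_hub)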

Register for the IoT in Action Virtual Bootcamp

To learn more about how you can take advantage of these innovations, be sure to register for an IoT in Action Virtual Bootcamp. Whether you are an engineer, software architect, or practice owner, this virtual bootcamp will give you a clear understanding of IoT from device to cloud and accelerate the development of an IoT solution for your business.

This event will help you get hands-on with the latest in IoT devices and cloud services, including secure MCUs, IoT OSes, and advanced application services. You will also receive trusted guidance and a singular ecosystem view, supporting you in the design of secure IoT solutions that add real-world business value and create exciting new customer experiences. Join us to establish a leadership position in the IoT ecosystem by creating new experiences and revenue streams while optimizing bottom-line performance.

Register for an IoT in Action Virtual Bootcamp in your time zone.

Interested in attending one of our in-person IoT in Action events? Register for a free event coming to a city near you.

Lime will take on London’s Boris Bikes with e-bike launch

The content below is taken from the original ( Lime will take on London’s Boris Bikes with e-bike launch), to continue reading please visit the site. Remember to respect the Author & Copyright.

US dockless e-scooter and e-bike service Lime is bringing its electric-assisted bicycles to London, following their launch in Milton Keynes just over a week earlier. A fleet of 1,000 bright green e-bikes — equipped with a 250-watt motor boasting a m…

New ProfileUnity and FlexApp v6.8 from Liquidware Delivers on Promise to Host Application Layers in the Cloud Natively

The content below is taken from the original ( New ProfileUnity and FlexApp v6.8 from Liquidware Delivers on Promise to Host Application Layers in the Cloud Natively), to continue reading please visit the site. Remember to respect the Author & Copyright.

Liquidware, the leader in adaptive workspace management, today announced the release of Liquidware ProfileUnity and FlexApp v6.8 with new… Read more at VMblog.com.

Connect Your Electric Heater To The Internet (Easily and Cheaply)!

The content below is taken from the original ( Connect Your Electric Heater To The Internet (Easily and Cheaply)!), to continue reading please visit the site. Remember to respect the Author & Copyright.

Winter has arrived, and by now most households should have moved on from incandescent bulbs, so we can’t heat ourselves that way. Avoiding the chill led [edent] to invest in an electric blanket. This isn’t any ordinary electric blanket — no, this is one connected to the Internet, powered by Alexa.

This is a project for [edent] and his wife, which complicates matters slightly due to the need for dual heating zones. Yes, dual-zone electric heating blankets exist (as do two electric blankets and sewing machines), but the real problem was finding a blanket that turned on when it was plugged in. Who would have thought a simple resistive heating element could be so complicated?

For the Internet-facing side of this project, [edent] is using a Meross smart plug and a Sonoff S20 smart plug. These are set up to work with Alexa and configured as an ‘electric blanket’ group. Simply saying, “Alexa, switch on the electric blanket” turns on the bed.
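As an aside, the Sonoff S20 is also a popular target for reflashing with the open-source Tasmota firmware, which exposes a simple local HTTP interface; that route would let a script toggle the blanket without going through Alexa at all. A minimal sketch, assuming a Tasmota-flashed plug at a known address (the stock Sonoff and Meross firmware talk to their own cloud services instead):

    import requests

    PLUG_IP = "192.168.1.50"  # hypothetical address of the Tasmota-flashed plug

    def set_plug(state):
        # Tasmota accepts commands over HTTP, e.g. "Power On" / "Power Off".
        resp = requests.get(f"http://{PLUG_IP}/cm", params={"cmnd": f"Power {state}"})
        return resp.json()["POWER"]  # the resulting state, "ON" or "OFF"

    set_plug("On")   # warm up the bed
    set_plug("Off")  # and switch it back off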

There are a few problems in need of future improvement. Alexa doesn’t recognize voices, so saying ‘Turn on my side of the bed’ doesn’t work. The blanket also shuts off after an hour, but the plug sockets stay live. There’s also the possibility that hackers could break into this Alexa and burn down the house, but this is a device on the Internet; that sort of stuff virtually never happens.

You can check out the demo of the electric bed below.

Amazon Thinks ARM is Bigger than your Phone

The content below is taken from the original ( Amazon Thinks ARM is Bigger than your Phone), to continue reading please visit the site. Remember to respect the Author & Copyright.

As far as computer architectures go, ARM doesn’t have anything to be ashamed of. Since nearly every mobile device on the planet is powered by some member of the reduced instruction set computer (RISC) family, there’s an excellent chance these words are currently making their way to your eyes courtesy of an ARM chip. A userbase of several billion is certainly nothing to sneeze at, and that’s before we even take into account the myriad of other devices which ARM processors find their way into: from kid’s toys to smart TVs.

ARM is also the de facto architecture for the single-board computers which have dominated the hacking and making scene for the last several years. Raspberry Pi, BeagleBone, ODROID, Tinker Board, etc. If it’s a small computer that runs Linux or Android, it will almost certainly be powered by some ARM variant; another market all but completely dominated.

It would be fair to say that small devices, from set-top boxes down to smartwatches, are today the domain of ARM processors. But if we’re talking about what one might consider “traditional” computers, such as desktops, laptops, or servers, ARM is essentially a non-starter. There are a handful of ARM Chromebooks on the market, but effectively everything else is running on x86 processors built by Intel or AMD. You can’t walk into a store and purchase an ARM desktop, and beyond the hackers who are using Raspberry Pis to host their personal sites, ARM servers are an exceptional rarity.

Or at least, they were until very recently. At the re:Invent 2018 conference, Amazon announced the immediate availability of their own internally developed ARM servers for their Amazon Web Services (AWS) customers. For many developers this will be the first time they’ve written code for a non-x86 processor, and while some growing pains are to be expected, the lower cost of the ARM instances compared to the standard x86 options seems likely to drive adoption. Will this be the push ARM needs to finally break into the server and potentially even desktop markets? Let’s take a look at what ARM is up against.

A Double Edged Sword

At the risk of oversimplifying the situation, ARM has become the go-to for small devices due to the inherent efficiency of the architecture. ARM chips consume much less energy than their x86 peers and in turn don’t get nearly as hot. This is a perfect combination for small, battery-powered devices, as the increased energy efficiency not only allows for longer run times, but means the processor usually doesn’t need anything more than a passive heat spreader to keep cool (if even that).

But the efficiency of ARM processors isn’t as compelling in the performance-driven world of desktop and server computing. In these applications, energy consumption has generally not been a deciding factor when selecting hardware. Modern desktop processors can consume nearly 100 watts under load, and server chips even more. Nobody denies this is an enormous amount of power to consume, but it’s largely seen as a necessary evil.

That being said, one might wonder why ARM laptops haven’t become popular at this point. Unfortunately there’s another factor at work: software compiled for x86 won’t run on ARM hardware. Under Linux this isn’t much of a problem; there are several distributions which have been adapted and recompiled for ARM processors, thanks in no small part to the popularity of devices like the Raspberry Pi. But under Windows, the situation is very different. While Microsoft introduced ARM-compatible versions of Windows a few years back, it doesn’t change the fact that the decades of Windows software that’s already on the market can’t be used without resorting to emulation.

The AWS Opportunity

Issues with legacy software support may be keeping ARM out of the general-purpose computing market, but it wasn’t a problem for mobile operating systems like Android or iOS, which launched with ARM processors and had no back catalog of software to worry about. This software “clean slate” might not be possible within the Windows-dominated desktop and laptop markets, but that’s not the case for AWS customers.

AWS Operating System Usage

For developers who have been using Linux AWS instances (which the vast majority are), the ARM environment will not be significantly different from what they’re used to. In fact, if their software is written in an interpreted language like Python, they should be able to move their existing code over to the ARM servers as-is. If it’s written in C or another compiled language, it will need to be rebuilt from source due to the architecture change, but in most cases will require little to no modification.
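A trivial way to see this portability: the same interpreted script runs unchanged on both architectures, and only the reported machine type differs.

    import platform

    # Prints "x86_64" on a standard EC2 instance and "aarch64" on an
    # ARM-based A1 instance; the script itself needs no changes to move.
    print(platform.machine())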

In short, moving AWS customers from x86 to ARM won’t cause the same “culture shock” as it would on other platforms. In exchange for lower operating costs, customers will likely be willing to make whatever minimal changes may be required when moving from x86 Linux to ARM Linux.

Of course, from Amazon’s perspective getting more customers onto ARM servers means reduced energy consumption in their data centers. It’s a win-win for everyone involved, and provides a fantastic opportunity to show ARM has earned its place in the server market.

Amazon’s Own Hardware

While there are already a few ARM servers on the market, Amazon decided to go with their own in-house custom silicon for their AWS implementation. Called Graviton, the system is the end result of Amazon’s purchase of Israeli chip manufacturer Annapurna Labs in 2015. It utilizes the now several-year-old Cortex-A72 microarchitecture, and features 16 cores clocked at 2.3 GHz.
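For developers who want to kick the tires, launching a Graviton-backed instance works like launching any other EC2 type. A minimal boto3 sketch, with a placeholder ID standing in for an arm64 Amazon Linux 2 AMI in your region:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical arm64 Amazon Linux 2 AMI
        InstanceType="a1.medium",         # smallest of the Graviton-backed A1 family
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])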

Early benchmarks for Graviton indicate that performance is about on par with Intel’s low-end server processors, and that it specifically struggles with single-threaded tasks. This isn’t terribly surprising considering the age of the Cortex-A72 and the fact that Amazon is careful not to promise there are actually any performance gains to be had when switching to ARM. Of course, benchmarks can be misleading, and we’ll need to wait until more developers get experience with Graviton to see what its real-world performance is like.

It should also be said that the first generation of Graviton is really just to test the waters. Subsequent generations of the hardware, with a potential upgrade to a more modern ARM microarchitecture, would have a huge impact on performance. For example the Cortex-A76 is designed for laptop applications and offers over twice the performance of the older A72.

One Small Step

With Amazon’s introduction of ARM servers, a whole new generation of developers are going to get first-hand experience with a non-x86 architecture in arguably the most seamless way possible. Nuances will be learned, limits will be pushed, and shortcomings will be overcome. In time, these brand-new ARM developers may want a laptop or even a desktop machine with the same architecture as the platform they’re deploying their code on. With any luck, that’s when the free market will kick in and start offering them at competitive prices.

Or not. It could be that the hit in performance when switching to ARM isn’t enough to make up for the price reduction, and AWS customers don’t bite. No matter which way it goes, it’s going to be interesting to watch. The results of Amazon’s ARM experiment could end up determining if the ubiquitous chip that powers our myriad of gadgets stays firmly in our pockets or ascends into the cloud.

Can a commute be beautiful? These colorful rendered maps show us they can

The content below is taken from the original ( Can a commute be beautiful? These colorful rendered maps show us they can), to continue reading please visit the site. Remember to respect the Author & Copyright.

Everyone can relate to daily commutes. Whether it’s fifteen minutes or an hour, the infrastructure of a city dictates how transportation affects our daily lives. Through the use of data visualization, Craig Taylor, Data Visualization Design Manager at Ito World, uses color and form to portray commute distances in an artistically beautiful way.

Coral Cities: European Cities © Craig Taylor

In a project that depicts city infrastructure in a whole new light, Taylor blends art, urban planning, and science together to create beautifully rendered images of street networks in 40 major cities. The project, appropriately called Coral Cities, showcases how far one can travel by car in 30 minutes from the center of major cities across the globe. Growing from the inside out, the visual depiction of city infrastructure resembles the form of growing coral.

Early idea of plinth renders © Craig Taylor

Depending on the geographical features of the city, each “Coral City” is unique to its region. According…

CloudJumper Announces Cloud Workspace Management Suite for VMware Cloud on AWS

The content below is taken from the original ( CloudJumper Announces Cloud Workspace Management Suite for VMware Cloud on AWS), to continue reading please visit the site. Remember to respect the Author & Copyright.

CloudJumper, a leading provider of Virtual Desktop Infrastructure (VDI), Workspace as a Service (WaaS) and Desktop as a Service (DaaS) solutions,… Read more at VMblog.com.

What You Need to Know About Cloud Backup and Disaster Recovery

The content below is taken from the original ( What You Need to Know About Cloud Backup and Disaster Recovery), to continue reading please visit the site. Remember to respect the Author & Copyright.


Backup is the foundation of every business’ disaster recovery (DR) strategy and today many organizations are choosing to back up to the cloud. There are a lot of good reasons for using the cloud as a backup target.

The cloud has global data access that can service a geographically dispersed business. Cloud storage tends to be less expensive than local storage – and this can be an important consideration as most businesses are experiencing very rapid data growth. Cloud storage can also fulfill the requirement for offsite storage, which can be used to recover your systems in the event of a site failure.

While cloud backup has a number of advantages, there are also several considerations that you need to be aware of before moving your backups to the cloud.

First, backing up to the cloud means that you are no longer in control of the backup media. Instead, control over your backups is in the hands of your cloud provider. Next, cloud backups have the potential to increase your backup window. Backing up to the cloud means the backup data has to be transferred across the Internet, which can introduce both security and latency issues. The backup data itself needs to be encrypted as it is transferred to the cloud to prevent any unauthorized access. Backup and restore times to the cloud are also impacted by network latency, which can extend your Recovery Time Objectives (RTOs).

To offset the network latency issue, some backup products offer features like data compression, deduplication and WAN acceleration that can vastly reduce the amount of data transferred across the Internet and speed up the backup process. One way many businesses have dealt with this problem is with staged backups, where an initial backup happens locally and a later process copies that backup to the cloud.

This strategy offers several advantages as it facilitates fast backup and restore from local storage as well as providing an offsite copy of the data for DR. It can also insulate your application backup from the latency that might be incurred with backing up directly to the cloud.
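To make the staged approach concrete, the sketch below compresses a directory locally, then ships the archive to object storage as the offsite copy. It is a minimal Python sketch that assumes S3 as the cloud target; the bucket name and paths are placeholders, and boto3 transfers over HTTPS by default, covering encryption in transit:

    import tarfile

    import boto3

    ARCHIVE = "/backups/nightly.tar.gz"

    # Stage 1: fast local backup, compressed to shrink the later transfer.
    with tarfile.open(ARCHIVE, "w:gz") as tar:
        tar.add("/var/www", arcname="www")

    # Stage 2: copy the local archive offsite, encrypted at rest in the cloud.
    s3 = boto3.client("s3")
    s3.upload_file(
        ARCHIVE,
        "example-offsite-backups",     # hypothetical bucket name
        "nightly/2018-12-03.tar.gz",
        ExtraArgs={"ServerSideEncryption": "AES256"},
    )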

Cloud disaster recovery (DR) has similar considerations. DR in the cloud and Disaster Recovery-as-a-Service (DRaaS) can be especially useful for the many small and medium-sized businesses (SMBs) that otherwise might not be able to implement a DR plan.

Making a DR plan can be resource-intensive and complicated, which can put it out of reach for SMBs that are struggling to keep up with day-to-day demands. Utilizing DRaaS can take away a lot of the heavy lifting required to implement a DR strategy, bringing it within reach of many SMBs. However, one thing that often surprises organizations about most DRaaS services is that the recovery target is usually the cloud.

That means that after a DR failover has happened, all of the business-critical VMs and workloads will be restored, but they will be restored into the cloud, which can change an application’s characteristics and Service Level Agreements (SLAs). Certain DRaaS features can make it easier for you to evaluate the impact of failover to the cloud. The ability to perform non-impactful full site failovers, as well as selective partial failovers and failbacks, can allow you to ascertain your RTOs and Recovery Point Objectives (RPOs) as well as evaluate the performance of your applications following a failover.

Like cloud backup, network latency can affect the replication interval that you can support, which can impact your RTOs and RPOs. This makes networking technologies like data compression and WAN acceleration important. The ability to orchestrate your recovery process is also vital because most applications have built-in dependencies. For instance, your domain controllers and DNS services need to be brought online before the applications and databases that require them. Orchestration enables you to control the recovery order of your VMs and services. Like cloud backups, the ability to provide end-to-end encryption is also important in order to secure your private and sensitive information.
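To illustrate what orchestration amounts to, here is a minimal sketch of dependency-ordered recovery. The tier list and the start_vm and wait_until_running helpers are hypothetical stand-ins for whatever API your DRaaS product exposes:

    # Each tier must be fully online before the next one starts.
    RECOVERY_TIERS = [
        ["dc-01", "dc-02"],              # domain controllers / DNS first
        ["sql-01"],                      # databases next
        ["app-01", "app-02", "web-01"],  # application and web tier last
    ]

    def orchestrated_failover(start_vm, wait_until_running):
        for tier in RECOVERY_TIERS:
            for vm in tier:
                start_vm(vm)            # kick off the whole tier
            for vm in tier:
                wait_until_running(vm)  # gate the next tier on this one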

As the old saying goes, the devil is in the details. The cloud can be a great asset for backups and DR, but before jumping in it’s important to be aware of the details you need to know for a successful implementation.

The post What You Need to Know About Cloud Backup and Disaster Recovery appeared first on Petri.