The first text message was sent 25 years ago

The content below is taken from the original ( The first text message was sent 25 years ago), to continue reading please visit the site. Remember to respect the Author & Copyright.

Be prepared to feel ancient — the first text message is 25 years old. Engineer Neil Papworth sent the first SMS on December 3rd, 1992, when he wrote "Merry Christmas" on a computer and sent it to the cellphone of Vodafone director Richard Jarvis. It…

Exploring the BBC Micro:Bit Software Stack

The content below is taken from the original ( Exploring the BBC Micro:Bit Software Stack), to continue reading please visit the site. Remember to respect the Author & Copyright.

The BBC micro:bit has been with us for about eighteen months now, and while the little ARM-based board has made a name for itself in its intended market of education, we haven’t seen as much of it in our community as we might have expected.

If you or a youngster in your life have a micro:bit, you may have created code for it using one of the several web-based IDEs, a graphical programming system, TypeScript, or MicroPython. But these high-level languages are only part of the board’s software stack, as [Matt Warren] shows us with his detailed examination of its various layers.
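
To give a flavour of the MicroPython route, here is a minimal program; the calls come from the standard microbit module, but the specific example is an illustration rather than anything taken from [Matt Warren]'s write-up.

# Minimal MicroPython program for the BBC micro:bit.
# Scrolls a greeting, then shows a heart while button A is held.
from microbit import display, button_a, Image, sleep

display.scroll("Hello")

while True:
    if button_a.is_pressed():
        display.show(Image.HEART)
    else:
        display.clear()
    sleep(100)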

The top layer of the micro:bit sandwich is of course your code. This is turned into a hex file by the web-based IDE’s compiler, which you then place on your device. Interestingly only the Microsoft TypeScript IDE compiles the TypeScript into native code, while the others bundle your code up with an interpreter.

Below that is the micro:bit’s hardware abstraction layer, and below that in turn is ARM’s Mbed OS layer, because the micro:bit is at heart simply another Mbed board. [Matt] goes into some detail about how the device’s memory map accommodates all these components, something essential given that there is only a paltry 16 kB of RAM in hand.

You might wish to program a micro:bit somewhat closer to the metal with the Mbed toolchain, but even if that is the case it’s still of interest to read a dissection of its official stack. Meanwhile, have a look at our review of the board, from summer 2016.

Filed under: Software Development

Classic Furby plus Alexa Equals… Furlexa

The content below is taken from the original ( Classic Furby plus Alexa Equals… Furlexa), to continue reading please visit the site. Remember to respect the Author & Copyright.

[Zach Levine] wrote in to share a project he just completed: a classic Furby packing a Raspberry Pi running Alexa, which he calls Furlexa.

The original Furby product wowed consumers in the ’90s. In addition to animatronic movements, it also packed simulated voice-learning technology that seemed to allow the Furby to learn to speak. It wasn’t like anything else on the market, and even got the toy banned from the NSA’s facilities in case it could spy on them. Elegantly, the robot uses only one motor to move all of its parts, using a variety of plastic gears, levers, and cams to control all of the robot’s body parts and to make it dance.

Over the past twenty years the Furby has earned the reputation as one of the most hackable toys ever — despite its mystery microcontroller, which was sealed in plastic to keep the manufacturer’s IP secret. [Zach] replaced the control board with a Pi Zero. He also replaced the crappy mic and piezo speaker that came with the toy with a Pimoroni Speaker pHAT and a better mic.

While classic Furbys have a reputation for hackability, the new ones aren’t immune: this Infiltrating Furby is based on a recent model of the toy.

 

Filed under: Toy Hacks

Announcing Alexa for Business: Using Amazon Alexa’s Voice Enabled Devices for Workplaces

The content below is taken from the original ( Announcing Alexa for Business: Using Amazon Alexa’s Voice Enabled Devices for Workplaces), to continue reading please visit the site. Remember to respect the Author & Copyright.

There are only a few things more integrated into my day-to-day life than Alexa. I use my Echo device and the enabled Alexa Skills for turning on lights in my home, checking video from my Echo Show to see who is ringing my doorbell, keeping track of my extensive to-do list on a weekly basis, playing music, and lots more. I even have my family members enabling Alexa skills on their Echo devices for all types of activities that they now cannot seem to live without. My mother, who is in a much older generation (please don’t tell her I said that), uses her Echo and the custom Alexa skill I built for her to store her baking recipes. She also enjoys exploring skills that have the latest health and epicurean information. It’s no wonder then, that when I go to work I feel like something is missing. For example, I would love to be able to ask Alexa to read my flash briefing when I get to the office.

For those of you that would love to have Alexa as your intelligent assistant at work, I have exciting news. I am delighted to announce Alexa for Business, a new service that enables businesses and organizations to bring Alexa into the workplace at scale. Alexa for Business not only brings Alexa into your workday to boost your productivity, but also provides tools and resources for organizations to set up and manage Alexa devices at scale, enable private skills, and enroll users.

Making Workplaces Smarter with Alexa for Business

Alexa for Business brings the Alexa you know and love into the workplace to help all types of workers to be more productive and organized on both personal and shared Echo devices. In the workplace, shared devices can be placed in common areas for anyone to use, and workers can use their personal devices to connect at work and at home.

End users can use shared devices or personal devices. Here’s what they can do from each.

Shared devices

  1. Join meetings in conference rooms: You can simply say “Alexa, start the meeting”. Alexa turns on the video conferencing equipment, dials into your conference call, and gets the meeting going.
  2. Help around the office: Access custom skills to help with directions around the office, finding an open conference room, reporting a building equipment problem, or ordering new supplies.

Personal devices

  1. Enable calling and messaging: Alexa helps you make phone calls hands-free and can also send messages on your behalf.
  2. Automatically dial into conference calls: Alexa can join any meeting with a conference call number via voice from home, work, or on the go.
  3. Intelligent assistant: Alexa can quickly check calendars, help schedule meetings, manage to-do lists, and set reminders.
  4. Find information: Alexa can help find information in popular business applications like Salesforce, Concur, or Splunk.

Here are some of the controls available to administrators:

  1. Provision & Manage Shared Alexa Devices: You can provision and manage shared devices around your workplace using the Alexa for Business console. For each device you can set a location, such as a conference room designation, and assign public and private skills for the device.
  2. Configure Conference Room Settings: Kick off your meetings with a simple “Alexa, start the meeting.” Alexa for Business allows you to configure your conference room settings so you can use Alexa to start your meetings and control your conference room equipment, or dial in directly from the Amazon Echo device in the room.
  3. Manage Users: You can invite users in your organization to enroll their personal Alexa account with your Alexa for Business account. Once your users have enrolled, you can enable your custom private skills for them to use on any of the devices in their personal Alexa account, at work or at home.
  4. Manage Skills: You can assign public skills and custom private skills your organization has created to your shared devices, and make private skills available to your enrolled users.  You can create skills groups, which you can then assign to specific shared devices.
  5. Build Private Skills & Use Alexa for Business APIs:  Dig into the Alexa Skills Kit and build your own skills.  Then you can make these available to the shared devices and enrolled users in your Alexa for Business account, all without having to publish them in the public Alexa Skills Store.  Alexa for Business offers additional APIs, which you can use to add context to your skills and automate administrative tasks.
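
To give a sense of the APIs mentioned in the last item, here is a rough sketch of automating room setup with the AWS SDK for Python; the boto3 alexaforbusiness client and the operation names shown are assumptions, so check the API guide for the exact shapes before relying on them.

# Hypothetical sketch: create a room and a skill group, then associate them,
# using the Alexa for Business API via boto3 (client and operation names assumed).
import boto3

a4b = boto3.client('alexaforbusiness')

# Create a skill group to hold conference room skills.
group = a4b.create_skill_group(
    SkillGroupName='ConferenceRoomSkills',
    Description='Skills for our meeting rooms')

# Create a room and attach the skill group to it.
room = a4b.create_room(
    RoomName='Seattle-5-201',
    Description='5th floor huddle room')

a4b.associate_skill_group_with_room(
    SkillGroupArn=group['SkillGroupArn'],
    RoomArn=room['RoomArn'])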

Let’s take a quick journey into Alexa for Business. I’ll first log into the AWS Console and go to the Alexa for Business service.

Once I log in to the service, I am presented with the Alexa for Business dashboard. As you can see, I have access to manage Rooms, Shared devices, Users, and Skills, as well as the ability to control conferencing, calendars, and user invitations.

First, I’ll start by setting up my Alexa devices. Alexa for Business provides a Device Setup Tool to set up multiple devices, connect them to your Wi-Fi network, and register them with your Alexa for Business account. This is quite different from the setup process for personal Alexa devices. With Alexa for Business, you can provision 25 devices at a time.

Once my devices are provisioned, I can create location profiles for the locations where I want to put these devices (such as in my conference rooms). We call these locations “Rooms” in our Alexa for Business console. I can go to the Room profiles menu and create a Room profile. A Room profile contains common settings for the Alexa device in your room, such as the wake word for the device, the address, time zone, unit of measurement, and whether I want to enable outbound calling.

The next step is to enable skills for the devices I set up. I can enable any skill from the Alexa Skills store, or use the private skills feature to enable skills I built myself and made available to my Alexa for Business account. To enable skills for my shared devices, I can go to the Skills menu option and enable skills. After I have enabled skills, I can add them to a skill group and assign the skill group to my rooms.

Something I really like about Alexa for Business is that I can use Alexa to dial into conference calls. To enable this, I go to the Conferencing menu option and select Add provider. At Amazon we use Amazon Chime, but you can choose from a list of different providers, or you can even add your own provider if you want to.

Once I’ve set this up, I can say “Alexa, join my meeting”; Alexa asks for my Amazon Chime meeting ID, after which my Echo device will automatically dial into my Amazon Chime meeting. Alexa for Business also provides an intelligent way to start any meeting quickly. We’ve all been in the situation where we walk into a meeting room and can’t find the meeting ID or conference call number. With Alexa for Business, I can link to my corporate calendar, so Alexa can figure out the meeting information for me, and automatically dial in – I don’t even need my meeting ID. Here’s how you do that:

Alexa can also control the video conferencing equipment in the room. To do this, all I need to do is select the skill for the equipment that I have, select the equipment provider, and enable it for my conference rooms. Now when I ask Alexa to join my meeting, Alexa will dial-in from the equipment in the room, and turn on the video conferencing system, without me needing to do anything else.

Let’s switch to enrolled users next.

I’ll start by setting up the User Invitation for my organization so that I can invite users to my Alexa for Business account. To allow a user to use Alexa for Business within an organization, you invite them to enroll their personal Alexa account with the service by sending a user invitation via email from the management console. If I choose, I can customize the user enrollment email to contain additional content. For example, I can add information about my organization’s Alexa skills that can be enabled after they’ve accepted the invitation and completed the enrollment process. My users must join in order to use the features of Alexa for Business, such as auto dialing into conference calls, linking their Microsoft Exchange calendars, or using private skills.

Now that I have customized my User Invitation, I will invite users to take advantage of Alexa for Business for my organization by going to the Users menu on the Dashboard and entering their email address.  This will send an email with a link that can be used to join my organization. Users will join using the Amazon account that their personal Alexa devices are registered to. Let’s invite Jeff Barr to join my Alexa for Business organization.

After Jeff has enrolled in my Alexa for Business account, he can discover the private skills I’ve enabled for enrolled users, and he can access his work skills and join conference calls from any of his personal devices, including the Echo in his home office.

Summary

We’ve only scratched the surface in our brief review of the Alexa for Business console and service features. You can learn more about Alexa for Business by viewing the Alexa for Business website, watching the Alexa for Business overview video, reading the admin and API guides in the AWS documentation, or by watching the Getting Started videos within the Alexa for Business console.

“Alexa, Say Goodbye and Sign off the Blog Post.”

Tara 

AWS Cloud9 – Cloud Developer Environments

The content below is taken from the original ( AWS Cloud9 – Cloud Developer Environments), to continue reading please visit the site. Remember to respect the Author & Copyright.

One of the first things you learn when you start programming is that, just like any craftsperson, your tools matter. Notepad.exe isn’t going to cut it. A powerful editor and testing pipeline supercharge your productivity. I still remember learning to use Vim for the first time and being able to zip around systems and complex programs. Do you remember how hard it was to set up all your compilers and dependencies on a new machine? How many cycles have you wasted matching versions, tinkering with configs, and then writing documentation to onboard a new developer to a project?

Today we’re launching AWS Cloud9, an Integrated Development Environment (IDE) for writing, running, and debugging code, all from your web browser. Cloud9 comes prepackaged with essential tools for many popular programming languages (JavaScript, Python, PHP, etc.) so you don’t have to tinker with installing various compilers and toolchains. Cloud9 also provides a seamless experience for working with serverless applications, allowing you to quickly switch between local and remote testing or debugging. Based on the popular open source Ace Editor and c9.io IDE (which we acquired last year), AWS Cloud9 is designed to make collaborative cloud development easy with extremely powerful pair programming features. There are more features than I could ever cover in this post, but to give a quick breakdown I’ll break the IDE into three components: the editor, the AWS integrations, and the collaboration features.

Editing


The Ace Editor at the core of Cloud9 is what lets you write code quickly, easily, and beautifully. It follows a UNIX philosophy of doing one thing and doing it well: writing code.

It has all the typical IDE features you would expect: live syntax checking, auto-indent, auto-completion, code folding, split panes, version control integration, multiple cursors and selections, and it also has a few unique features I want to highlight. First of all, it’s fast, even for large (100000+ line) files. There’s no lag or other issues while typing. It has over two dozen themes built-in (solarized!) and you can bring all of your favorite themes from Sublime Text or TextMate as well. It has built-in support for 40+ language modes and customizable run configurations for your projects. Most importantly though, it has Vim mode (or emacs if your fingers work that way). It also has a keybinding editor that allows you to bend the editor to your will.

The editor supports powerful keyboard navigation and commands (similar to Sublime Text or vim plugins like ctrlp). On a Mac, with ⌘+P you can open any file in your environment with fuzzy search. With ⌘+. you can open up the command pane, which allows you to invoke any of the editor commands by typing the name. It also helpfully displays the keybindings for a command in the pane; for instance, to open a terminal you can press ⌥+T. Oh, did I mention there’s a terminal? It ships with the AWS CLI preconfigured for access to your resources.

The environment also comes with pre-installed debugging tools for many popular languages – but you’re not limited to what’s already installed. It’s easy to add in new programs and define new run configurations.

The editor is just one, admittedly important, component in an IDE though. I want to show you some other compelling features.

AWS Integrations

The AWS Cloud9 IDE is the first IDE I’ve used that is truly “cloud native”. The service is provided at no additional charge, and you are only charged for the underlying compute and storage resources. When you create an environment, you’re prompted for either an instance type and an auto-hibernate time, or SSH access to a machine of your choice.

If you’re running in AWS the auto-hibernate feature will stop your instance shortly after you stop using your IDE. This can be a huge cost savings over running a more permanent developer desktop. You can also launch it within a VPC to give it secure access to your development resources. If you want to run Cloud9 outside of AWS, or on an existing instance, you can provide SSH access to the service which it will use to create an environment on the external machine. Your environment is provisioned with automatic and secure access to your AWS account so you don’t have to worry about copying credentials around. Let me say that again: you can run this anywhere.

Serverless Development with AWS Cloud9

I spend a lot of time on Twitch developing serverless applications. I have hundreds of lambda functions and APIs deployed. Cloud9 makes working with every single one of these functions delightful. Let me show you how it works.


If you look in the top right side of the editor you’ll see an AWS Resources tab. Opening this you can see all of the lambda functions in your region (you can see functions in other regions by adjusting your region preferences in the AWS preference pane).

You can import these remote functions to your local workspace just by double-clicking them. This allows you to edit, test, and debug your serverless applications all locally. You can create new applications and functions easily as well. If you click the Lambda icon in the top right of the pane you’ll be prompted to create a new lambda function and Cloud9 will automatically create a Serverless Application Model template for you as well. The IDE ships with support for the popular SAM local tool pre-installed. This is what I use in most of my local testing and serverless development. Since you have a terminal, it’s easy to install additional tools and use other serverless frameworks.
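
For reference, the functions you import and edit this way are ordinary Lambda handlers; the hello-world below is only an illustrative stand-in for the kind of code you would test locally with SAM Local before deploying.

# handler.py - a minimal AWS Lambda function of the sort you might edit in Cloud9
# and test locally with SAM Local before deploying.
import json

def lambda_handler(event, context):
    # Echo back a name from the event, defaulting to "world".
    name = event.get('name', 'world') if isinstance(event, dict) else 'world'
    return {
        'statusCode': 200,
        'body': json.dumps({'message': 'Hello, {}!'.format(name)})
    }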

 

Launching an Environment from AWS CodeStar

With AWS CodeStar you can easily provision an end-to-end continuous delivery toolchain for development on AWS. CodeStar provides a unified experience for building, testing, deploying, and managing applications using the AWS CodeCommit, CodeBuild, CodePipeline, and CodeDeploy suite of services. Now, with a few simple clicks you can provision a Cloud9 environment to develop your application. Your environment will be pre-configured with the code for your CodeStar application already checked out and git credentials already configured.

You can easily share this environment with your coworkers which leads me to another extremely useful set of features.

Collaboration

One of the many things that sets AWS Cloud9 apart from other editors is its rich collaboration tools. You can invite an IAM user to your environment with a few clicks.

You can see what files they’re working on, where their cursors are, and even share a terminal. The chat feature is useful as well.

Things to Know

  • There are no additional charges for this service beyond the underlying compute and storage.
  • c9.io continues to run for existing users. You can continue to use all the features of c9.io and add new team members if you have a team account. In the future, we will provide tools for easy migration of your c9.io workspaces to AWS Cloud9.
  • AWS Cloud9 is available in the US West (Oregon), US East (Ohio), US East (N. Virginia), EU (Ireland), and Asia Pacific (Singapore) regions.

I can’t wait to see what you build with AWS Cloud9!

Randall

Say Farewell to Putty as Microsoft adds an OpenSSH Client to Windows 10

The content below is taken from the original ( Say Farewell to Putty as Microsoft adds an OpenSSH Client to Windows 10), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you need a quick OpenSSH client or server for Windows 10, there is a beta client hidden and available for installation

The post Say Farewell to Putty as Microsoft adds an OpenSSH Client to Windows 10 appeared first on ServeTheHome.

Magnesium batteries could be safer and more efficient than lithium

The content below is taken from the original ( Magnesium batteries could be safer and more efficient than lithium), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s still early days for the promise of safer, energy-dense solid-state rechargeable batteries. However, a team of scientists at the Joint Center for Energy Storage Research have just discovered a fast magnesium-ion solid-state conductor that will g…

Store files ‘in’ the internet

pingfs - "True cloud storage"
	by Erik Ekman <[email protected]>

pingfs is a filesystem where the data is stored only in the Internet itself,
as ICMP Echo packets (pings) travelling from you to remote servers and
back again.

https://github.com/yarrick/pingfs

Eight best smart turbo trainers for 2017/2018

The content below is taken from the original ( Eight best smart turbo trainers for 2017/2018), to continue reading please visit the site. Remember to respect the Author & Copyright.

Your definitive guide to the smart turbo trainer: what they are, what they can do, and where to find the best ones.

Keeping Time With Amazon Time Sync Service

The content below is taken from the original ( Keeping Time With Amazon Time Sync Service), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today we’re launching Amazon Time Sync Service, a time synchronization service delivered over Network Time Protocol (NTP) which uses a fleet of redundant satellite-connected and atomic clocks in each region to deliver a highly accurate reference clock. This service is provided at no additional charge and is immediately available in all public AWS regions to all instances running in a VPC.

You can access the service via the link local 169.254.169.123 IP address. This means you don’t need to configure external internet access and the service can be securely accessed from within your private subnets.
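
Because the endpoint speaks plain NTP on that link-local address, you can reach it with nothing more than a UDP socket. The snippet below (run from an instance in a VPC) is only a reachability check, not a substitute for running chrony or ntpd.

# Quick reachability check against the Amazon Time Sync Service endpoint.
# Use chrony or ntpd for real timekeeping; this is only an illustration.
import socket
import struct
import time

NTP_SERVER = '169.254.169.123'
NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

# Minimal client request: LI=0, VN=3, Mode=3 (client).
packet = b'\x1b' + 47 * b'\0'

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2)
sock.sendto(packet, (NTP_SERVER, 123))
response, _ = sock.recvfrom(48)

# The transmit timestamp's seconds field sits at bytes 40-43 of the reply.
ntp_seconds = struct.unpack('!I', response[40:44])[0]
print('Server time:', time.ctime(ntp_seconds - NTP_EPOCH_OFFSET))
print('Local time :', time.ctime())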

Setup

Chrony is a different NTP implementation from ntpd, and it’s able to synchronize the system clock faster and with better accuracy. I’d recommend using Chrony unless you have a legacy reason to use ntpd.

Installing and configuring chrony on Amazon Linux is as simple as:


sudo yum erase 'ntp*'
sudo yum -y install chrony
sudo service chronyd start

Alternatively, just modify your existing NTP config by adding the line server 169.254.169.123 prefer iburst.

On Windows you can run the following commands in PowerShell or a command prompt:


net stop w32time
w32tm /config /syncfromflags:manual /manualpeerlist:"169.254.169.123"
w32tm /config /reliable:yes
net start w32time

Leap Seconds

Time is hard. Science, and society, measure time with respect to the International Celestial Reference Frame (ICRF), which is computed using long baseline interferometry of distant quasars, GPS satellite orbits, and laser ranging of the moon (cool!). Irregularities in Earth’s rate of rotation cause UTC to drift with respect to the ICRF. To address this clock drift, the International Earth Rotation and Reference Systems Service (IERS) occasionally introduces an extra second into UTC to keep it within 0.9 seconds of real time.

Leap seconds are known to cause application errors, and this can be a concern for many savvy developers and systems administrators. The 169.254.169.123 clock smooths out leap seconds over a period of time (commonly called leap smearing), which makes it easy for your applications to deal with leap seconds.
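
To make the idea concrete, here is a toy linear smear: rather than inserting the whole extra second in one jump, the correction is spread evenly across a smear window. The 24-hour window below is an assumption for illustration only; see the documentation for the exact behavior of the 169.254.169.123 clock.

# Toy illustration of linear leap smearing: the extra second is absorbed
# gradually over a smear window instead of appearing as a single jump.
SMEAR_WINDOW = 24 * 3600.0  # assumed 24-hour window, for illustration only

def smeared_fraction(seconds_into_window):
    """Fraction of the leap second applied at a given point in the window."""
    if seconds_into_window <= 0:
        return 0.0
    if seconds_into_window >= SMEAR_WINDOW:
        return 1.0
    return seconds_into_window / SMEAR_WINDOW

# Halfway through the window, clocks have absorbed half of the leap second.
print(smeared_fraction(12 * 3600))  # 0.5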

This timely update should provide immediate benefits to anyone previously relying on an external time synchronization service.

Randall

Amazon is putting Alexa in the office

The content below is taken from the original ( Amazon is putting Alexa in the office), to continue reading please visit the site. Remember to respect the Author & Copyright.

The interface is evolving. What has long been dominated by screens of all shapes and sizes is now being encroached upon by the voice. And while many companies are building voice interfaces — Apple with Siri, Google with Assistant, and Microsoft with Cortana — none are quite as dominant as Amazon has been with Alexa.
At the AWS re:Invent conference, Amazon will announce Alexa for…

Alexa and Echo will land in Australia and NZ in early 2018

The content below is taken from the original ( Alexa and Echo will land in Australia and NZ in early 2018), to continue reading please visit the site. Remember to respect the Author & Copyright.

Amazon just dropped its umpteenth Alexa skill, this time for Destiny 2 fans. Already in the tens of thousands, the digital assistant's tricks span shopping, news, smart home controls, pop trivia, kiddie pastimes, and now video games. But while a grow…

Uploading to Azure Web Apps Using FTP

The content below is taken from the original ( Uploading to Azure Web Apps Using FTP), to continue reading please visit the site. Remember to respect the Author & Copyright.

In this post, I’m going to show you how you can upload your web content to an Azure web app using FTP.

 

 

Upload My Code!

What good is a web hosting plan if you cannot put your website code on it? Azure offers a few ways to get code into an Azure web app or app service, from automated solutions using the likes of Visual Studio Team Services (VSTS) and GitHub to a more basic option such as using FTP.

In this post, I will show you how to use an FTP client to upload your website into a web app. My web app is called preprod. It’s actually a pre-production deployment slot for a web app called petri, which has its own FTP configuration.

Configure FTP Account

Each web app and deployment slot has its own FTP username and address. You must enter a new password to use this FTP account, which is known in Azure as a deployment credential.

 


 

To set up the FTP user account, open the web app and browse to Deployment Credentials under the Deployment settings. Here you can specify the user account name and the password. Please note that the password:

  • Must be between 8 and 60 characters long; longer is better.
  • Must have at least two of the following: uppercase letters, lowercase letters, and numbers.

Configuring the Azure Web App FTP Account in Deployment Credentials [Image Credit: Aidan Finn]

Note that the FTP/Deployment Username is not the complete username that you will require in your FTP client. You will retrieve that in the next step.

Retrieve FTP Details

You will need a server name or address to connect to with your FTP client; this can be found in the Overview of your web app or deployment slot. Note that you can find the FTP and FTPS addresses here.

You will also get the complete FTP username. In the previous step, I set the Deployment Username to petriadmin2. However, the actual username that I need to enter in my FTP client is shown below: petri__preprod\petriadmin2. It is named after the web app and deployment slot.

The FTP Address of the Azure Web App [Image Credit: Aidan Finn]

FTP Client

I have installed an FTP client on my PC and created a new connection. I entered the following information into the New Site dialog:

  • The FTP Hostname from the web app’s overview into the Host/IP/URL box.
  • The FTP/Deployment Username from the web app’s overview into the Username box.
  • The password that I set in the web app’s Deployment Credentials into the Password box.

Connecting to the Azure Web App Using FTP [Image Credit: Aidan Finn]

When I connect to the website, I can browse the web host’s file structure. You can see the familiar wwwroot folder from IIS in the below screenshot; this is where I will upload my web content.

Browsing the Web App Folder Structure Using FTP [Image Credit: Aidan Finn]

I can now use the FTP tool to upload and download content to the web app’s folder structure. I’ve already extracted the website content on my PC and it’s an easy upload to the wwwroot folder from there.
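
If you would rather script the upload than use a GUI client, the same credentials work from code. Here is a rough sketch using Python’s ftplib over FTPS; the host name, password, and file name are placeholders you would swap for the values shown in the portal, and it assumes the usual site/wwwroot layout.

# Upload a file to the Azure web app over FTPS using the deployment credentials.
# The host, password, and file names below are placeholders.
from ftplib import FTP_TLS

HOST = 'ftp.example.azurewebsites.windows.net'  # FTP hostname from the web app's Overview
USER = 'petri__preprod\\petriadmin2'            # app\deployment username from the portal
PASSWORD = 'your-deployment-password'

ftps = FTP_TLS(HOST)
ftps.login(USER, PASSWORD)
ftps.prot_p()               # protect the data channel
ftps.cwd('/site/wwwroot')   # web content lives under wwwroot

with open('index.php', 'rb') as f:
    ftps.storbinary('STOR index.php', f)

ftps.quit()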

 


Testing the Site

The default document of my site is index.php. I should verify that the Application Settings of the web app or deployment slot have the following:

  • index.php is one of the default documents.
  • It is also a higher priority than the default document of the web app (hostingstart.html).

Now, I can browse to the URL of the web app or deployment slot and my web content should load.

The post Uploading to Azure Web Apps Using FTP appeared first on Petri.

Introducing an easy way to deploy containers on Google Compute Engine virtual machines

The content below is taken from the original ( Introducing an easy way to deploy containers on Google Compute Engine virtual machines), to continue reading please visit the site. Remember to respect the Author & Copyright.

Containers are a popular way to deploy software thanks to their lightweight size and resource requirements, dependency isolation and portability. Today, we’re introducing an easy way to deploy and run containers on Google Compute Engine virtual machines and managed instance groups. This feature, which is currently in beta, allows you to take advantage of container deployment consistency while staying in your familiar IaaS environment.

Now you can easily deploy containers wherever you may need them on Google Cloud: Google Kubernetes Engine for multi-workload, microservice friendly container orchestration, Google App Engine flexible environment, a fully managed application platform, and now Compute Engine for VM-level container deployment.

Running containers on Compute Engine instances is handy in a number of scenarios: when you need to optimize your CI/CD pipeline for applications running on VMs, fine-tune the VM shape and infrastructure configuration for a specialized workload, integrate a containerized application into your existing IaaS infrastructure, or launch a one-off instance of an application.

To run your container on a VM instance, or a managed instance group, simply provide an image name and specify your container runtime options when creating a VM or an instance template. Compute Engine takes care of the rest including supplying an up-to-date Container-Optimized OS image with Docker and starting the container upon VM boot with your runtime options.

You can now easily use containers without having to write startup scripts or learn about container orchestration tools, and can migrate to full container orchestration with Kubernetes Engine when you’re ready. Better yet, standard Compute Engine pricing applies: VM instances running containers cost the same as regular VMs.

How to deploy a container to a VM

To see the new container deployment method in action, let’s deploy an NGINX HTTP server to a virtual machine. To do this, you only need to configure three settings when creating a new instance:

  • Check Deploy a container image to this VM instance.
  • Provide Container image name. 
  • Check Allow HTTP traffic so that the VM instance can receive HTTP requests on port 80. 

Here’s how the flow looks in Google Cloud Console:

Run a container from the gcloud command line

You can run a container on a VM instance with just one gcloud command:

gcloud beta compute instances create-with-container nginx-vm \
  --container-image http://bit.ly/2neALil \
  --tags http-server

Then, create a firewall rule to allow HTTP traffic to the VM instance so that you can see the NGINX welcome page:

gcloud compute firewall-rules create allow-http \
  --allow=tcp:80 --target-tags=http-server

To update such a container is just as easy:

gcloud beta compute instances update-container nginx-vm \
  --container-image http://bit.ly/2zDEWpu

Run a container on a managed instance group

With managed instance groups, you can take advantage of VM-level features like autoscaling, automatic recreation of unhealthy virtual machines, rolling updates, multi-zone deployments and load balancing. Running containers on managed instance groups is just as easy as on individual VMs and takes only two steps: (1) create an instance template and (2) create a group.

Let’s deploy the same NGINX server to a managed instance group of three virtual machines.

Step 1: Create an instance template with a container.

gcloud beta compute instance-templates create-with-container nginx-it \
  --container-image http://bit.ly/2neALil \
  --tags http-server

The http-server tag allows HTTP connections to port 80 of the VMs, created from the instance template. Make sure to keep the firewall rule from the previous example.

Step 2: Create a managed instance group.

gcloud compute instance-groups managed create nginx-mig \
  --template nginx-it \
  --size 3

The group will have three VM instances, each running the NGINX container.

Get started!

Interested in deploying containers on Compute Engine VM instances or managed instance groups? Take a look at the detailed step-by-step instructions and learn how to configure a range of container runtime options including environment variables, entrypoint command with parameters and volume mounts. Then, help us help you make using containers on Compute Engine even easier! Send your feedback, questions or requests to [email protected].

Sign up for Google Cloud today and get $300 in credits to try out running containers directly on Compute Engine instances.

Amazon EC2 Bare Metal Instances with Direct Access to Hardware

The content below is taken from the original ( Amazon EC2 Bare Metal Instances with Direct Access to Hardware), to continue reading please visit the site. Remember to respect the Author & Copyright.

When customers come to us with new and unique requirements for AWS, we listen closely, ask lots of questions, and do our best to understand and address their needs. When we do this, we make the resulting service or feature generally available; we do not build one-offs or “snowflakes” for individual customers. That model is messy and hard to scale and is not the way we work.

Instead, every AWS customer has access to whatever it is that we build, and everyone benefits. VMware Cloud on AWS is a good example of this strategy in action. They told us that they wanted to run their virtualization stack directly on the hardware, within the AWS Cloud, giving their customers access to the elasticity, security, and reliability (not to mention the broad array of services) that AWS offers.

We knew that other customers also had interesting use cases for bare metal hardware and didn’t want to take the performance hit of nested virtualization. They wanted access to the physical resources for applications that take advantage of low-level hardware features such as performance counters and Intel® VT that are not always available or fully supported in virtualized environments, and also for applications intended to run directly on the hardware or licensed and supported for use in non-virtualized environments.

Our multi-year effort to move networking, storage, and other EC2 features out of our virtualization platform and into dedicated hardware was already well underway and provided the perfect foundation for a possible solution. This work, as I described in Now Available – Compute-Intensive C5 Instances for Amazon EC2, includes a set of dedicated hardware accelerators.

Now that we have provided VMware with the bare metal access that they requested, we are doing the same for all AWS customers. I’m really looking forward to seeing what you can do with them!

New Bare Metal Instances
Today we are launching a public preview of the i3.metal instance, the first in a series of EC2 instances that offer the best of both worlds, allowing the operating system to run directly on the underlying hardware while still providing access to all of the benefits of the cloud. The instance gives you direct access to the processor and other hardware, and has the following specifications:

  • Processing – Two Intel Xeon E5-2686 v4 processors running at 2.3 GHz, with a total of 36 hyperthreaded cores (72 logical processors).
  • Memory – 512 GiB.
  • Storage – 15.2 terabytes of local, SSD-based NVMe storage.
  • Network – 25 Gbps of ENA-based enhanced networking.

Bare Metal instances are full-fledged members of the EC2 family and can take advantage of Elastic Load Balancing, Auto Scaling, Amazon CloudWatch, Auto Recovery, and so forth. They can also access the full suite of AWS database, IoT, mobile, analytics, artificial intelligence, and security services.

Previewing Now
We are launching a public preview of the Bare Metal instances today; please sign up now if you want to try them out.

You can now bring your specialized applications or your own stack of virtualized components to AWS and run them on Bare Metal instances. If you are using or thinking about using containers, these instances make a great host for CoreOS.

An AMI that works on one of the new C5 instances should also work on an I3 Bare Metal Instance. It must have the ENA and NVMe drivers, and must be tagged for ENA.
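
If you want to check whether an existing AMI meets the ENA requirement before trying it on one of these instances, the image metadata exposes an ENA flag. A small sketch (the AMI ID is a placeholder):

# Check whether an AMI advertises ENA support (required for i3.metal and C5).
import boto3

ec2 = boto3.client('ec2')

resp = ec2.describe_images(ImageIds=['ami-0123456789abcdef0'])  # placeholder AMI ID
for image in resp['Images']:
    print(image['ImageId'], 'ENA support:', image.get('EnaSupport', False))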

Jeff;

 

H1 Instances – Fast, Dense Storage for Big Data Applications

The content below is taken from the original ( H1 Instances – Fast, Dense Storage for Big Data Applications), to continue reading please visit the site. Remember to respect the Author & Copyright.

The scale of AWS and the diversity of our customer base gives us the opportunity to create EC2 instance types that are purpose-built for many different types of workloads. For example, a number of popular big data use cases depend on high-speed, sequential access to multiple terabytes of data. Our customers want to build and run very large MapReduce clusters, host distributed file systems, use Apache Kafka to process voluminous log files, and so forth.

New H1 Instances
The new H1 instances are designed specifically for this use case. In comparison to the existing D2 (dense storage) instances, the H1 instances provide more vCPUs and more memory per terabyte of local magnetic storage, along with increased network bandwidth, giving you the power to address more complex challenges with a nicely balanced mix of resources.

The instances are based on Intel Xeon E5-2686 v4 processors running at a base clock frequency of 2.3 GHz and come in four instance sizes (all VPC-only and HVM-only):

Instance Name   vCPUs   RAM       Local Storage   Network Bandwidth
h1.2xlarge      8       32 GiB    2 TB            Up to 10 Gbps
h1.4xlarge      16      64 GiB    4 TB            Up to 10 Gbps
h1.8xlarge      32      128 GiB   8 TB            10 Gbps
h1.16xlarge     64      256 GiB   16 TB           25 Gbps

The two largest sizes support Intel Turbo and CPU power management, with all-core Turbo at 2.7 GHz and single-core Turbo at 3.0 GHz.

Local storage is optimized to deliver high throughput for sequential I/O; you can expect to transfer up to 1.15 gigabytes per second if you use a 2 megabyte block size. The storage is encrypted at rest using 256-bit XTS-AES and one-time keys.

Moving large amounts of data on and off of these instances is facilitated by the use of Enhanced Networking, giving you up to 25 Gbps of network bandwidth within Placement Groups.

Launch One Today
H1 instances are available today in the US East (Northern Virginia), US West (Oregon), US East (Ohio), and EU (Ireland) Regions. You can launch them in On-Demand or Spot form. Dedicated Hosts, Dedicated Instances, and Reserved Instances (both 1-year and 3-year) are also available.

Jeff;

Folders: a powerful tool to manage cloud resources

The content below is taken from the original ( Folders: a powerful tool to manage cloud resources), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today we’re excited to announce general availability of folders in Cloud Resource Manager, a powerful tool to organize and administer cloud resources. This feature gives you the flexibility to map resources to your organizational structure and enable more granular access control and configuration for those resources.

Folders can be used to represent different departments, teams, applications or environments in your organization. With folders, you can give teams and departments the agility to delegate administrative rights and enable them to run independently.

Folders help you scale by enabling you to organize and manage your resources hierarchically. By enforcing Identity and Access Management (IAM) policies on folders, admins can delegate control over parts of the resource hierarchy to the appropriate teams. Using organization-level IAM roles in conjunction with folders, you can maintain full visibility and control over the entire organization without needing to be directly involved in every operation.

“Our engineering team manages several hundred projects within GCP, and the resource hierarchy makes it easy to handle the growing complexity of our environment. We classify projects based on criteria such as department, geography, product, and data sensitivity to ensure the right people have access to the right information. With folders, we have the flexibility we need to organize our resources and manage access control policies based on those criteria.” 

Alex Olivier, Technical Product Manager, Qubit

Folders establish trust boundaries between resources. By assigning Cloud IAM roles to folders, you can help isolate and protect production critical workloads while still allowing your teams to create and work freely. For example, you could grant a Project Creator role to the entire team on the Test folder, but only assign the Log Viewer role on the Production folder, so that users can do necessary debugging without the risk of compromising critical components.

The combination of organization policy and folders lets you define organization-level configurations and create exceptions for subtrees of the resource hierarchy. For example, you can constrain access to an approved set of APIs across the organization for compliance reasons, but create an exception for a Test folder, where a broader set of APIs is allowed for testing purposes.

Folders are easy to use and, as any other resource in GCP, they can be managed via API, gcloud and the Cloud Console UI. Watch this demo to learn how to incorporate folders into your GCP hierarchy.
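
As a sketch of the API route, the snippet below uses the Google API discovery client against the Cloud Resource Manager v2 folders collection; the service version and method shape here are assumptions, so treat it as a starting point and check the folders documentation for the exact call.

# Hypothetical sketch: create a folder under an organization with the
# Cloud Resource Manager v2 API (service version and method shape assumed).
from googleapiclient import discovery

crm = discovery.build('cloudresourcemanager', 'v2')

operation = crm.folders().create(
    parent='organizations/123456789012',      # placeholder organization ID
    body={'displayName': 'Engineering'}).execute()

# Folder creation is asynchronous; the returned operation can be polled.
print(operation)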


To learn more about folders, read the beta launch blog post or the documentation.

!OBrowse reviewed

The content below is taken from the original ( !OBrowse reviewed), to continue reading please visit the site. Remember to respect the Author & Copyright.

!OBrowse was originally released as a ‘freebie/thank-you’ to anyone who had put money into RISC OS developments. It has been updated for the London Show and was also available for sale at 40 pounds (providing a simple way for people who wanted to contribute smaller amounts to the project).

!OBrowse is a front-end for the !Otter port for RISC OS. As R-Comp were very keen to stress, !Otter is not their project and is a free release which no-one has to pay for. What they have written is some code which allows you to run the software in a much more RISC OS friendly way. They are offering their front end as a way to finance their plans (which will also be available as free software to anyone). They have already made an impact by funding one of the ROOL bounties to bring RISC OS networking support up to date.

!OBrowse is not tied to any release of !Otter and does not ‘add’ any additional functionality. What it does do is make !Otter into a much more compliant and better-behaved RISC OS application. It has a proper iconbar entry and can be run to open HTML files or a URL. It can take over all the protocols which use a browser, supports the global clipboard, drag and drop, etc. It performs this role very well, and you get a polished RISC OS application which works as you would expect and plays nicely with the rest of the system.

If you are an investor or want to support RISCOS Developments, it is a very nice application to have installed and a no-brainer. If you just want to experiment with accessing the internet under RISC OS, !Otter will run perfectly well on your machine without it.


NetApp’s back, baby, flaunting new tech and Azure cloud swagger

The content below is taken from the original ( NetApp’s back, baby, flaunting new tech and Azure cloud swagger), to continue reading please visit the site. Remember to respect the Author & Copyright.

By George (Kurian), he’s done it

Analysis There’s a new energy at NetApp. The Microsoft Azure NFS deal was a great confidence booster, and the two recent acquisitions of Greenqloud and Plexistor provide stepping stones to a high-performance, on-premises storage future and a stronger hybrid cloud play.…

Hacking the IKEA Trådfri Light Bulb

The content below is taken from the original ( Hacking the IKEA Trådfri Light Bulb), to continue reading please visit the site. Remember to respect the Author & Copyright.

[BasilFX] wanted to shoehorn custom firmware onto his IKEA Trådfri light bulb. The product consists of a GU10-size light bulb with an LED driver as well as IKEA’s custom ZigBee module controlling it all. A diffuser, enclosure shell, and Edison-screw base give the whole thing the same form factor as a standard A-series bulb. The Trådfri module, which ties together IKEA’s home automation products, consists of an ARM Cortex M4 MCU with integrated 2.4 GHz radio and 256 KB of flash — not bad for 7 euros!

Coincidentally, [BasilFX] had just contributed EFM32 support to RIOT-OS (“the friendly OS for IoT”) so he was already halfway there. He used a JTAG/SWD-compatible debugger to flash the chip on the light bulb while the chip was still attached.

[BasilFX] admits the whole project is a proof of concept with no real use yet, though he has turned his eye toward getting the radio to work, with a goal of creating a network of light bulbs. You can find more info on his code repository.

We ran a post on Trådfri hacking earlier this year, as well as one on the reverse-engineering process used to suss out the bulb’s secrets.

Filed under: home hacks

DNS resolver 9.9.9.9 will check requests against IBM threat database

The content below is taken from the original ( DNS resolver 9.9.9.9 will check requests against IBM threat database), to continue reading please visit the site. Remember to respect the Author & Copyright.

Group Co-founded by City of London Police promises ‘no snooping on your requests’

The Global Cyber Alliance has given the world a new free Domain Name Service resolver, and advanced it as offering unusually strong security and privacy features.…

Roomba gets IFTTT functionality

The content below is taken from the original ( Roomba gets IFTTT functionality), to continue reading please visit the site. Remember to respect the Author & Copyright.

iRobot’s been talking a lot about its plans to make the Roomba an essential part of the connected home. The process has been a bit slow going — the company added WiFi connectivity in 2015 and Alexa functionality this year — but it’s getting there, slowly but surely. Today, the world’s best selling robotic vacuum takes another important step with the addition of…

Get notified when Azure service incidents impact your resources

The content below is taken from the original ( Get notified when Azure service incidents impact your resources), to continue reading please visit the site. Remember to respect the Author & Copyright.

When an Azure service incident affects you, we know that it is critical that you are equipped with all the information necessary to mitigate any potential impact. The goal for the Azure Service Health preview is to provide timely and personalized information when needed, but how can you be sure that you are made aware of these issues?

Today we are happy to announce a set of new features for creating and managing Service Health alerts. Starting today, you can:

  • Easily create and manage alerts for service incidents, planned maintenance, and health advisories.
  • Integrate your existing incident management system like ServiceNow®, PagerDuty, or OpsGenie with Service Health alerts via webhook.

So, let’s walk through these experiences and show you how it all works!

Creating alerts during a service incident

Let’s say you visit the Azure Portal, and you notice that your personalized Azure Service Health map is showing some issues with your services. You can gain access to the specific details of the event by clicking on the map, which takes you to your personalized health dashboard. Using this information, you are able to warn your engineering team and customers about the impact of the service incident.

 

Azure Service Health map

Notification

If you have not pinned a map to your dashboard yet, check out these simple steps.

In this instance, you noticed the health status of your services passively. However, the question you really want answered is, “How can I get notified the next time an event like this occurs?” Using a single click, you can create a new alert based on your existing filters.

Create service health alert

Click the “Create service health alert” button, and a new alert creation blade will appear, prepopulated with the filter settings you selected before. Name the alert, and quickly ensure that the other settings are as you expect. Finally, create a new action group to notify when this alert fires, or use an existing group set up in the past.

Add activity log alert

Once you click “OK”, you will be brought back to the health dashboard with a confirmation that your new alert was successfully created!

Create and manage existing Service Health alerts

In the Health Alerts section, you can find all your new and existing Service Health alerts. If you click on an alert, you will see that it contains details about the alert criteria, notification settings, and even a historical log of when this alert has fired in the past. If you want to make edits to your new or existing Service Health alert, you can select the more options button (“…”) and immediately get access to manage your alert.

Create and manage existing service health alerts

During this process, you might think of other alerts you want to set up, so we make it easy for you to create new alerts by clicking the “Create New Alert” button, which gives you a blank canvas to set up your new notifications.

Configure health notifications for existing incident management systems via webhook

Some of you may already have an existing incident management system like ServiceNow, PagerDuty, or OpsGenie which contains all of your notification groups and incident management systems. We have worked with engineers from these companies to bring direct support for our Service Health webhook notifications, making the end-to-end integration simple for you. Even if you use another incident management solution, we have written details about the Service Health webhook payload, and suggestions for how you might set up an integration on your own. For complete documentation on all of these options, you can review our instructions.
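
If you are rolling your own integration instead of using one of those providers, the webhook is simply an HTTPS POST with a JSON body. Here is a minimal receiver sketch using only the Python standard library; it prints whatever payload arrives, and you should consult the payload documentation linked above for the exact field names.

# Minimal receiver for Service Health alert webhook notifications.
# It simply logs the JSON payload; a real handler would route it into
# your incident management tooling.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthAlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get('Content-Length', 0))
        payload = json.loads(self.rfile.read(length) or b'{}')
        print(json.dumps(payload, indent=2))
        self.send_response(200)
        self.end_headers()

if __name__ == '__main__':
    HTTPServer(('0.0.0.0', 8080), HealthAlertHandler).serve_forever()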

Each of the different incident management solutions will give you a unique webhook address that you can add to the action group for your Service Health alerts:

Webhook

Once the alert fires, your respective incident management system will automatically ingest and parse the data to make it simple for you to understand!

ServiceNow

Special thanks to the following people who helped us make this so simple for you all:

  • David Shackelford and David Cooper from PagerDuty
  • Çağla Arıkan and Berkay Mollamustafaoglu from OpsGenie
  • Manisha Arora and Sheeraz Memon from ServiceNow

Closing

I hope you can see how our updates to the Azure Service Health preview bring you that much closer to the action when an Azure service incident affects you. We are excited to continually bring you better experiences and would love any and all feedback you can provide. Reach out to me or leave feedback right in the portal. We look forward to seeing what you all create!

 

– Shawn Tabrizi (@shawntabrizi)

Launching preview of Azure Migrate

The content below is taken from the original ( Launching preview of Azure Migrate), to continue reading please visit the site. Remember to respect the Author & Copyright.

At Microsoft Ignite 2017, we announced Azure Migrate – a new service that provides guidance, insights, and mechanisms to assist you in migrating to Azure. We made the service available in limited preview, so you could request access, try out, and provide feedback. We are humbled by the response received and thankful for the time you took to provide feedback.

Today, we are excited to launch the preview of Azure Migrate. The service is now broadly available and there is no need to request access.

Azure Migrate enables agentless discovery of VMware-virtualized Windows and Linux virtual machines (VMs). It also supports agent-based discovery. This enables dependency visualization, for a single VM or a group of VMs, to easily identify multi-tier applications.

Application-centric discovery is a good start but not enough to make an informed decision. So, Azure Migrate enables quick assessments that help answer three questions:

  • Readiness: Is a VM suitable for running in Azure?
  • Rightsizing: What is the right Azure VM size based on utilization history of CPU, memory, disk (throughput and IOPS), and network?
  • Cost: How much is the recurring Azure cost considering discounts like Azure Hybrid Benefit?

The assessment doesn’t stop there. It also suggests workload-specific migration services. For example, Azure Site Recovery (ASR) for servers and Azure Database Migration Service (DMS) for databases. ASR enables application-aware server migration with minimal-downtime and no-impact migration testing. DMS provides a simple, self-guided solution for moving on-premises SQL databases to Azure.

Once migrated, you want to ensure that your VMs stay secure and well-managed. For this, you can use various other Azure offerings like Azure Security Center, Azure Cost Management, Azure Backup, etc.

Azure Migrate is offered at no additional charge, supported for production deployments, and available in the West Central US region. It is worthwhile to note that availability of Azure Migrate in a particular region does not affect your ability to plan migrations for other target regions. For example, even if a migration project is created in West Central US, the discovered VMs can be assessed for West US 2 or UK West or Japan East.

You can get started by creating a migration project in Azure portal:

Azure Migrate (preview)

You can also:

  • Get and stay informed by referring to the documentation.
  • Seek help by posting a question on the forum or contacting Microsoft Support.
  • Provide feedback by posting (or voting for) an idea on user voice.

 

Hope you will find Azure Migrate useful in your journey to Azure!

The Best PC Games of the 1990s

The content below is taken from the original ( The Best PC Games of the 1990s), to continue reading please visit the site. Remember to respect the Author & Copyright.

Given the prominence of PC gaming these days, it is easy to see how some believe the platform is enjoying a golden age. This belief isn’t meritless. With lower-priced parts, the continued ease […]

The post The Best PC Games of the 1990s appeared first on Geek.com.