A new wearable generator creates electricity from body heat

The content below is taken from the original (A new wearable generator creates electricity from body heat), to continue reading please visit the site. Remember to respect the Author & Copyright.

Now your sweaty body can power your phone. Like Neo in The Matrix, a new system created by researchers at North Carolina State University lets you generate electricity with a wearable device. Previous systems used massive, rigid heat sinks; this system uses a body-conforming patch that can generate 20 μW per square centimeter, where previous systems generated only 1 microwatt or less.

The system consists of a conductive layer that sits on the skin and prevents heat from escaping. The heat moves through a thermoelectric generator and then into an outer layer that dissipates it completely outside the body. The patch is 2mm thick and flexible.
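The output scales with the area of the patch. As a rough illustration, here is a minimal sketch; the 10 cm² patch size and the per-square-centimeter reading of the earlier 1 μW figure are assumptions, not numbers from the researchers.

```python
# Power from the patch scales with its area. At 20 microwatts per square
# centimeter, a hypothetical 10 cm^2 patch yields 200 microwatts, versus
# about 10 microwatts from earlier designs at roughly 1 uW/cm^2 (assumed).
def patch_power_uw(area_cm2, output_uw_per_cm2=20):
    return area_cm2 * output_uw_per_cm2

area = 10  # cm^2, an illustrative patch size, not a figure from the researchers
print(f"new design:       {patch_power_uw(area):.0f} uW")
print(f"previous designs: {patch_power_uw(area, output_uw_per_cm2=1):.0f} uW")
```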

The system, which is part of the National Science Foundation’s Nanosystems Engineering Research Center for Advanced Self-Powered Systems of Integrated Sensors and Technologies (ASSIST), has a clear path to commercialization.

The goal is to embed these into health tools that can measure your vital signs without needing to be recharged. “The goal of ASSIST is to make wearable technologies that can be used for long-term health monitoring, such as devices that track heart health or monitor physical and environmental variables to predict and prevent asthma attacks,” said researcher Daryoosh Vashaee, an associate professor at NC State. “To do that, we want to make devices that don’t rely on batteries. And we think this design and prototype moves us much closer to making that a reality.”

RadiantOne Consolidates Active Directory Domains/Forests Into Azure AD

The content below is taken from the original (RadiantOne Consolidates Active Directory Domains/Forests Into Azure AD), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today, Radiant Logic announced support for Azure AD, further strengthening the value of its flagship RadiantOne Federated Identity Service for the… Read more at VMblog.com.

KEMP 360 Cloud Powers High-Performance, Cost-Effective Application Migration to the Cloud

The content below is taken from the original (KEMP 360 Cloud Powers High-Performance, Cost-Effective Application Migration to the Cloud), to continue reading please visit the site. Remember to respect the Author & Copyright.

KEMP Technologies today announced the launch of KEMP 360 Cloud, a next-generation application delivery framework that helps customers migrate to… Read more at VMblog.com.

Could machine learning help Google’s cloud catch up to AWS and Azure?

The content below is taken from the original (Could machine learning help Google’s cloud catch up to AWS and Azure?), to continue reading please visit the site. Remember to respect the Author & Copyright.

Google has been offering public cloud services for several years now, but the company has continued to lag behind Amazon and Microsoft in customer growth. 

Under the leadership of VMware co-founder Diane Greene, who serves as the executive vice president of Google Cloud Enterprise, the tech titan has focused harder on forging partnerships and developing products to appeal to large customers. It has added a number of key customers under Greene’s tenure, including Spotify.  

One such win is Evernote, which announced Tuesday it would be migrating its service away from its private data centers and to Google’s public cloud. When Evernote was looking for a public cloud provider, the company was interested in not only the base level infrastructure available, but also high-level machine learning services and services for building machine learning-driven systems, said Anirban Kundu, Evernote’s CTO.

Tools like Google’s Cloud Natural Language API, a service that helps parse human language into something that a program could understand, were a major draw for Evernote. In fact, those high-level machine learning tools were enough to knock Amazon Web Services — often considered the leading public cloud platform — out of contention for Evernote’s business.
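For a flavor of what such a service looks like from the developer side, here is a minimal sketch of calling the Cloud Natural Language API's entity analysis endpoint over REST; the API key and sample text are placeholders, and error handling is kept to a bare minimum.

```python
# Minimal sketch: calling the Cloud Natural Language API's entity analysis
# endpoint over REST. The API key and sample text are placeholders; a real
# integration would also handle quotas, retries and errors.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder, issued from the Google Cloud console
ENDPOINT = "https://language.googleapis.com/v1/documents:analyzeEntities"

def extract_entities(text):
    payload = {
        "document": {"type": "PLAIN_TEXT", "content": text},
        "encodingType": "UTF8",
    }
    resp = requests.post(ENDPOINT, params={"key": API_KEY}, json=payload)
    resp.raise_for_status()
    # Each entity carries a name, a type (PERSON, LOCATION, ...) and a salience score.
    return [(e["name"], e["type"], e["salience"]) for e in resp.json().get("entities", [])]

print(extract_entities("Evernote is moving its service to Google Cloud Platform."))
```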

“AWS, they’re strong on the machine learning infrastructure side of things, but on the machine learning applications, we thought Google was definitely head-and-shoulders above them,” Kundu said in an interview.

Kundu’s assessment of Amazon’s machine learning capabilities was echoed on stage at TechCrunch Disrupt by LinkedIn co-founder Reid Hoffman, who said that Google and Microsoft were the leaders in cloud machine learning, while AWS is “working to get there.”

Moreover, Kundu said that Google’s services were more accurate than its remaining competition for the sort of tests that Evernote ran on them. In addition to looking at the possible features available from a platform’s machine learning services, it’s important for cloud customers to actually evaluate what those services are able to do, he said.

When asked whether Google ended up stronger than Microsoft when put to the test, Kundu said that he didn’t want to call one company weaker than the other, but that “the results were pretty strong in Google’s favor.” 

Those machine learning tools are important to Evernote, as the company looks to build on its existing popular note-taking app and create new intelligent applications. As more companies look to adopt machine learning tools, Google’s capabilities may help it close the gap with its competition in the public cloud market. 

“The proximity of strong developer services around data science is a key capability,” IDC analyst Al Hilwa said in an email. “Google has clearly invested greatly in this space in many ways because it generates enormous amounts of data and because of its investments in other AI initiatives over the years. What is new and interesting here is using these capabilities as a lever to attract customers like Evernote to its public cloud.”

What’s not clear yet is whether that lever will be successful over the long run in attracting companies to Google’s cloud instead of its competitors, especially since all of the major players in the cloud are constantly updating their capabilities. 

Evernote isn’t the only company using Google’s machine learning capabilities. Disney used the Cloud Vision API to build a marketing tool for its remake of “Pete’s Dragon,” while website building company Wix used the same tool to help its customers find images relating to the content of the site they’re working on. 

Actually getting Evernote moved over to Google’s cloud will be a massive task. The company has about 3 petabytes (a petabyte is 1,000TB) of data stored in its data centers. Evernote plans to begin migration in October, and hopes to have completed the move by the end of this year, Kundu said.
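A rough back-of-envelope calculation shows what that timeline implies; the 92-day window (October through December) and the decimal units (1 PB = 1,000 TB) are assumptions for illustration.

```python
# Back-of-envelope: sustained throughput needed to move ~3 PB between the
# start of October and the end of the year (about 92 days), using decimal
# units (1 PB = 1,000 TB = 10**15 bytes) as in the article.
data_bytes = 3 * 10**15
window_seconds = 92 * 24 * 3600

bytes_per_second = data_bytes / window_seconds
print(f"~{bytes_per_second / 10**6:.0f} MB/s sustained")        # roughly 380 MB/s
print(f"~{bytes_per_second * 8 / 10**9:.1f} Gbit/s sustained")  # roughly 3 Gbit/s
```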

At the same time, the company is working on developing new capabilities that take advantage of Google’s cloud, both in terms of machine learning applications and cloud infrastructure in general. 

StorSimple Virtual Array – File Server or iSCSI Server?

The content below is taken from the original (StorSimple Virtual Array – File Server or iSCSI Server?), to continue reading please visit the site. Remember to respect the Author & Copyright.

StorSimple Virtual Array (SVA) can be configured as a File Server or as an iSCSI Server. Configured as a File Server, StorSimple Virtual Array provides the native shares which can be accessed by users to store their data. StorSimple Virtual Array configured as an iSCSI server provides volumes (LUNs), which can be mounted on an iSCSI initiator (typically a Windows Server). This blog post looks at the various requirements that should be considered when choosing a configuration of the StorSimple Virtual Array for a remote or branch office.

Architecture

Architecture diagrams: Helsinki-FS (StorSimple Virtual Array configured as a File Server) and Helsinki-iSCSI (StorSimple Virtual Array configured as an iSCSI Server).

Requirement-by-requirement comparison of the StorSimple Virtual Array (SVA) file server and iSCSI server configurations:

Number of shares
  • File server: supports a maximum of 16 shares.
  • iSCSI server: recommended if the number of shares in the remote or branch office is larger than 16.

User self-restore
  • File server: users can restore their data from the previous five backups via the .backups folder available in the share.
  • iSCSI server: an administrator must restore the cloud snapshot as a new volume and then restore data from the restored volume.

Number of files in a share
  • File server: supports a maximum of 1 million files per share (a maximum of 4 million files in total on the file server).
  • iSCSI server: works at the block level and has no limitation on the number of files.

Maximum size of data
  • File server: supports a maximum share size of 2 TB for locally pinned shares and a maximum of 20 TB for tiered shares.
  • iSCSI server: supports a maximum volume size of 500 GB for locally pinned volumes and a maximum of 5 TB for tiered volumes.

Failover time
  • File server: failover time depends on the number of files in a share. During failover the directory structure is recreated, which takes roughly 20 minutes per 100,000 files. Hot data is downloaded in the background based on the heat map.
  • iSCSI server: provides instant failover (minutes to make the volume available). Only the metadata is downloaded during failover, and the volume is made available for use immediately afterwards. Hot data is downloaded in the background based on the heat map.

Active Directory domains
  • File server: must be joined to an AD domain.
  • iSCSI server: can optionally be joined to an AD domain, but it is not required. The iSCSI initiator may be joined to the domain or can be part of a workgroup in non-Active Directory environments.

File Server Resource Manager (FSRM) features (quotas, file blocking, etc.)
  • File server: does not support FSRM features.
  • iSCSI server: FSRM features can be enabled on the Windows Server iSCSI initiator connected to the SVA iSCSI server.
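The 20-minutes-per-100,000-files rule of thumb above is easy to turn into a quick planning estimate; a minimal sketch (the file counts are illustrative):

```python
# Quick planning estimate for SVA file server failover, using the rule of
# thumb of roughly 20 minutes per 100,000 files in a share.
def estimated_failover_minutes(file_count, minutes_per_100k=20):
    return file_count / 100_000 * minutes_per_100k

for files in (100_000, 500_000, 1_000_000):  # 1 million is the per-share maximum
    print(f"{files:>9,} files -> ~{estimated_failover_minutes(files):.0f} minutes")
```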

 

Useful Links:

StorSimple Virtual Array Overview

StorSimple Virtual Array Deployment Videos

StorSimple Virtual Array Best practices

pi-topRO – a proper portable RISC OS box

The content below is taken from the original (pi-topRO – a proper portable RISC OS box), to continue reading please visit the site. Remember to respect the Author & Copyright.

Finally, you can let your A4 cash in on its pension! Anyone familiar with the Raspberry Pi scene will be aware of the Pi-Top, a laptop computer based around the credit card-sized computer that was developed after a successful crowd-funding campaign on Indiegogo in 2014 – and anyone familiar with the RISC OS scene will […]

Down with idiotic disclaimer footers and dumb surveys!

The content below is taken from the original (Down with idiotic disclaimer footers and dumb surveys!), to continue reading please visit the site. Remember to respect the Author & Copyright.

How often do you receive an email message with a footer like this:

The contents are not to be disclosed to anyone other than the addressee. Unauthorised recipients must preserve this confidentiality and should please advise the sender immediately of any error in transmission.

This gem of pseudo-legal nonsense is from a message that I did not solicit and is the work of eDigitalResearch.com at the behest of “The Expedia Customer Experience Team.” At some point, someone in either or both organizations must have thought this footer was necessary, which is curious because the serious, weighty, highly private content in the message was a customer satisfaction survey addressed to me. Was this, you might be wondering, a very personalized survey with, perhaps, sensitive, personal data included? Nope, and here, at the very slight risk of their dogs of law snapping at me, is the message that had the above footer appended:

(Screenshot: the survey message)

Pretty personal, eh? The survey was triggered by my recent and very disappointing stay at the DoubleTree Club by Hilton Orange County Airport. I posted my review to Yelp but unlike other reviews I’ve written, neither Hilton nor Expedia has made any attempt to address my concerns.


What was even more annoying about this survey from Expedia was that days before, minutes after check-in at the hotel, they’d sent me another survey by email! As with the majority of these surveys, there’s no benefit whatsoever to the person being surveyed even when it provides the opportunity to complain because you are hugely unlikely to ever get any response. You have to wonder why anyone ever bothers responding.

So, back to the footer … why the stern and forbidding legalese and does it have any real legal juice behind it? I posted about this to my favorite list and a list-friend, Karl Hakkarainen of Queens Lake Consulting, replied:

Usual disclaimer: I am not a lawyer, etc. … In the short bit of research that I’ve done, I’ve yet to find a case where blather of this type led to any kind of meaningful outcome (good or bad). There might be a few cases out there, but the signal to noise ratio is extremely low. … The lawyers I know who actually get stuff done don’t have this detritus in their messages.  

After a bit of research I discovered a 2013 post on the American Bar Association’s site, Do Email Disclaimers Really Work? in which the author, John Hutchins, comments:

There is virtually no scholarly analysis of the impact of email disclaimers and very little analysis by non-scholars. One of the few authors who have commented on the subject suggests that the misconception that email disclaimers have validity may arise from the mistaken belief that the Electronic Communications Privacy Act (ECPA) somehow applies … But, as this author points out, the ECPA prohibits only “intercepting” emails. Emails that arrive at their destination—even if the sender did not intend that destination—are not emails that have been intercepted in transit, and the ECPA is therefore inapplicable. In short, the ECPA is focused on the criminal intent of the interceptor, not the ability of the sender to execute his or her own intentions.

Interesting! But, confusingly, Hutchins adds:

I’m not suggesting that everyone who uses disclaimers at the end of emails immediately run out and order the IT staff to delete them. Although rare, there are circumstances where “everybody does it” is as good a reason to do something as any. Email disclaimers cost nothing, and you certainly won’t be in the minority if you use them.

So, what this advice amounts to is that legal footers are most likely completely pointless but, go on, continue to act like sheep. Great. Another opinion, this time from the Economist’s post, Spare us the e-mail yada-yada:

E-mail disclaimers … are mostly, legally speaking, pointless. Lawyers and experts on internet policy say no court case has ever turned on the presence or absence of such an automatic e-mail footer in America, the most litigious of rich countries.

Many disclaimers are, in effect, seeking to impose a contractual obligation unilaterally, and thus are probably unenforceable. This is clear in Europe, where a directive from the European Commission tells the courts to strike out any unreasonable contractual obligation on a consumer if he has not freely negotiated it. 

So, as is common in many situations where the Internet runs full tilt into how we used to do business, the answer as to whether disclaimer footers are a good thing is a faint maybe, possibly, sort of yes, but, in reality, no, not at all; they’re just a waste of time. As for the endless surveys every time you buy something or stay in a hotel or do pretty much anything, they’re also a total waste of everybody’s time. So, if your organization is adding disclaimer footers or sending out pointless, stupid surveys please stop it and make our lives and our email just a little less annoying.

Comments? Thoughts? Connect to me or comment below then follow me on Twitter and Facebook.

Google DeepMind’s AI can mimic realistic human speech

The content below is taken from the original (Google DeepMind’s AI can mimic realistic human speech), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s still pretty easy to tell whether it’s a real person who’s talking or a text-to-speech program. But there might come a time when a robot could dupe you into thinking that you’re speaking with a real person, thanks to a new AI called WaveNet developed by Google’s DeepMind team. They have a pretty good track record when it comes to building neural networks — you probably know them as the folks who created AlphaGo, the AI that defeated one of the world’s best Go players.

Currently, developers use one of two methods to create speech programs. One involves using a large collection of words and speech fragments spoken by a single person, which makes sounds and intonations hard to manipulate. The other forms words electronically, depending on how they’re supposed to sound. That makes things easier to tweak, but the results sound much more robotic.

In order to build a speech program that actually sounds human, the team fed the neural network raw audio waveforms recorded from real human speakers. Waveforms are the visual representations of the shapes sounds take — those squiggly waves that squirm and dance to the beat in some media player displays. As such, WaveNet speaks by forming individual sound waves. (By the way, the AI also has a future in music. The team fed it classical piano pieces, and it came up with some interesting samples on its own.)
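The core idea, generating audio one sample at a time with each new sample conditioned on everything generated so far, can be sketched in a few lines. The toy loop below is purely illustrative: the stand-in predictor is not DeepMind's network, which is a deep convolutional model trained on real speech.

```python
# Toy illustration of sample-by-sample (autoregressive) audio generation.
# A real WaveNet replaces `next_sample` with a deep convolutional network
# trained on raw waveforms; the stand-in below is NOT a trained model.
import math
import random

def next_sample(history):
    """Stand-in predictor: next amplitude as a decaying echo of recent samples plus noise."""
    recent = history[-16:]
    return 0.6 * sum(recent) / len(recent) + 0.05 * random.uniform(-1, 1)

def generate(num_samples=16000, sample_rate=16000):
    # Seed with a short 440 Hz tone, then extend it one sample at a time,
    # each new sample conditioned on everything generated so far.
    waveform = [math.sin(2 * math.pi * 440 * t / sample_rate) for t in range(64)]
    for _ in range(num_samples):
        waveform.append(next_sample(waveform))
    return waveform

audio = generate(1000)
print(f"generated {len(audio)} samples, last value {audio[-1]:+.4f}")
```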

For instance, if used as a text-to-speech program, it transforms the text you type into a series of phonemes and syllables, which it then voices out. Subjects who took part in blind tests thought WaveNet’s results sounded more human than the other methods’. In the AI’s announcement post, DeepMind said it can "reduce the gap between the state of the art and human-level performance by over 50 percent" based on those English and Mandarin Chinese experiments. You don’t have to take the team’s word for it: We’re still far from using a WaveNet-powered app, but you can listen to some samples on DeepMind’s website.

Via: Bloomberg

Source: DeepMind, WaveNet (PDF)

Set up Windows 10 in Kiosk Mode using Assigned Access

The content below is taken from the original (Set up Windows 10 in Kiosk Mode using Assigned Access), to continue reading please visit the site. Remember to respect the Author & Copyright.

You can set up Windows 10 Pro, Windows 10 Enterprise and Windows 10 Education as a device in the Kiosk mode, to run a single Universal Windows app using the Assigned Access feature. This post shows how to do it.

Assigned Access feature in Windows 10

The Kiosk mode is useful if you want to create a locked-down environment, set up and display a Windows system in a general public area, and let any user access and use a single app for a particular function – e.g. as an information kiosk or a kiosk for checking the weather, and so on.

For a kiosk device to run a Universal Windows app, we can use this Assigned Access feature. For Windows 10 Enterprise or Education to run classic Windows software, you need to use Shell Launcher to set a custom user interface as the shell.

When you use the Assigned Access feature, the user does not have access to the desktop, Start Menu or any other part of the computer. They can only access and use that particular app.

Set up Windows 10 in Kiosk Mode using Assigned Access

Open Windows 10 Settings and select Accounts. Click on Family & other people on the left side to open the following settings.


Scroll down and towards the end you will see a Set up assigned access link. Click on it to open the following window.


Now you will have to choose an account under which you want to run the device in Kiosk mode.


Having done this, you will next have to click on the Choose an app link and, from the pop-up, select the Universal Windows app to which you would like to give access.


Restart your computer so that you sign out of all user accounts.

TIPS:

  1. To sign out of an assigned access account, since you may not have access to the Start Menu, you will have to use Ctrl+Alt+Del.
  2. To change the Universal app, click on the app (In our example, the Maps app) and select another app from the popup.
  3. To remove the account, select the Kiosk user account here and then select Don’t use Assigned Access from the pop-up which appears.

Secure Windows 10 Kiosk Mode

For a more secure kiosk experience, you want to make further configuration changes to the device:

  1. Open Settings > System > Tablet mode and choose On to put the device in Tablet mode.
  2. Go to Settings > Privacy > Camera, and turn off Let apps use my camera, to disable the camera.
  3. Go to Power Options > Choose what the power button does, change the setting to Do nothing, and then Save changes. This will disable the hardware power button.
  4. Go to Control Panel > Ease of Access > Ease of Access Center, and turn off all accessibility tools.
  5. Run GPEDIT and navigate to Computer Configuration > Windows Settings > Security Settings > Local Policies > Security Options > Shutdown: Allow system to be shut down without having to log on and select Disabled. This will remove the power button from the sign-in screen.
  6. Open the Group Policy Editor > Computer Configuration > Administrative Templates > System > Logon > Turn off app notifications on the lock screen.
  7. To disable removable media, in the Group Policy Editor, navigate to Computer Configuration > Administrative Templates > System > Device Installation > Device Installation Restrictions. Make suitable changes here, but ensure that you allow administrators to override Device Installation Restriction policies.

For more details on how you can configure a device running Windows 10 Pro, Windows 10 Enterprise, Windows 10 Education, Windows 10 Mobile, or Windows 10 Mobile Enterprise as a kiosk device, and further lock it down, visit this TechNet link.

FrontFace Lockdown Tool is freeware that can help you protect Windows PCs that are used as public kiosk terminals.

Today’s supercomputers will get blown away by these systems

The content below is taken from the original (Today’s supercomputers will get blown away by these systems), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Department of Energy says the $40 million it is investing in nearly two dozen multi-year projects will result in exascale computing systems that perform calculations on data 50 to 100 times faster than today’s most powerful supercomputers.

The DoE Exascale Computing Project says such high-performance computing systems can make at least a billion billion calculations per second, and will be used to process data for applications such as energy security, economic security, scientific discovery, healthcare and climate/environmental science. The U.S. is shooting to attain such powerful systems by the mid-2020s and China is aiming for 2020.
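As a quick sanity check on those figures, "a billion billion" works out to 10^18 operations per second (one exaflop); the 50-to-100-times comparison corresponds to a baseline in the 10 to 20 petaflop range, which is assumed below purely for illustration.

```python
# "A billion billion" calculations per second is 10**18 FLOPS, i.e. one exaflop.
# The 10-20 petaflop baselines below are illustrative 2016-era figures, chosen
# to show where the quoted 50-100x speedup comes from.
exaflop = 10**9 * 10**9  # = 10**18 operations per second

for baseline_pflops in (10, 20):
    speedup = exaflop / (baseline_pflops * 10**15)
    print(f"exascale vs a {baseline_pflops} PFLOPS system: {speedup:.0f}x faster")
```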

MORE: The 10 Most Powerful Supercomputers in the World

The project fits with President Obama’s overall National Strategic Computing Initiative, which is also being undertaken by the Department of Defense and the National Science Foundation.

Leading the Exascale Computing Project will be six DoE labs, including Argonne, Lawrence Berkeley, Oak Ridge, Los Alamos, Lawrence Livermore and Sandia.

A sampling of projects (15 fully funded, 7 with seed funding): 

  • Exascale Deep Learning and Simulation Enabled Precision Medicine for Cancer
  • Data Analytics at the Exascale for Free Electron Lasers
  • Computing the Sky at Extreme Scales

MORE: 10 of today’s really cool network & IT research projects

Microsoft baits backup blimps with Azure upgrade

The content below is taken from the original (Microsoft baits backup blimps with Azure upgrade), to continue reading please visit the site. Remember to respect the Author & Copyright.

Backup and disaster recovery are brilliant applications for cloud and Microsoft now reckons it has made Azure an enterprise-grade contender, at least for workloads and data already in Azure.

Redmond’s revealed that Azure Backup’s had an upgrade that delivers the following:

  • Ability to view all backup policies in a Recovery Services vault from a single window
  • Ability to add a new policy from policy list view
  • Ability to edit a backup policy to match modified backup schedule and retention requirements – once a backup policy is updated, changes are pushed automatically to all virtual machines configured with the policy
  • Ability to add items to a backup policy – add more virtual machines to an existing backup policy in a single click
  • Get a view of virtual machines protected with a policy
  • Delete a backup policy which is no longer in use

There’s nothing startling on that list, but there are some nice automation touches and sensible enhancements that, if applied thoughtfully, should protect Azure VMs rather nicely.

And enterprise grade? There’s a bit of “any colour you want so long as it is black” in that claim, seeing as Azure Backup only backs up Azure. At the very least it’s a very useful set of new features that makes Azure more robust while highlighting how the cloud can be a rather simpler environment than the complication that is on-premises storage held across storage appliances that may or may not have been made with backup in mind. But there’s still plenty of room for the likes of Veeam, CommVault and their ilk to point out how they can add value and nuance. ®


Still got a floppy drive? Here’s a solution for when 1.44MB isn’t enough

The content below is taken from the original (Still got a floppy drive? Here’s a solution for when 1.44MB isn’t enough), to continue reading please visit the site. Remember to respect the Author & Copyright.

Floppy disk sales have, well, flopped but there are still masses of PCs and old embedded PC-based systems out there with floppy disk slots and drives. Now this near-dead space can be made usable again, with a 32GB FLOPPYFlash drive from Solid State Disks Ltd.

It’s a drop-in replacement for a floppy disk drive and takes CompactFlash solid state cards instead of floppy disk media. The firmware is field-upgradable via an included USB port.

The device has a 3.5-inch footprint and uses a standard 34 pin floppy disk drive connection, needing a 5V power supply. It also supports 26 pin / 34 pin slim and Shugart floppy connections.

Data transfer rates can be set between 125 Kbit/s and 500 Kbit/s depending on whether the matching encoding method is FM, MFM or MMFM. The emulated track configuration is programmable.
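To put those rates in context, here is a rough estimate of how long a full 1.44MB floppy image would take to stream at each end of that range; ignoring protocol and encoding overhead is an assumption made for simplicity.

```python
# Rough transfer-time estimate for a full 1.44 MB floppy image (1,474,560 bytes)
# at the FLOPPYFlash data-rate extremes. Protocol/encoding overhead is ignored.
IMAGE_BYTES = 1_474_560  # classic "1.44MB" floppy capacity

for kbits_per_s in (125, 500):
    seconds = IMAGE_BYTES * 8 / (kbits_per_s * 1000)
    print(f"{kbits_per_s} kbit/s -> ~{seconds:.0f} seconds per disk image")
```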

TCP/IP networking via standard RJ45 Ethernet connection is also supported, allowing FLOPPYFlash to be connected to any existing local area network for remote configuration, control, diagnostics, backup and restore.


FLOPPYFlash drive and CompactFlash media

FLOPPYFlash is also available as a network drive upgrade with IP address set up at the factory. It ships with Solid State Disks’ FLASH2GUI graphical user interface used for control and configuration of the drive as well as backup and restore operations.


FLOPPYFlash network drive GUI

SSD tells us: "The use of solid state technology also delivers greatly increased reliability (MTBF) and media life, improved environmental efficiency with lower power consumption, noise and heat generation, and a reduction in unplanned downtime."

If you still have a PC with a floppy disk drive installed then (1) congratulations, and (2) we’d be surprised if you care much about improved environmental efficiency with lower power consumption, noise and heat generation, although a reduction in downtime and improved media reliability probably do matter a lot.

Setting irony aside, this use of flash drive media in a floppy disk drive bay looks like a great little idea. ®

* There have long been rumours that certain launch codes are stored on a floppy….


Windows 10 Enterprise E3 Now Available, E5 Coming October 1st For $14 Per User

The content below is taken from the original (Windows 10 Enterprise E3 Now Available, E5 Coming October 1st For $14 Per User), to continue reading please visit the site. Remember to respect the Author & Copyright.


Earlier this year, Microsoft announced a new Windows subscription service for enterprise clients that would offer Windows 10 Enterprise starting at $7 a month. The SKU is offered by Cloud Solution Providers, but for most companies the E5 offering is the one they will want, and that option will not launch until October 1st.

For larger companies, the E5 version of Windows 10 Enterprise is the more compelling offer as it includes Windows Defender Advanced Threat Protection that will help mitigate attacks on a corporate network and combat the rising threat of ransomware. As for a price, Microsoft is setting a $14 estimated retail price point for E5 but the price could vary by Cloud Service Provider depending on features included with the SKU.

One of the new features in Windows 10 is that you can upgrade from Windows 10 Pro to Windows 10 Enterprise E3 without rebooting. For those looking to upgrade to Windows 10 Enterprise without the subscription fee, that option is still available through the traditional licensing channels.

Moving Windows 10 to a subscription model for the Enterprise is a change that has been expected ever since Office 365 was announced. The company, at this time, has not made any indication if they will attempt to move the consumer version of Windows to a subscription model; I don’t see that happening in the near term but it is always a possibility.

The post Windows 10 Enterprise E3 Now Available, E5 Coming October 1st For $14 Per User appeared first on Petri.

Best practices for incident response in the age of cloud

The content below is taken from the original (Best practices for incident response in the age of cloud), to continue reading please visit the site. Remember to respect the Author & Copyright.

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

Most CISOs receive a rude awakening when they encounter their first major security issue in the cloud. If they identify a critical vulnerability that requires a patch, they may not have the authorization to tweak the cloud provider’s pre-packaged stack. And if the customer does not own the network, there may not be a way to access details that are critical to investigating an incident.

In order to avoid a major security issue in the cloud, CISOs must have an incident response plan. Here is how to build one:

1.  Establish a joint response plan with the cloud provider. If you have not yet moved to the cloud, the most practical first step is to establish a joint response process. Responsibilities and roles should be clearly defined, and contact information for primary and secondary contacts should be exchanged. Obtain a detailed explanation of what triggers the provider’s incident response and how the provider will manage different issues.

2.  Evaluate the monitoring controls and security measures that are in place in the cloud. For an effective response on security issues related to cloud infrastructure, it is important to understand what kind of monitoring and security measures the cloud provider has in place and what access you have to those tools. If you find they are insufficient, look for ways you can deploy a supplemental fix.

3.  Build a recovery plan. Decide whether recovery will be necessary in the event of a provider outage. Create a recovery plan that defines whether to use an alternate provider or internal assets as well as a procedure to collect and move data.

4. Evaluate forensic tools for cloud infrastructure. Find out what tools are available from the cloud provider or from other sources for conducting forensics in case of an incident. If the incident involves personally identifiable information (PII), it might turn into a legal and compliance challenge, so having appropriate tools which can help with forensics and evidence tracking is essential.

Handling an incident in the cloud

Many incident response steps are similar whether you are dealing with the cloud or a local installation. However, there are some additional steps you may need to take in the case of a cloud incident:

  • Contact your provider’s incident response team immediately, and be aggressive in your communications. If the provider’s team cannot be reached, do everything you can on your end to contain the incident, like controlling connections to the cloud service and revoking user access to the cloud service in question.
  • If the incident cannot be controlled or contained, prepare to move to an alternate service or set up an internal server.
  • The cloud allows you to delay identification and eradication until the crisis has passed. In most cases, you can proceed immediately to restore production services by instantiating a new instance.

Best practices for incident response in the cloud

One critical issue that many enterprises face is the lack of talent possessing the proper skills to manage security. It is difficult to find the right candidates, and if you locate them, you can expect to have to pay top salaries. By the end of 2024, the Bureau of Labor Statistics expects information security analyst jobs to grow 18%, and salaries are already averaging well into six figures.

However, there are some steps that you can take to bring new employees up to speed quickly or enhance the skills of existing employees:

  • Promote collaboration to help junior analysts benefit from the experience of senior analysts. As a bonus, collaborative efforts may reveal duplicate efforts that can be eliminated.
  • Create playbooks that prescribe standard procedures for responding to incidents. Naturally, you cannot create a guide for every potential situation, but playbooks can be valuable guides and excellent training materials. Just remember to keep playbooks updated, which is a task that can often be automated. A minimal playbook-as-data sketch follows this list.
  • Speaking of automation, many tasks can be automated, especially if they are repetitive and routine. Mundane tasks take up an unjustifiable amount of time. Automation can free your staff members for more important tasks.
  • Foster situational awareness from both the historical and real-time points of view. An effective analysis of past incidents can help you make better decisions about current incidents.
  • Analyze incidents and create a database to help determine the types of problems encountered, the skills needed to address the issue, the frequency of each type of incident, and other facts. Analysis can help you identify vulnerabilities and determine where to bolster security.
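As referenced above, one lightweight way to encode playbooks is as plain data that a script or an orchestration tool walks through. The incident types and steps below are illustrative placeholders, not a prescribed standard:

```python
# Minimal sketch of a "playbook as data" approach: standard response steps
# keyed by incident type, so junior analysts follow the same procedure as
# senior ones. Incident types and steps here are illustrative placeholders.
PLAYBOOKS = {
    "compromised_cloud_credentials": [
        "Revoke the affected user's access to the cloud service",
        "Rotate API keys and session tokens",
        "Notify the cloud provider's incident response contact",
        "Review audit logs for actions taken with the credentials",
    ],
    "provider_outage": [
        "Confirm scope via the provider's status page and support contacts",
        "Decide whether to fail over to the alternate provider or internal assets",
        "Execute the data collection and move procedure from the recovery plan",
    ],
}

def run_playbook(incident_type):
    steps = PLAYBOOKS.get(incident_type)
    if steps is None:
        raise ValueError(f"No playbook for incident type: {incident_type}")
    for i, step in enumerate(steps, start=1):
        print(f"[{incident_type}] step {i}: {step}")

run_playbook("compromised_cloud_credentials")
```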

Like most security best practices related to cloud applications, incident response is also a shared responsibility. Planning ahead for incident response is critical to make sure you have the right contacts, tools and processes in place. Having an incident response platform that can enable collaboration for internal and external teams, track incident response processes and automate key security tasks, is essential in the time of crisis to contain issues quickly and respond effectively.

For more information, visit www.Demisto.com.

Brave browser lets you tip your favorite sites in bitcoin

The content below is taken from the original (Brave browser lets you tip your favorite sites in bitcoin), to continue reading please visit the site. Remember to respect the Author & Copyright.

Brave, the browser meant to block ads and trackers by design, has launched what might be a fairly controversial service: the ability to tip websites via bitcoin.

Now, after attaching your bitcoin wallet to the Brave browser, you can either tip sites manually or do so via automated installments meant to be sent regularly. They’re totally anonymous, according to Brave, and the company states that neither it or anyone else will be able to connect page views with these small tips. Publishers will need to authenticate their identities before claiming their "tips."

Certain publishers haven’t been so keen on Brave’s initiative to keep the browser free of ads, so this is likely Brave’s way of trying to curry favor with those that feel as though they were burned by its mission statement.

Brave users previously had the option of paying for an ad-free experience (blocking Brave’s smaller ads) if they desired by pulling funds from donations or their bitcoin wallet. This is an interesting step forward for the browser that seems to be an attempt to smooth things over on both the publisher- and customer-facing fronts.

If you’re interested in trying out Brave, you can get it here.

Via: Ars Technica UK

HDMI hooks up with USB-C in cables that reverse, one way

The content below is taken from the original (HDMI hooks up with USB-C in cables that reverse, one way), to continue reading please visit the site. Remember to respect the Author & Copyright.

HDMI Licensing, the administrator of the High-Definition Multimedia Interface (HDMI) spec, has decided that the time has come to do away with dongles and given the thumbs-up to USB-C.

“The HDMI cable will utilize the USB Type-C connector on the source side and any HDMI connector on the display side,” HDMI licensing says. “Unlike the other Alt Mode display technologies which require various adapters or dongles to connect to HDMI displays, HDMI Alt Mode enables an easy connection via a simple USB Type-C to HDMI cable.”

USB-C to HDMI cable

Only one side of this cable can be reversed. So good luck with that

HDMI Licensing says it’s made the decision to hook up with USB-C because gazillions of devices will use it any month now, so it makes sense to give the people what they want and just let their new kit plug straight into tellies and monitors rather than making them faff about with dongles.

The new cables will support HDMI 1.4b features such as resolutions up to 4K, Audio Return Channel, 3D, HDMI Ethernet Channel, and Consumer Electronic Control.

All of which is good news for punters and handy for business device-buyers who can now plan on HDMI as a pretty sensible display standard. The news may be less well-received by Intel, which jumped through a lot of hoops to make its own Thunderbolt standard USB-C compatible but kept its own distinctive connector. ®

Sponsored:
Application managers: What’s keeping you up at night?

Ubuntu 16.04 kisses the cloud, disses the desktop

The content below is taken from the original (Ubuntu 16.04 kisses the cloud, disses the desktop), to continue reading please visit the site. Remember to respect the Author & Copyright.

With Ubuntu 16.04LTS (Xenial Xerus), Canonical has introduced incremental improvements to the popular server and cloud versions of its operating system, but if you were looking for exciting changes to desktop Ubuntu, this version isn’t it.

The 16.04 release is an iterative, not necessarily massive, improvement. But this is a Long Term Support (LTS) version, which means that there’s a team working on keeping it solid for five years. So, into the next decade, 16.04 gets patched and fixed, as other versions continue to be released on a regular basis.

In this new release, Ubuntu further strays from the Red Hat/SUSE/CentOS/Oracle school of software packaging by officially supporting an important new tool: Snap, a package manager.

Ubuntu is based on Debian, and any Ubuntu/Debian/Linux Mint user has the finger memory to use 'apt-get' and 'wget' to obtain and install software. Canonical would rather you source through them, using Snap.

Snap can be used alongside the current Debian-friendly software package updating processes. It’s important to note that in order to install packages wrapped by Snap you’ll need an Ubuntu One account.

+ MORE ON NETWORK WORLD Mark Shuttleworth: ‘Ubuntu keeps GNU/Linux relevant’ +

You might remember when Ubuntu One’s personal repository features were closed, but as a store, Ubuntu One never closed, and that’s where Snaps will be examined and downloaded. We tried it and decided that it’s an improvement worth using, if you can tolerate the sense of being tracked.

The second major change is that Ubuntu now allows its installations to use supported ZFS and Ceph filing systems.

Invented by Sun prior to its acquisition by Oracle, ZFS was, at the time, a visionary replacement for a number of journaling filing systems, like the Andrew File System (AFS) and NTFS. It was thought to be a potential replacement for the ext3, ext4, and Reiser filing systems.

The idea was to prevent a number of maladies that caused system failures, like power interruptions during read/write processes, disk errors, storage cache errors, timing issues, and other disk and storage problems.

While Apple was rumored to be implementing ZFS, they didn’t, and the FreeBSD community picked up ZFS — the OpenSolaris version — and used the port successfully in many NAS versions. A Linux port of ZFS followed the BSD port, and has been around for a while. The inclusion into Ubuntu gives it a seal of approval.

Cloud considerations

Ceph is a filing system of a different feather. Its first stable release coincides with the release of Ubuntu 16.04. What makes it different is that Ceph is not unlike software-based Redundant Array of Inexpensive Disks (RAID) as a software service and filing system.

Ceph can store files and folders in the traditional desktop way, but Ceph can also store as a block device (e.g. arbitrary or standardized chunks of data) or data as objects — all of which are compatible with cloud computing.
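For a concrete sense of the object side, Ceph clusters commonly expose an S3-compatible interface through the RADOS Gateway, so ordinary S3 tooling can work against them. A minimal sketch with boto3 follows; the endpoint, credentials and bucket name are placeholders.

```python
# Sketch: talking to Ceph's object store via its S3-compatible RADOS Gateway,
# using standard S3 tooling. Endpoint, credentials and bucket are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://ceph-rgw.example.internal:7480",  # placeholder RADOS Gateway endpoint
    aws_access_key_id="CEPH_ACCESS_KEY",
    aws_secret_access_key="CEPH_SECRET_KEY",
)

s3.create_bucket(Bucket="backups")
s3.put_object(Bucket="backups", Key="hello.txt", Body=b"stored as an object in Ceph")
print([o["Key"] for o in s3.list_objects_v2(Bucket="backups").get("Contents", [])])
```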

NET RESULTS

COMPANY: Canonical
PRODUCT: Ubuntu 16.04 Cloud Server and Desktop Editions
PRICE: Free
PROS: Strong server/cloud edition fleet/cluster advances; widely varied processor support
CONS: Mir still absent, no big desktop changes

Ceph is a well-known member of various OpenStack cloud deployments, and Canonical’s deployment of OpenStack, in turn, becomes a part of its services structure, somewhat rubbing competitively against initiatives by Red Hat.

Canonical also offers Autopilot on its website, an OpenStack control mechanism with a free trial for up to 10 servers and $750 per server per year after that. Conceptually, this allows an analyst or developer to build the control mechanisms on VMware vSphere, using MaaS (Ubuntu’s Metal-as-a-Service server/instance distribution system) with OpenStack software devices/services, such as networking, storage, KVM compute in various configurations.

This system is very simple to test for those familiar with VMware and or Linux Containers (LXC) and gives a flavor for OpenStack control to novices. It was trivial for us to deploy.

Canonical will also build and deploy an OpenStack network via their BootStack service for just $15 per server per month including support.

In a bare metal install, we were able to add Windows Server 2012 R2 Data Center Edition as an OpenStack-managed host, although recipes for this sort of hypervising aren’t easy to find. The proof of concept worked.

What this speaks to is the sheer automation speed of compute, storage, and network build-out — and potentially low cost — of OpenStack managed resources. At this writing, Ubuntu leads in both AWS instances, and also those deployed with OpenStack overall as client instances.

Desktop takes a back seat

Ubuntu’s Unity desktop is the same old stuff, although it’s grown to a 1.49GB download. We found it inconvenient that the long list of download sources is not optimized for speed; we found Tor sources to be the fastest, and the Tor sources checksummed correctly.

The Mir display server underpinnings, designed to become a unified replacement for Unity’s X Windows substrate, are, well, promised again for 16.10, due perhaps later this year. This apple of a user interface in Canonical’s eye has taken longer to bring to all devices than was originally planned. Although there are many under-the-hood changes, we find none particularly notable.

+ PAST REVIEW: Canonical continues cloud push with Ubuntu 15.04 +

Things that make us growl: The Desktop is missing Mir. It’s also missing initial hard password enforcement. You can still boot to a desktop that’s free and open with no password. There is no install-time SAML authentication or secondary or OAuth provider to immediately proxy users safely to an SSO provider, although there is an http proxy available at installation.

Tablet versions of Ubuntu are rarer to find. BQ, an integrator/OEM of Ubuntu on tablets, sells them mainly in the EU market. We were unable to put our hands on one for purposes of this review. The same problem exists for Ubuntu Phone.

Cloud images for download were found in multiple packaging formats for 16.04, if you like i386 and x64 processor families, and most do. These images are ready to download, or can be spun up in numerous clouds.

SCORECARD

Features: 4.5
Performance/Security: 3.5
Manageability/administration: 4
Usability/Docs: 4
Overall: 4

Server images have a comparatively greater choice of processor families: x86, x86-64, ARM v7, ARM64, POWER8 and IBM s390x (LinuxONE) are available for download.

What does it all mean?

Canonical has taken Ubuntu to a very strong position, and now incrementally adds features that end up sanctioning — through its support — features that others might have not considered to be ready for production use.

ZFS and Ceph are sophisticated filing systems; they take up more CPU for casual use and offer broadly sophisticated architectural possibilities, now also joined to OpenStack and Ubuntu’s Juju advocacy as rapid platform construction tools.

Juju construction kits are now available for a number of system platforms, ranging from simple WordPress to big/fat data analysis kits. While these don’t necessarily rival Docker container farms, they can be used almost as easily as Docker, rkt, and other management substrates in prototyping.

For now, using AutoPilot and OpenStack is the supported choice, and Canonical takes the nexus of making things work upon themselves, just as Red Hat must make its supported kit work. We expect Ubuntu’s popularity to continue unabated, but there isn’t anything magnificent in the 16.04 release, except continuing breadth with depth.

How We Tested Ubuntu 16.04

We downloaded and used Ubuntu 16.04 x64 cloud, server, and desktop versions, then deployed them mostly as VMs, excepting two tests. We also tested 16.04 on AWS. The cloud versions were used on local hardware using AutoPilot, Juju, and OpenStack in the lab on an HP Microserver Gen8 (i3 chipset, 8GB of memory, 1.5TB of disk, twin Gigabit Ethernet ports) testing Ubuntu and Windows 2012 R2 as VM instances.

We also deployed 16.04 server and desktop on the following platforms in our network operations center: VMware ESXi 5.1, 5.5, and 6.0, as well as Hyper-V 3.1, and XenServer 6.5.0. Additionally, we installed 16.04 desktop and server versions as virtual machines under Parallels for Mac 11.

We noted that little has changed from a deployment context from the last LTS version, 14.04, and scripts for 14.04 work with 16.04 in terms of PXE and Ubuntu’s Metal-as-a-Service — now enhanced by the OpenStack management plane. The versions are interchangeable for one-off install scripts, such as PXE loads.

The NOC is powered by HP Gen8 and Gen9 servers, a Lenovo multi-core server, and a Dell 730xd multicore server, all connected via Extreme Summit Series 10GBE switches to Expedient Data Center’s core network in Indianapolis.

CloudAcademy Announces Free Webinars and Online Courses for Cloud Computing Developers

The content below is taken from the original (CloudAcademy Announces Free Webinars and Online Courses for Cloud Computing Developers), to continue reading please visit the site. Remember to respect the Author & Copyright.

Anyone who wants to learn more about the basics of cloud computing, as well as more advanced cloud developers and programmers who want to hone their skills, can take advantage of new, entirely free content from Cloud Academy (www.cloudacademy.com), the leading provider of continuous training and certification for Amazon Web Services, Microsoft Azure and Google Cloud Platform.

Basics Course: What is Cloud Computing
For beginners, Cloud Academy has launched a new, totally free introductory course entitled "What is Cloud Computing." It provides a comprehensive overview of the cloud computing landscape and is a place to start for those thinking about adding cloud programming to their skillset. It is intended for anyone who wants to learn about cloud computing, but may have little or no experience.

Course objectives include:

  • A clear definition of what cloud computing is
  • A comprehensive understanding of cloud computing
  • An understanding of cloud computing benefits and key concepts
  • An understanding of when and where to use it, using the appropriate industry models.

Those interested simply sign up and then download the course to cover at their individual pace.

Advanced Webinars                                 
For more experienced developers, Cloud Academy announces two new free webinars covering Docker and Google Container Engine, and Amazon Web Services:

  • Newly added to Cloud Academy’s video library is a webinar about how to create a containerized application with Docker and deploying it to Google Container Engine. It includes step-by-step instructions, complete with screen captures, and is designed for developers who are new to Docker or Container Engine but have knowledge of and experience with cloud development.
  • On August 31, 2016, from 1:00-2:00 PM EDT, Cloud Academy will hold a live webinar called "AWS Lambda: Advanced Coding Session." This will be a more advanced session, during which Alexa Skills (Amazon Echo), Kinesis, IoT, Cognito Sync, and CloudFormation will be covered. Simply sign up here.

Security an afterthought in connected home, wearable devices

The content below is taken from the original (Security an afterthought in connected home, wearable devices), to continue reading please visit the site. Remember to respect the Author & Copyright.

Based on an extensive review of publicly reported internet of things (IoT) device vulnerabilities, the Online Trust Alliance (OTA) today announced that all of the problems could have been easily avoided.

“In this rush to bring connected devices to market, security and privacy is often being overlooked,” Craig Spiezle, executive director and president of the OTA, said in a statement today. “If businesses do not make a systematic change, we risk seeing the weaponization of these devices and an erosion of consumer confidence impacting the IoT industry on a whole due to their security and privacy shortcomings.”

If only they had listened …

The OTA, a nonprofit group comprised of academics and representatives from the public and private sector, is dedicated to developing and advocating best practices and policy concerning security and privacy. Researchers from the OTA recently analyzed publicly reported vulnerabilities for consumer connected home and wearable technology products from November 2015 through July 2016. They found that in each case, if device manufacturers and developers had implemented the security and privacy principles outlined in the OTA IoT Trust Framework, the vulnerabilities would not have occurred.

[ Related: Connected medical device makers need to step up security ]

“Security starts from product development through launch and beyond, but during our observations we found that an alarming number of IoT devices failed to anticipate the need of ongoing product support,” Spiezle said. “Devices with inadequate security patching systems further opens the door to threats impacting the safety of consumers and businesses alike.”

Most glaring security flaws

OTA revealed its findings today at the American Bar Association’s 2016 Business Law Section Annual meeting in Boston.

OTA said the most glaring failures it found were attributed to the following causes:

  • Insecure credential management, including making administrative controls open and discoverable
  • Not adequately and accurately disclosing consumer data collection and sharing policies and practices
  • The omission or lack of rigorous security testing throughout the development process, including but not limited to penetration testing and threat modeling
  • The lack of a discoverable process or capability to responsibly report observed vulnerabilities
  • Insecure or no network pairing control options (device to device or device to networks)
  • Not testing for common code injection exploits
  • The lack of transport security and encrypted storage including unencrypted data transmission of personal and sensitive information including but not limited to user ID and passwords
  • Lacking a sustainable and supportable plan to address vulnerabilities through the product lifecycle, including the lack of software/firmware update capabilities and/or insecure and untested security patches/updates

“The Online Trust Alliance’s IoT Trust Framework includes valuable principles that companies should embrace to make sure consumer smart home technology is secure, private and sustainable for the future,” Tom Salomone, president of the National Association of Realtors (NAR) and broker-owner of Real Estate II in Coral Springs, Fla., said in a statement today. “Device vulnerabilities need to be understood and addressed in order to protect what is near and dear to anyone using smart and connected device technology in their home.”

[ Related: White-hat hackers key to securing connected cars ]

The OTA’s IoT Trust Framework is a global, multi-stakeholder effort to address IoT risks comprehensively. The OTA began developing the framework in February 2015 based on the feedback of nearly 100 organizations, including ADT, American Greetings, Device Authority, Malwarebytes, Microsoft, NAR, Symantec, consumer and privacy advocates, international testing organizations, academic institutions and U.S. government and law enforcement agencies. The framework includes a baseline of 31 measurable principles that OTA says device manufacturers, developers and policy makers should follow to maximize the security and privacy of the devices and data collected for smart homes and wearable technologies.

This story, “Security an afterthought in connected home, wearable devices” was originally published by CIO.

Buddha was a data scientist

The content below is taken from the original (Buddha was a data scientist), to continue reading please visit the site. Remember to respect the Author & Copyright.

Editor’s note: Tara Cottrell is a writer and digital strategist, and works as the web content manager at Stanford University’s Graduate School of Business. Dan Zigmond is a writer, data scientist and Zen priest, and is director of analytics for Facebook. They are the authors of the book Buddha’s Diet.

More than two millennia ago, wandering the footpaths of ancient India, preaching in village huts and forest glens, Buddha was biohacking his health. He tried holding his breath so long his ears exploded, and even the gods assumed he was dead. (He wasn’t.) He then tried extreme fasting, reducing down his daily meals until he was living on just a few drops of soup each day. He got so thin his arms looked like withered branches and the skin of his belly rested on his spine.

Buddha was trying to do what we’re all trying to do on some level — improve ourselves and stop suffering so much, sometimes by employing pretty far-fetched techniques. But in the end, he rejected all these crazy extremes — not because they were too hard, but because they just didn’t work.

Buddha believed in data. Every time he tried something new, he paid attention. He collected evidence. He figured out what worked and what didn’t. And if something didn’t work, he rejected it and moved on. A good scientist knows when to quit.

When Buddha started teaching, he advised his students to do the same. He didn’t ask anyone to take his instructions on faith. He explained that the way most other teachers insisted you believe everything they said was like following a procession of blind men: “the first one does not see, the middle one does not see, and the last one does not see.” Buddha didn’t want us to trust — he wanted us to see. Our beliefs should be based on data.

He applied this same thinking to food. Most religions include some sort of dietary restriction: Islam prohibits pork. Orthodox Jews refrain from mixing milk and meat. Catholics avoid certain foods during Lent. Some devout Hindus won’t eat anything stale or overripe or with the wrong flavors or texture. Usually these rules are presented as divine commandments. Asking why we should eat this way is beside the point. There isn’t necessarily a reason for the commandment — the commandment is the reason.

Buddha took a different approach: His rules were grounded in his own experience. Like a lot of us, he tried some crazy diets. But what worked for him was very simple. He gave little advice about what his monks should eat, but he was very particular about when they should eat it. His followers were basically free to eat anything they were given — even meat — but only between the hours of dawn and noon.

Buddha didn’t give a mystical or supernatural explanation for this odd time restriction. But he was pretty sure it would improve their health. He had tested it on himself. “Because I avoid eating in the evening, I am in good health, light, energetic, and live comfortably,” he explained. “You, too, monks, avoid eating in the evening, and you will have good health.”

If Buddha were alive today, he’d be surprised to see so many Silicon Valley techies and Brooklyn hipsters embracing intermittent fasting as a new craze. But he’d be gratified to see the evidence mounting for the health benefits he claimed for time-restricted eating. We now have numerous scientific studies confirming the original data Buddha collected.

In 2014, for example, Dr. Satchidananda Panda and his team of researchers at the prestigious Salk Institute for Biological Studies outside San Diego published a study on obesity in mice. They took one group of mice and instead of their normal food, offered them a diet of high-fat, high-calorie foods — and let them eat as much as they wanted. The results would surprise no one: The mice got fat.

Then they took another group of mice and offered them exactly the same seemingly unhealthy diet, but this time they only let the mice eat for nine to 12 hours each day. During the rest of the day and at night, the mice got only water. In other words, these mice had the same all-you-can-eat buffet of tasty, fattening treats. The one rule was that they could only stuff themselves during some of their waking hours.

This time, the results were a surprise: None of these mice got fat. Something about matching their eating to their natural circadian rhythms seemed to protect the mice against all that otherwise fattening food. It didn’t matter if they loaded up with sugars and fats and other junk. It didn’t seem to matter what the mice ate, or even how much of it, only when they ate it.

In other words, the data backed up Buddha.

Other scientists have produced similar results. Dr. Panda’s team even tried fattening up the mice by starting them on that first any-time diet, and then switched them to the time-restricted version. These mice didn’t just stop gaining — they started to lose that excess weight.

And it doesn’t stop with mice. Researchers have asked men and women to restrict their eating to certain hours each day, and those people lose weight, too.

Some of the best researchers studying food and health have been confirming Buddha’s original rules. Whether you call it intermittent fasting or time-restricted eating, Buddha’s ancient biohacking wasn’t an anomaly. The data he collected on himself has now been replicated by countless others.

Like any good data scientist, Buddha learned to ignore the outliers. He realized early on that the truth is rarely found in the extremes. He practiced instead the “middle way,” a philosophy of perpetual compromise and moderation. Modern time-restricted diets follow this same sane path — not quite dieting, but not quite eating anything any time either. Every day becomes a balance, with a time for eating and a time for fasting.

These days, we can all do what Buddha did: Become your body’s own data scientist; observe yourself as you eat to see what works for you and what doesn’t. We weren’t designed to eat at all hours, an unfortunate luxury we have with all the cheap and readily available food in first-world countries. Buddha discovered this long ago. Now we know it too.

Featured Image: PieraTammaroPhotoart/Getty Images

Microsoft’s Cloud in Brexit Britain — New Azure and Office 365 DCs for UK

The content below is taken from the original (Microsoft’s Cloud in Brexit Britain — New Azure and Office 365 DCs for UK), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft Azure in UK

James Bond’s favorite ride now runs on Microsoft’s Great British cloud

Microsoft Azure and Office 365 are now hosted in the UK for the first time. Redmond just opened up three new data centers in England and Wales, arranged in two Azure regions.

This brings the number of Azure regions to 28 globally. Among the launch customers are Aston Martin, a regional health authority, and the Ministry of Defence [sic].

This should give a boost to Microsoft’s data-sovereignty story. In today’s IT Newspro, we’re shaken, not stirred.

Your humble newswatcher curated these news nuggets for your entertainment. Not to mention: McQ fail

What’s the craic? Aunty Beeb speaks peace unto nation: [You’re fired -Ed.]

Its first UK cloud computing data centres are located in London, Durham and Cardiff. Microsoft first revealed its plan to set up UK-based data centres last November.

The UK data centres mean Microsoft is able to offer [Azure] without sending data overseas. Microsoft is, however, [battling] the US government, which believes it has the right to force [it] to surrender data held overseas.

Amazon is set to open its own rival UK data centres but is yet to confirm when. Its AWS cloud computing division remains more popular than Azure.

Tell me more. Frederic Lardinois renders Microsoft opens its UK data center region:

The new region offers support for Azure cloud services and Office 365, with Dynamics CRM Online slated for the first half of 2017. Azure now offers 28 regions and it has plans [for] six more.

Data sovereignty is becoming an increasingly important issue. Microsoft stressed that these new regions are compliant with the ISO 27018 standard for cloud privacy [and] with the new Privacy Shield framework, the replacement for Safe Harbor.

In fact, it’s two new regions. Or so says Liam Tung—Microsoft’s two new cloud regions tackle data privacy:

Microsoft has officially opened two new cloud regions, UK West and UK South. The new regions of course also will be able to serve London’s massive financial services industry.

Microsoft today highlighted its recent victory quashing a warrant for access to email stored in Ireland. [It also has] two new datacenters in Germany slated for launch, operated by ‘data trustee’ T-Systems. Under this arrangement, any government request for such data will need to go through T-Systems.

But how did we get here? Louis Columbus sailed the ocean blue picks Seven Ways Microsoft Redefined Azure For The Enterprise:

Microsoft Azure has achieved 100% revenue growth and now has the 2nd largest market share. AWS and Microsoft Azure have proven their ability and are the two most-evaluated cloud platforms. Of the two, Microsoft Azure is gaining momentum in the enterprise.

Only Microsoft is coming at selling cloud services from the standpoint of how they can help do what senior management want most. Azure is winning [because of its] support for legacy Microsoft architectures that enterprises standardized on years before. Azure is also accelerating due to the pervasive adoption of Office 365.

From a leading telecom provider looking to scale throughout Asia to financial services firms looking to address Brexit issues, nearly every enterprise roadmap is based on global scalability and regional requirements. Microsoft has 108 data centers globally.

So what does it all mean? David “wyrdfish” Watson means Brexit:

The UK will not be under EU data protection law [but] most EU business will have a requirement for that. Data hosted in the UK will have to move [probably] to Ireland. On the other hand, [UK] data-protection laws could be made more attractive, meaning Microsoft could have just made a shrewd move.

But surely Brexit doesn’t automagically invalidate existing laws? ニヤは猫じゃない purrs, contentedly:

The [UK Data Protection Act] was enacted as a result of an EU directive. Surely we’re under EU DP law regardless?

So it’s all about the Brexit, baby? Officially, no. Peter Gothard sounds disappointed he can’t use that angle—Brexit vote had ‘no impact’:

Microsoft’s UK COO [said it] had “no impact on the decision” to open UK data centres. Nicola Hodson said that Microsoft is looking to “upgrade the digital fabric in the UK.”

“We have a set of principles around security, privacy, compliance, transparency and availability. We’ll just have to wait and see how [Brexit] unfolds,” she said.

OK, anything else we should know? Cliff Saran wraps it up:

The Ministry of Defence (MoD) [is] among the first organisations to host its infrastructure in Microsoft’s UK cloud. [But] Microsoft will be running a private instance of Azure for the MoD. The MoD will be the anchor tenant.

Buffer Overflow

More great links from Petri, IT Unity, Thurrott and abroad:

And Finally

McQ Fail! [warning: Contains scenes of badly-poured beer]

Main image credit: Sony/MGM/Columbia/Eon/Danjaq/B24

The post Microsoft’s Cloud in Brexit Britain — New Azure and Office 365 DCs for UK appeared first on Petri.

DigitalOcean Introduces Hatch to Support the Next Generation of Startups

The content below is taken from the original (DigitalOcean Introduces Hatch to Support the Next Generation of Startups), to continue reading please visit the site. Remember to respect the Author & Copyright.

DigitalOcean, the cloud for developers, today announced Hatch, a global incubator program designed to support the next generation of startups as they… Read more at VMblog.com.

HyperGrid Announces Major Enhancements to HyperForm, Industry’s Leading Container-as-a-Service Platform for DevOps

The content below is taken from the original (HyperGrid Announces Major Enhancements to HyperForm, Industry’s Leading Container-as-a-Service Platform for DevOps), to continue reading please visit the site. Remember to respect the Author & Copyright.

HyperGrid, the pioneer in creating and delivering the world’s first and only container-based, application-aware HyperConverged… Read more at VMblog.com.

PiBakery Dramatically Simplifies Setting Up the Raspberry Pi

The content below is taken from the original (PiBakery Dramatically Simplifies Setting Up the Raspberry Pi), to continue reading please visit the site. Remember to respect the Author & Copyright.

Windows/Mac: Installing and setting up a vanilla version of the Raspberry Pi’s main operating system, Raspbian, is easy enough. If you want to do more with it, like set up custom software to run on boot, or connect to a Wi-Fi network, it’s a bit of a pain. PiBakery simplifies all that dramatically.

Read more…

Microsoft Azure: Use Visual Studio to Deploy a Virtual Machine

The content below is taken from the original (Microsoft Azure: Use Visual Studio to Deploy a Virtual Machine), to continue reading please visit the site. Remember to respect the Author & Copyright.

Azure Hero Server

In today’s Ask the Admin, I’ll show you how to create an ARM template in Visual Studio for deploying Azure virtual machines (VMs).

As a system administrator, I never thought I’d need to work with Visual Studio (VS) – that scary monolithic piece of software the developers use to conjure up their wares. Tools such as the Windows PowerShell ISE (Integrated Scripting Environment) have, until now, sufficed for my scripting needs. But since Azure has slowly moved away from the classic ‘service management’ model to JSON-based Azure Resource Manager (ARM) templates, it’s clear that a professional developer tool that understands JSON syntax, and can help debug and even deploy resources in Azure, is the best way to go.

Not only does VS understand how to parse JSON files, but its integration with Azure provides access to Cloud Explorer for resource management and to predefined templates, so you can quickly deploy new resources without leaving VS. Having used VS to deploy resources in Azure, I now recommend it as the way to work with Azure templates.

For more information on Azure Resource Manager, see What are Microsoft Azure Resource Groups? and Deploy VMs Using Azure Resource Manager on the Petri IT Knowledgebase.

Install Visual Studio

First you will need to install Visual Studio 2015 Community, Professional or Enterprise Edition, with the Azure SDK for .NET. Community Edition is free, so that’s the version I’ll be installing. You can either download and install VS from Microsoft’s website and then add the Azure SDK later, or use Microsoft’s Web Platform Installer (Web PI) to install ‘VS Community Edition with the Azure SDK’ already configured.

VS Community Edition can be downloaded here, and the Web PI here; the Web PI can be used to get the Azure SDK for .NET as a separate download, or VS Community Edition with the Azure SDK for .NET as an integrated package. When installing VS, make sure you accept the default install options.

Create a New Project in VS

Let’s create a new project in VS and use one of the templates that are provided with the Azure SDK to deploy a VM running Windows Server 2012 R2.

  • Launch VS 2015.

Notice that you should already be logged in to Azure with the Microsoft account that you use to sign in to Windows. The Cloud Explorer pane in VS will show a hierarchical list of resources in your Azure subscription if you have successfully logged in to Azure. If that’s not the case, click on your name in the top right of VS and select Account settings… from the menu to change the login account.

Create a new project in Visual Studio 2015 (Image Credit: Russell Smith)

  • Click New Project… under Start on the Start Page tab in VS.
  • In the New Project dialog box, expand Installed > Templates > Visual C# and select Cloud.
  • In the list of templates, double-click Azure Resource Group.
Deploy a new Azure Resource Group using a template (Image Credit: Russell Smith)

  • In the Select Azure Template dialog, select Windows Virtual Machine from the list on the left and then click OK.
  • In the Solution Explorer pane, expand the Templates folder.
Choose the type of resource to deploy (Image Credit: Russell Smith)

Here you’ll see two files: WindowsVirtualMachine.json and WindowsVirtualMachine.parameters.json. The first file is a template that defines the Azure resource(s) to be created or updated, and the second file contains a list of parameters for the deployment that might change each time a resource is deployed using the template, such as the VM administrator username.

  • Double-click WindowsVirtualMachine.json in the Solution Explorer pane to open the code in VS.
  • Notice in the JSON Outline tab that the template is divided into three sections: parameters, variables, and resources, as in the skeleton sketched below.
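For orientation, every ARM template follows roughly this skeleton. This is a minimal sketch rather than the contents of the generated file; the schema URL and formatting shown here are the usual ones and may differ slightly in your version of the Azure SDK.

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": { },
  "variables": { },
  "resources": [ ]
}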
Use JSON Outline to navigate the template (Image Credit: Russell Smith)

The parameters can be provided either in the WindowsVirtualMachine.parameters.json file or at deployment time, for example using the new Azure web portal. In this case, the two parameters are set to null, so we’ll edit the file later; a sketch of what the generated parameters file typically looks like follows.
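Roughly, and assuming only that the two parameters are adminUsername and dnsNameForPublicIP as described later in this walkthrough, the untouched parameters file looks something like this sketch:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminUsername": {
      "value": null
    },
    "dnsNameForPublicIP": {
      "value": null
    }
  }
}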

Modify the Template

Now we have a template that looks something like what we need. Let’s delete anything that we don’t want and edit what doesn’t quite fit our requirements.

  • Using the JSON Outline tab, expand resources > VirtualMachine, right-click AzureDiagnostics and select Delete from the menu. I wouldn’t necessarily delete this resource under normal circumstances, but it shows how easy editing a template is in VS. If you modify a template and VS detects any errors in the code, they will be highlighted by red marks in the right-hand margin.
  • Expand variables in the JSON Outline tab and click the vmSize variable. In the code tab, change “Standard_A2” to “Standard_A1”. This will give us a smaller VM size, as in the fragment sketched below.
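After the edit, the relevant part of the variables section should read something like the fragment below. The template defines other variables and sections too, which are omitted from this sketch.

{
  "variables": {
    "vmSize": "Standard_A1"
  }
}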

Add a Resource

You saw how to delete a resource from a template, and it’s just as easy to add resources.

  • In the JSON Outline tab, right-click resources and select Add New Resource from the menu.
  • In the list of available resources in the Add Resource dialog, select Availability Set.
  • In the Name field, type DCs and then click Add.
Add a resource to the template (Image Credit: Russell Smith)

You’ll now see DCs appear as a new resource in the resources section of the JSON Outline tab. For more information on Availability Sets, see Understanding Azure Availability Sets on Petri.
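For reference, an availability set resource in an ARM template looks roughly like the sketch below. This is a hand-written illustration rather than the exact snippet the Add Resource wizard produces: VS typically parameterizes the name, and the apiVersion and fault/update domain counts shown here are plausible values, not ones taken from the generated template.

{
  "apiVersion": "2015-06-15",
  "type": "Microsoft.Compute/availabilitySets",
  "name": "DCs",
  "location": "[resourceGroup().location]",
  "properties": {
    "platformUpdateDomainCount": 5,
    "platformFaultDomainCount": 2
  }
}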

  • Press CTRL+S to save the changes you’ve made to the template.

Add Values to the Parameters File

Before we can deploy the resource, let’s provide values for the adminUsername and dnsNameForPublicIP parameters.

Modify the parameters file (Image Credit: Russell Smith)

  • Double-click WindowsVirtualMachine.parameters.json in the Solution Explorer tab.
  • In the script pane below the adminUsername parameter, replace null with “VMadminuser”.
  • Similarly, replace null below dnsNameForPublicIP with a DNS name for the VM’s public IP address. I’ve used “petrilabvm” (note the all-lowercase letters), but you should choose something appropriate for your deployment; the finished file should look something like the sketch after this list.
  • Press CTRL+S to save the parameters file.
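With the values from this walkthrough filled in, the parameters object ends up looking roughly like this; the $schema and contentVersion lines above it are unchanged from the earlier sketch.

{
  "parameters": {
    "adminUsername": {
      "value": "VMadminuser"
    },
    "dnsNameForPublicIP": {
      "value": "petrilabvm"
    }
  }
}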

Deploy the Solution

The template is now ready, so let’s try and deploy the VM and associated resources. Note that Visual Studio must be launched using an administrator account before you can deploy a solution.

Deploy the solution (Image Credit: Russell Smith)

  • Right-click AzureResourceGroup1 in Solution Explorer and select Deploy > New Deployment… from the menu.
  • In the Deploy to Resource Group dialog, select <Create New…> in the Resource group menu.
  • In the Create Resource Group dialog, type AzureResourceGroup1 in the Resource group name field and then select a region from the Resource group location menu and click Create.
  • In the Deploy to Resource Group dialog, click Deploy.
  • In the Edit Parameters dialog, enter a password for the VM’s administrator account and a name for the Availability Set, and then click Save.
Deploy the solution (Image Credit: Russell Smith)

The deployment will begin. You may be asked to enter a value for adminPassword even if you entered one in the previous steps. The output window shows the status of the deployment and hopefully the resources will be deployed successfully.

Deploy the solution (Image Credit: Russell Smith)

Check VM Status

Once the solution has been deployed successfully, you can check the status of the VM using Cloud Explorer.

  • Select Cloud Explorer from the View menu.
  • If Cloud Explorer was already open, you might need to refresh the view by clicking the Refresh icon below Microsoft Azure.
  • Expand Virtual Machines and click MyWindowsVM.
  • Switch to the Properties tab at the bottom of Cloud Explorer.
  • Check the State property is set to running.

In this article, I’ve shown you how to use an ARM template in Visual Studio to deploy a virtual machine in Microsoft Azure.

The post Microsoft Azure: Use Visual Studio to Deploy a Virtual Machine appeared first on Petri.