Raspberry Pi based computer offers Real-Time Ethernet

The content below is taken from the original (Raspberry Pi based computer offers Real-Time Ethernet), to continue reading please visit the site. Remember to respect the Author & Copyright.

Hilscher is prepping a rugged “netPI” computer that combines a Raspberry Pi 3 with its “netHAT 52-RTE” RPi add-on featuring dual Real-Time Ethernet ports. German Real-Time Ethernet experts Hilscher will soon launch a Raspberry Pi 3-based industrial computer with Real-Time Ethernet support. Hilscher has yet to formally announce the ruggedized netPI computer, but the board […]

Scientists turn human kidney cells into tiny biocomputers

The content below is taken from the original (Scientists turn human kidney cells into tiny biocomputers), to continue reading please visit the site. Remember to respect the Author & Copyright.

A team of scientists from Boston University have found a way to hack into mammalian cells — human cells, even — and make them follow logical instructions like computers can. While they’re not the first researchers to program cells to do their bidding, previous successful studies mostly used Escherichia coli, which are much easier to manipulate. These researchers were able to program human kidney cells into obeying 109 different sets of instructions, including responding to particular environmental conditions and following specific directions.

They were able to accomplish what other teams had failed to do by using DNA recombinases: genetic recombination enzymes that can recognize two target sites in a DNA strand, stitch them together and cut out anything in between. To trigger the recombinases, they inserted another gene into the same cells to start the cutting process.

Here’s one example of how it works: the researchers programmed cells to light up when they did NOT contain the DNA recombinase they used. In the future, they could key the recombinases to proteins associated with specific diseases and turn the technique into a diagnostic tool, since a sample would light up if the patient has the illness.
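To make the logic concrete, here is a toy Python model (entirely illustrative, and not from the paper) that treats a DNA construct as a string, a recombinase as an operation that excises whatever sits between its two recognition sites, and the fluorescent readout as a NOT gate on the recombinase input. The site and gene names are placeholders.

```python
# Toy model of recombinase logic in a cell (illustrative only; the construct,
# site and gene names are made up and do not come from the study).

def excise(dna: str, site: str) -> str:
    """Simulate a recombinase: find two copies of its recognition site and
    cut out everything between them, leaving a single site behind."""
    first = dna.find(site)
    second = dna.find(site, first + len(site))
    if first == -1 or second == -1:
        return dna  # fewer than two sites: the recombinase does nothing
    return dna[:first] + site + dna[second + len(site):]

# A reporter construct: the promoter drives GFP only while the GFP gene is intact.
CONSTRUCT = "PROMOTER-[site]-GFP-[site]-TERMINATOR"

def lights_up(dna: str) -> bool:
    return "GFP" in dna

for recombinase_present in (False, True):
    dna = excise(CONSTRUCT, "[site]") if recombinase_present else CONSTRUCT
    print(f"recombinase present: {recombinase_present} -> cell lights up: {lights_up(dna)}")
# Prints True when the recombinase is absent and False when it is present,
# i.e. a NOT gate, matching the example described above.
```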

Wong says their current sets of instructions are just proofs of concept. Other potential applications include manipulating T cells into killing tumors by using proteins that can detect two to three cancer cell biomarkers. The technique could also be used to turn stem cells into any cells they want by using different signals, as well as to generate tissues on command. Wong and his team are only exploring those possibilities at the moment, though, and it’ll take time before we see them happen.

Source: Wired, Science, Nature Biotechnology

Get ESXi host name for VDI client

The content below is taken from the original (Get ESXi host name for VDI client), to continue reading please visit the site. Remember to respect the Author & Copyright.

I'm trying to figure out if it is possible to retrieve the current ESXi host that a VDI client is sitting on from the Windows command line/powershell/some other tool. I have a situation with multiple ESXi hosts, and VDI clients are spun up/down randomly throughout the day. They could potentially be on any of the hosts at any given time, but I have some scripted operations that need to run based on the specific host that it is currently running on. Any way to pull the name/IP of the host locally from the VDI client?
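One possible approach (not from the original thread) is to ask vCenter from inside the guest, matching the VM by the guest's hostname, using the pyVmomi Python SDK. The sketch below assumes the VDI client can reach vCenter with read-only credentials; the vCenter address, account and password are placeholders.

```python
# Sketch: look up which ESXi host is running the VM that matches this guest's
# hostname. Requires pyVmomi (pip install pyvmomi) and network access to
# vCenter; the address and credentials below are placeholders.
import socket
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def current_esxi_host(vcenter, user, password):
    ctx = ssl._create_unverified_context()  # lab convenience: skips cert checks
    si = SmartConnect(host=vcenter, user=user, pwd=password, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        guest_name = socket.gethostname().lower()
        for vm in view.view:
            # Match on the hostname reported by VMware Tools, or the VM name.
            tools_name = (vm.guest.hostName or "").lower()
            if tools_name.startswith(guest_name) or vm.name.lower() == guest_name:
                return vm.runtime.host.name  # name of the ESXi host it runs on
        return None
    finally:
        Disconnect(si)

if __name__ == "__main__":
    print(current_esxi_host("vcenter.example.com", "readonly@vsphere.local", "secret"))
```

If installing an SDK on every client isn't acceptable, the same lookup can be done centrally (for example with PowerCLI's Get-VM, which exposes a VMHost property) and the result pushed to the client.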

Gigantic drones may be the key to low-cost air shipping

The content below is taken from the original (Gigantic drones may be the key to low-cost air shipping), to continue reading please visit the site. Remember to respect the Author & Copyright.

Wonder why some companies still ship products on boats instead of speedy aircraft? It’s because air freight is much more expensive — the costs of the crew and fuel quickly add up. Natilus, however, thinks drones might offer a solution. The startup is prepping enormous, 200ft-long drones (roughly the size of a Boeing 777) that would haul up to 200,000lbs of cargo over the ocean. They’d theoretically cut the cost of air freight in half by eliminating the crew and improving fuel efficiency. And while the drone likely wouldn’t be cleared to fly over populated areas, that wouldn’t matter — it’s designed to land on water and unload its goods at a seaport.

The idea is ambitious, to say the least, but there is a practical roadmap for making it a reality. A 30-foot prototype is poised to fly near San Francisco this summer. If that goes well, the next steps are finishing a full-scale prototype (due in 2020) and taking customers.

The main obstacle? Funding. As Fast Company explains, Natilus is currently a tiny company with three regular employees and under $1 million to its name. It’s going to need a lot of interest from investors to make its drones a reality. Thankfully, that might not be too hard. If the project works as planned, it could cut overseas shipping times down to less than a day without leading to absurd costs. You’d be more likely to get your online orders quickly, and it would be more practical to ship time-sensitive products like food.

Via: Fast Company

Source: Natilus

Cheap, flimsy, breakable and replaceable – yup, Ikea, you’ll be right at home in the IoT world

The content below is taken from the original (Cheap, flimsy, breakable and replaceable – yup, Ikea, you’ll be right at home in the IoT world), to continue reading please visit the site. Remember to respect the Author & Copyright.

Analysis Ikea has just announced the entry of smart home technology into the mainstream with a new range of lights that can be activated by motion or smartphone app.

The Trådfri lineup is quite extensive – four lightbulbs, three light panels, five cabinet lights, and four sensors/gateways – and is so far only available in the company’s home country of Sweden. Trådfri means “wireless” in Swedish. The lights can turn on and off, be put on timers and change color according to sensors and a smartphone app.

The product line represents a big step forward for a market that has suffered from high expectations and low growth. Despite the ready availability of such products – largely defined by a connection to the internet or a smartphone – take-up has been slow and largely confined to early adopters.

Ikea has brought its unique combination of good style with a low price: its entry bundle of a gateway, remote control and two bulbs costs 749 Swedish krona – equivalent to $85 – which is a pretty good price. It’s not as clumsy or clunky as many other products out there. This being Ikea it has also, presumably, resolved technical and security issues that continue to plague the rest of the market.

Ikea’s new smart lighting range

That said, this is very far from cutting-edge technology and it suffers many of the same issues that have stymied growth in the smart home market. Most significantly: a lack of interoperability and the need for a gateway.

A few years ago, when sales of smart lightbulbs, smart sockets, smart thermostats and smart locks were expected to explode, everyone and their dog pushed out products using one of the two main standards – ZigBee and Z-Wave – and made it all work through their own gateway (Ikea is going for ZigBee).

Networking

It didn’t take people long to realize, however, that for smart homes to take off, manufacturers needed to think in terms of ecosystems: you have to be able to control your smart lights and door lock in the same app.

No one wants to have to open one app for this light, and another one for that light; one to control your thermostat and another for your camera. On top of which, no one but no one wants to have five different gateways all plugged into your modem or router. Most households have a router that takes, at most, four Ethernet cables.

Very few houses are going to invest in extra networking gear just to be able to turn a light on and off with an app when you are five feet from the switch anyway.

And so the wise smart home companies are increasingly working on how to get their products communicating and coordinating with others. Apple, of course, believes it is so great that it can create its own entire ecosystem and people will pay it to enter. Its HomeKit solution is finally showing signs of life, but it is four years later than it planned and its insistence on inserting its own Apple-approved chip into everyone’s HomeKit products did not bring it a lot of love.

Poster-child Nest – which, to our eyes, is still hands-down the best smart home company out there – has tried to build on its rockstar status by creating a mini “works with Nest” ecosystem using its own proprietary protocols. Parent company Google/Alphabet recently killed that approach by effectively telling Nest to use its now-open Weave and Thread protocols/standards.

Meanwhile, ZigBee looks safe, mostly because Thread has decided to interoperate with it, and Z-Wave is desperately clinging on, trying to maintain its more tightly controlled approach (which has its benefits) without giving away control and revenue.

I did it Mywåy

All of which is to say that Ikea is largely doing its own thing, as it often does. Everyone who has ever tried to find one of those uniquely designed metal thingymajigs to fix their Ikea furniture knows that the company is its own self-contained universe. (Likewise just about every other Ikea fitting.)

But that is what makes Ikea, Ikea. It does a really good job and does it cheaply enough that when your furniture breaks and you can’t find any way to fix it, you scrap it and buy another one – from Ikea.

In this smart home lighting move, it has created its own world and it may well work – Ikea shoppers will see what will no doubt be an impressive display while looking at something else, and will snap it up. Whether that causes a broader take-up by the rest of the smart home market is far less certain.

It is also worth noting that despite at least 10 different companies offering variations of the smart lightbulb, there is still nothing compelling enough to justify the extra cost.

Yes, you can get out your mobile phone and turn on the light on the other side of the room. You can also get out of your chair and hit the switch. The latter approach is usually faster.

The only silver lining is the use of Amazon’s Alexa or the Google Home to give voice commands to turn lights on and off.

This works to a large degree and, as with all voice-activated efforts, is significantly less annoying than fumbling about with an app. That is, when the machine hears you correctly. In short, we’re still not there. But Ikea has definitely indicated that we will be at some point. ®

Hacked IoT Switch Gains I2C Super Powers

The content below is taken from the original (Hacked IoT Switch Gains I2C Super Powers), to continue reading please visit the site. Remember to respect the Author & Copyright.

Economies of scale and mass production bring us tons of stuff for not much money. And sometimes, that stuff is hackable. Case in point: the $5 Sonoff WiFi Smart Switch has an ESP8266 inside but the firmware isn’t very flexible. The device is equipped with the bare minimum 1 MB of SPI flash memory. Even worse, it doesn’t have the I2C pins exposed, so you can’t just connect up your own sensors and make it much more than just a switch. But that’s why we have soldering irons, right?

[Jack] fixed his, and documented the procedure. He starts off by soldering a female header to the board so that he can upload his own firmware. (Do not do this while it’s plugged into the wall; you could get electrocuted. Supply your own 3.3 V.) Next, he desolders the flash memory and replaces it with a roomy 4 MB chip, because he might want to upgrade the firmware over the air, or just run some really, really big code. So far, so good. He’s got a better version of the same thing.

But breaking out the ESP8266’s I2C pins turns this little “switch” into something much more useful — a wall-powered IoT sensor node in a sweet little package, with a switch attached. It’s just a matter of tacking two wires onto the incredibly tiny pins of the ESP8266 package. The good news is that the I2C pins are on the edge of the package, but you’re going to want your fine-tipped iron and some magnification regardless.
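Once the pins are broken out, any firmware with I2C support can talk to whatever you wire up. As a rough illustration (assuming you flash something like MicroPython onto the new 4 MB chip rather than [Jack]'s firmware), scanning the bus and polling a temperature sensor might look like this; the pin numbers and sensor are assumptions, so check where you actually soldered the wires.

```python
# MicroPython sketch for an ESP8266 with I2C broken out (illustrative only).
# Pin choices are assumptions: GPIO5/GPIO4 are the usual ESP8266 I2C pins and
# the Sonoff relay is commonly on GPIO12; verify against your own wiring.
from machine import Pin, I2C
import time

i2c = I2C(scl=Pin(5), sda=Pin(4), freq=100000)   # software I2C
relay = Pin(12, Pin.OUT)

print("I2C devices found:", [hex(addr) for addr in i2c.scan()])

SENSOR_ADDR = 0x48    # e.g. a TMP102-style temperature sensor (assumption)

while True:
    raw = i2c.readfrom_mem(SENSOR_ADDR, 0x00, 2)       # 12-bit temperature register
    temp_c = ((raw[0] << 4) | (raw[1] >> 4)) * 0.0625  # TMP102 conversion factor
    relay.value(1 if temp_c > 25 else 0)               # crude thermostat on the mains relay
    print("{:.2f} C".format(temp_c))
    time.sleep(5)
```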

That’s it. Flash in a better firmware, connect up whatever I2C devices you’d like, and you’ve got a very capable addition to your home automation family for just a few bucks. Just for completeness, here’s the warning again: this device uses a non-isolated power supply, so if the neutral in your wall isn’t neutral, you can get shocked. And stay tuned for a full-length article on transformerless power supplies, coming up!

Azure Relay Hybrid Connections is generally available

The content below is taken from the original (Azure Relay Hybrid Connections is generally available), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Azure Relay service was one of the first core Azure services. Today’s announcement shows that it has grown up nicely with the times. For those familiar with the WCF Relay feature of Azure Relay, rest assured it will continue to function, but its dependency on Windows Communication Foundation is not for everyone. The Hybrid Connections feature of Azure Relay sheds this dependency by using open, standards-based protocols.

Hybrid Connections provides a lot of the same functionality as WCF Relay, including:

  • Secure connectivity of on-premises assets and the cloud
  • Firewall friendliness as it utilizes common outbound ports
  • Network management friendliness that won’t require a major reconfiguration of your network

The differences between the two are even better!

  • An open, standards-based protocol (WebSockets) rather than proprietary WCF
  • Cross-platform: runs on Windows, Linux or any platform that supports WebSockets
  • Hybrid Connections supports .NET Core, JavaScript/Node.js, and multiple RPC programming models to achieve your objectives

Getting started with Azure Relay Hybrid Connections is simple and easy with steps here for .NET and Node.js.
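Because the underlying protocol is plain WebSockets plus a Service Bus shared access signature, you are not limited to those two SDKs. The following Python sketch shows the general idea using the websocket-client package; the namespace, hybrid connection name and key are placeholders, and the wire details should be checked against the published Hybrid Connections protocol guide rather than taken from here.

```python
# Rough sketch of a Hybrid Connections sender in Python (illustrative, not an
# official SDK sample). Requires: pip install websocket-client
import base64, hashlib, hmac, time, urllib.parse
import websocket

NAMESPACE = "mynamespace"            # placeholder
HC_NAME = "myhybridconnection"       # placeholder
KEY_NAME = "RootManageSharedAccessKey"
KEY = "base64-key-goes-here"         # placeholder

def sas_token(uri, key_name, key, ttl=3600):
    # Standard Service Bus shared access signature over the encoded resource URI
    expiry = str(int(time.time()) + ttl)
    encoded = urllib.parse.quote_plus(uri)
    sig = base64.b64encode(
        hmac.new(key.encode(), (encoded + "\n" + expiry).encode(), hashlib.sha256).digest()
    ).decode()
    return "SharedAccessSignature sr={}&sig={}&se={}&skn={}".format(
        encoded, urllib.parse.quote_plus(sig), expiry, key_name)

resource = "http://{}.servicebus.windows.net/{}".format(NAMESPACE, HC_NAME)
token = sas_token(resource, KEY_NAME, KEY)

# A sender connects with sb-hc-action=connect; a listener would instead open a
# control WebSocket with sb-hc-action=listen and accept rendezvous requests.
url = ("wss://{}.servicebus.windows.net/$hc/{}?sb-hc-action=connect&sb-hc-token={}"
       .format(NAMESPACE, HC_NAME, urllib.parse.quote_plus(token)))

ws = websocket.create_connection(url)
ws.send("hello through the relay")
print(ws.recv())
ws.close()
```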

If you want to try it (and we hope you do), you can find out more about Hybrid Connections pricing and the Azure Relay offering.

The real reason shadow IT is so widespread

The content below is taken from the original (The real reason shadow IT is so widespread), to continue reading please visit the site. Remember to respect the Author & Copyright.

At your company, who’s responsible for what technology is bought and implemented?

It’s a critical question, with deep implications for how your company leverages technology to get things done and drive competitive advantage. A recent survey from Spiceworks takes a stab at answering this question. But while the survey offers a number of insights, it leaves out perhaps the most important constituency in the procurement process.

As you can surmise from the title—ITDMs and BDMs: Tech Purchase Superheroes—the Spiceworks survey was taken mostly from the standpoint of vendors trying to sell you hardware, software and services. It focuses on teasing out the differences between two key groups: IT decision makers (ITDMs) and business decision makers (BDMs). Amidst perceptions that the balance of power is shifting from IT to the business, the survey attempts to find out if the two groups work together in a smooth, well-oiled process or if they struggle to coordinate separate agendas. 

Here are some of the key findings: 

IT is typically more responsible for the initial phases of the purchase process, including determining needs (84 percent to 53 percent), evaluating solutions (86 percent to 48 percent) and making recommendations (86 percent to 44 percent).

But not surprisingly, the business often takes over when it comes to making the final purchase decision (52 percent to 33 percent), approving funds (50 percent to 13 percent) and approving purchases (47 percent to 22 percent). 

Ultimately, money talks, of course, and while IT folks may be the gatekeepers, they know what it’s like for someone else to hold the purse strings. ITDMs see themselves as part of the decision making team for only about half of the purchases they’re involved in, compared to two-thirds of BDMs in that situation.

What’s wrong with this picture?

So far, so good. But if you ask me, something—or someone—important is missing in this scenario. Put simply, where are the users in this process? As the BDMs and ITDMs are busy perusing the menu as they decide what tools and technologies the company should buy, users are often left on the sidelines choking down whatever solutions get handed to them. And on the flip side, since users are frequently not a formal part of the technology purchase process, technology vendors too often ignore them completely. 

The result? The rise of “shadow IT” to disrupt traditional IT procurement processes. In the world of the cloud and Everything-as-a-Service, users no longer have to wait for an alphabet soup of BDMs and ITDMs to decide what’s best for them, and very often they don’t.

Sure, shadow IT can actually help IT in a variety of ways, but it also carries big risks. If companies want to keep a lid on the reach of shadow IT—forget about eliminating it altogether; that train has left the station—they need to be proactive about involving users in the technology buying process.

Azure Data Factory’s Data Movement is now available in the UK

The content below is taken from the original (Azure Data Factory’s Data Movement is now available in the UK), to continue reading please visit the site. Remember to respect the Author & Copyright.

Data Movement is a feature of Azure Data Factory that enables cloud-based data integration, which orchestrates and automates the movement and transformation of data. You can now create data integration solutions using Azure Data Factory that can ingest data from various data stores, transform/process data, and publish results to the data stores. 

Moreover, you can now utilize Azure Data Factory for both your cloud and hybrid data movement needs with the UK data store. For instance, when copying data from a cloud data source to an Azure store located in the UK, the Data Movement service in UK South will perform the copy and ensure compliance with data residency requirements.

Note: Azure Data Factory itself does not store any data, but instead lets you create data-driven flows to orchestrate movement of data between supported data stores and the processing of data using compute services in other regions or in an on-premises environment.
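For orientation, a copy pipeline is just a JSON definition. The sketch below builds the rough shape of one in Python purely for illustration; the dataset and pipeline names are placeholders, and the authoritative schema (activity types, policies, scheduling) is in the Copy Activity documentation referenced below.

```python
# Illustrative only: the approximate JSON shape of a Data Factory copy pipeline
# that moves data from Blob storage to Azure SQL. Names are placeholders; take
# the exact schema from the Copy Activity documentation.
import json

pipeline = {
    "name": "CopyBlobToSqlUk",
    "properties": {
        "activities": [
            {
                "name": "BlobToSqlCopy",
                "type": "Copy",
                "inputs": [{"name": "InputBlobDataset"}],
                "outputs": [{"name": "OutputSqlDataset"}],
                "typeProperties": {
                    "source": {"type": "BlobSource"},
                    "sink": {"type": "SqlSink"}
                }
            }
        ],
        "start": "2017-04-01T00:00:00Z",
        "end": "2017-04-02T00:00:00Z"
    }
}

print(json.dumps(pipeline, indent=2))
```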

To learn more about using Azure Data Factory for data movement, view the Move data by using Copy Activity article. 

You can also go to Azure.com to learn more about Azure Data Factory, or view the more in-depth Azure Data Factory documentation.

Free patch available for PhotoDesk 3.14

The content below is taken from the original (Free patch available for PhotoDesk 3.14), to continue reading please visit the site. Remember to respect the Author & Copyright.

CJE Micro’s has made a free patch available for photo editing and retouching package PhotoDesk, to address a problem experienced by some users of version 3.14 when using the application’s Special Effects filters. In some cases, use of the filters can cause what Chris Evans described as a “serious crash” – which, judging by a […]

Mass-produced artificial blood is now a real possibility

The content below is taken from the original (Mass-produced artificial blood is now a real possibility), to continue reading please visit the site. Remember to respect the Author & Copyright.

Doctors dream of having artificial blood always on hand, but the reality has usually been very different. While you can produce red blood cells in a lab, the current technique (which prods stem cells into action) only nets a small number of them at best. British researchers appear to have found the solution, however: they’ve developed a technique that can reliably produce an unlimited number of red blood cells. The trick is to create "immortalized" premature red blood cells that you can culture as much as you like, making mass production a real possibility.

The biggest challenge is translating the technique to commercial manufacturing. Scientists have produced a few liters of blood in the lab, but there’s a big difference between that and the massive volumes needed to serve even a single hospital. Although the UK’s National Health Service is planning to trial artificial blood this year, this new technique won’t be involved.

As it is, you wouldn’t likely see a wholesale switch to artificial blood even if this new method was ready for the real world. Any mass production is most likely to focus on people with rare blood types that can’t always count on donations. Even that limited effort could make a huge difference, mind you. Hospitals could always have a consistent supply of rare blood, so you wouldn’t have to worry about them running out in a life-or-death situation.

Via: BBC, Digital Journal

Source: University of Bristol, Nature

Happy Motherboards day: Here’s some (Optane) memory

The content below is taken from the original (Happy Motherboards day: Here’s some (Optane) memory), to continue reading please visit the site. Remember to respect the Author & Copyright.

No benchmarks available from 2D launch of 3D XPoint memory

Optane 2280 with M.2 connector

Hot on the heels of the Optane DC P4800X data centre SSD announcement, Intel makes a move on PC motherboard memory.

Optane uses co-developed Intel and Micron 3D XPoint memory, said to be not as fast as DRAM, but faster than NAND, while having NAND’s non-volatility and pricing between the two.

The M.2-connected Optane 2280 comes in 16GB and 32GB guises and is for Intel’s seventh-generation Core i7 processors (Kaby Lake). It is a single-sided device using 3D XPoint media and has a PCIe Gen 3 x 2 interface; two lanes, not four.

These two products were first revealed at the January CES show.

It is used as a cache, via Intel’s Rapid Storage Technology. Chipzilla says: “Files needed for important tasks are immediately recognised and accelerated. Over time, frequently used files and applications are monitored and accelerated as well.” This will, Intel claims, “enhance the PC Experience”.

Intel says Optane Memory will boost PC app launch and file load events – without saying by how much – and is pushing the product to gamers who have a need for speed.

However, no benchmark results have been released relating to boot, application launch or run times. That means we don’t know how much faster an Optane-on-the-motherboard-equipped PC will be than a similar PC using M.2-connected flash memory. Obviously the Optane motherboard will be faster than a PC using disk drives with no Optane or flash caching but, duh, yeah, so what?

Our understanding is that product reviews will come out on April 24 so anyone pre-ordering kit is buying on hope and not any sort of performance reality.

Intel says there are more than 130 motherboards available that support Optane Memory, including boards from ASRock, ASUS, EVGA, Gigabyte and MSI. In the second quarter we should see systems supporting Optane Memory from Acer, Dell, HP, Lenovo and others.

Optane Memory M.2 module pre-orders start today, with shipping commencing April 24. We haven’t seen any pricing information yet, so we know neither the bang-for-bucks number nor indeed the bang number on its own. Hold fire for now. ®

Skype for Business admins get tool to diagnose call problems

The content below is taken from the original (Skype for Business admins get tool to diagnose call problems), to continue reading please visit the site. Remember to respect the Author & Copyright.

IT administrators who manage a fleet of Skype for Business users will have an easier time of diagnosing and fixing problems that may arise for them. Microsoft unveiled the beta of a new Call Analytics Dashboard on Monday, which is supposed to provide admins with a diagnosis of issues that users are having on a call.

There are several different issues that could arise and cause a degradation in call quality, which is why these analytics are helpful. If a user complains about a call only working intermittently, it can be hard to diagnose whether that’s an issue with their network connection, headset, Microsoft’s infrastructure, or something else.

The new dashboard may make companies more likely to migrate from their legacy communications infrastructure to Skype for Business, since being able to understand the issues that crop up can help with the transition.

That dashboard is one of a handful of Skype for Business features Microsoft announced Monday, as part of the Enterprise Connect unified communications conference.

The company also added two new capabilities aimed at serving call centers. Auto Attendant lets businesses set up a system of menus that callers can navigate using their phone keypad. (Think: “For warranty claims, press 1.”)

Call Queues are built for environments like customer service hotlines where there are groups of Skype for Business users who could all answer the same incoming call. Callers are placed into a queue based on when they dialed in, and are automatically routed to the next available employee.

Both of those features are only available for companies using Skype for Business’s Cloud PBX feature, which is included in Microsoft’s premium Office 365 E5 subscription.

OpenStack Developer Mailing List Digest March 18-24

The content below is taken from the original (OpenStack Developer Mailing List Digest March 18-24), to continue reading please visit the site. Remember to respect the Author & Copyright.

SuccessBot Says

  • Yolanda [1]: Wiki problems have been fixed, it’s up and running
  • johnthetubaguy [2]: First few patches adding real docs for policy have now merged in Nova. A much improved sample file [3].
  • Tell us yours via OpenStack IRC channels with message “#success <message>”
  • All: [4]

Release Naming for R

  • It’s time to pick a name for our “R” release.
  • The associated summit will be in Vancouver, so the geographic region has been chosen as “British Columbia”.
  • Rules:
    • Each release name must start with the letter of the ISO basic Latin alphabet following the initial letter of the previous release, starting with the initial release of “Austin”. After “Z”, the next name should start with “A” again.
    • The name must be composed only of the 26 characters of the ISO basic Latin alphabet. Names which can be transliterated into this character set are also acceptable.
    • The name must refer to the physical or human geography of the region encompassing the location of the OpenStack design summit for the corresponding release. The exact boundaries of the geographic region under consideration must be declared before the opening of nominations, as part of the initiation of the selection process.
    • The name must be a single word with a maximum of 10 characters. Words that describe the feature should not be included, so “Foo City” or “Foo Peak” would both be eligible as “Foo”.
  • Full thread [5]

Moving Gnocchi out

  • The project Gnocchi which has been tagged independent since it’s inception has potential outside of OpenStack.
  • Being part of the big tent helped the project be built, but there is a belief that it restrains its adoption outside of OpenStack.
  • The team has decided to move it out of OpenStack [6].
    • In addition, it will move out of the OpenStack infrastructure.
  • Gnocchi will continue to thrive and be used by OpenStack projects such as Ceilometer.
  • Full thread [7]

POST /api-wg/news

  • Guides under review:
    • Define pagination guidelines (recently rebooted) [8]
    • Create a new set of api stability guidelines [9]
    • Microversions: add next_min_version field in version body [10]
    • Mention max length limit information for tags [11]
    • Add API capabilities discovery guideline [12]
    • WIP: microversion architecture archival doc (very early; not yet ready for review) [13]
  • Full thread [14]

Ofcom wants automatic compensation for the people when ISPs fail

The content below is taken from the original (Ofcom wants automatic compensation for the people when ISPs fail), to continue reading please visit the site. Remember to respect the Author & Copyright.

Ofcom has begun consulting on the UK government’s desire to have consumers and SMEs compensated when telcos fail to deliver, as set out in the Digital Economy Bill, even though the Bill hasn’t yet reached the Royal Assent stage.

Ofcom interprets the crowd-pleasing gesture as involving automatic compensation for delayed services, delayed repairs or missed appointments. It wants SMEs, as well as consumers, to be able to receive compensation.

In response, BT, Sky and Virgin Media have put forward a voluntary alternative code of practice, which Ofcom doesn’t think goes far enough. [Surely that’s Parliament’s job? – ed.]

Unfulfilled installations would incur a compensation payment of £6 for each day of delay, and delayed repairs £10 a day if the repair isn’t completed within two days. Missed appointments would incur an automatic £30 compensation fee. The customer would receive the money within 30 days as a credit or cash payment.

The pledge in the Digital Economy Bill takes the form of an amendment to Section 51 of the 2003 Communications Act.

The Bill’s language doesn’t specify telcos, but actually refers to “digital services”, raising the tantalising prospect of punters getting a fiver out of Wikipedia for an incorrect Wikifactoid, or from Google for some irrelevant Google search results. But alas, the 2003 chapter refers to electronic communication networks, not the “digital” services on top of them.

“Today’s proposals apply to fixed broadband and landline telephone services only,” Ofcom reminds us.

Phew. Where would it all end? ®

Introducing Backup Pre-Checks for Backup of Azure VMs

The content below is taken from the original (Introducing Backup Pre-Checks for Backup of Azure VMs), to continue reading please visit the site. Remember to respect the Author & Copyright.

Over the past couple of weeks, we have announced multiple enhancements for backup and recovery of both Windows and Linux Azure Virtual Machines that reinforce Azure Backup’s cloud-first approach of backing up critical enterprise data in Azure. Enterprise production environments in Azure are becoming increasingly dynamic and are characterized by frequent VM configuration changes (such as network or platform related updates) that can adversely impact backup. Today, we are taking a step to enable customers to monitor the impact of configuration changes and take steps to ensure the continuity of successful backup operations. We are excited to announce the preview of Backup Pre-Checks for Azure Virtual Machines.

Backup Pre-Checks, as the name suggests, check your VMs’ configurations for issues that can adversely affect backups, aggregate this information so you can view it directly from the Recovery Services Vault dashboard, and provide recommendations for corrective measures to ensure successful file-consistent or application-consistent backups, wherever applicable. All this without deploying any additional infrastructure and at no additional cost.

Backup Pre-Checks run as part of the scheduled backup operations for your Azure VMs and complete with one of the following states:

  • Passed: This state indicates that your VM’s configuration is conducive to successful backups and no corrective action needs to be taken.
  • Warning: This state indicates one or more issues in the VM’s configuration that might lead to backup failures, and provides recommended steps to ensure successful backups. Not having the latest VM Agent installed, for example, can cause backups to fail intermittently and falls in this class of issues.
  • Critical: This state indicates one or more critical issues in the VM’s configuration that will lead to backup failures, and provides required steps to ensure successful backups. A network issue caused by an update to a VM’s NSG rules, for example, will fail backups because it prevents the VM from communicating with the Azure Backup service, and falls in this class of issues.

Value proposition

  • Identify and monitor VM configuration issues at scale: With the aggregated view of Backup Pre-Check status across all VMs on the Recovery Services Vault, you can keep track of how many VMs need corrective configuration changes to ensure successful backups.
  • Resolve configuration issues more efficiently: Use the Backup Pre-Check states to prioritize which VMs need configuration changes. Address the “Critical” Backup Pre-Check status for your VMs first, using the specific required steps, and ensure their backups succeed before addressing the “Warning” Backup Pre-Check states.
  • Automated execution: You don’t need to maintain or apply separate schedules for Backup Pre-Checks; they are integrated with existing backup schedules, so they execute automatically and capture the latest VM configuration information at the same cadence as the backups.

Getting started

Follow the steps below to start resolving any issues reported by Backup Pre-Checks for Virtual Machine backups on your Recovery Services Vault.

  • Click on the ‘Backup Pre-Check Status (Azure VMs)’ tile on the Recovery Services Vault dashboard.
  • Click on any VM with a Backup Pre-Check status of either Critical or Warning. This opens the VM details blade.
  • Click the notification at the top of the blade to reveal the description of the configuration issue and the remedial steps.

Related links and additional content

Good news, everyone! Two pints a day keep heart problems at bay

The content below is taken from the original (Good news, everyone! Two pints a day keep heart problems at bay), to continue reading please visit the site. Remember to respect the Author & Copyright.

Moderate drinking is good for you, a BMJ-published study has found, directly contradicting the advice of the UK government’s “Chief Medical Officer”, who advised last year there was “no safe level” of drinking. A daily pint reduces risk of a heart attack and angina by a third, a big data study of Brit adults has found, while total abstinence increases the risk by 24 per cent.

The proposition that alcohol has health benefits, and teetotalism invites greater health risks, has been established for a long time. A metastudy by Sheffield University noted that four out of five studies examined showed moderate drinking correlated with a reduction in mortality, with “moderate” defined as around three pints of beer a day for men, and two glasses of wine for women (as recently as the 1960s, official health advice suggested that a bottle of wine a day was fine).

But the moderation message alarmed puritanical health campaigners and prohibitionists, who found a champion in civil servant Dame Sally Davies. Davies declared in January 2016 that there was “no safe level” of drinking based on highly contested “evidence”. Campaigners argued that the samples of teetotallers in studies included former drinkers, who had already been “damaged” by years of drinking, and the conclusions were therefore unsound.

The study published in the BMJ today demolishes that challenge, by separating out former drinkers from never-drinkers. Examining 1.93 million UK health records, researchers at the University of Cambridge and University College London concluded that “the protective effect observed for moderate drinking and major clinical outcomes such as myocardial infarction, ischaemic stroke, sudden coronary death, heart failure, peripheral arterial disease, and abdominal aortic aneurysm is present even after separation of the group of current non-drinkers into more specific categories.” [our emphasis] The results for teetotallers are as follows:

Non-drinking was associated with an increased risk of unstable angina (hazard ratio 1.33, 95% confidence interval 1.21 to 1.45), myocardial infarction (1.32, 1.24 to 1.41), unheralded coronary death (1.56, 1.38 to 1.76), heart failure (1.24, 1.11 to 1.38), ischaemic stroke (1.12, 1.01 to 1.24), peripheral arterial disease (1.22, 1.13 to 1.32), and abdominal aortic aneurysm (1.32, 1.17 to 1.49) compared with moderate drinking (consumption within contemporaneous UK weekly/daily guidelines of 21/3 and 14/2 units for men and women, respectively). Heavy drinking (exceeding guidelines) conferred an increased risk of presenting with unheralded coronary death (1.21, 1.08 to 1.35), heart failure (1.22, 1.08 to 1.37), cardiac arrest (1.50, 1.26 to 1.77), transient ischaemic attack (1.11, 1.02 to 1.37), ischaemic stroke (1.33, 1.09 to 1.63), intracerebral haemorrhage (1.37, 1.16 to 1.62), and peripheral arterial disease (1.35, 1.23 to 1.48), but a lower risk of myocardial infarction (0.88, 0.79 to 1.00) or stable angina (0.93, 0.86 to 1.00).

It should be noted that until recently journals rejected studies where the headline RR (relative risk or risk ratio) was under 3.0. However, more than 80 previous studies have come to the same conclusion about alcohol risk, showing a J-curve: the risk for total abstainers is higher than for moderate consumers (20g a day, or two pints of beer a day for adult males), and risk then rises again as consumption increases.

Perhaps wary of the reaction from the prohibitionists, the researchers stop short of recommending a change in the official guidelines, merely a more “nuanced” message.

But the evidence seems compelling: if we must eat “five a day”, shouldn’t we also drink “two a day”? ®

Related Link

The BMJ

CoreOS extends its Tectonic Kubernetes service to Azure and OpenStack

The content below is taken from the original (CoreOS extends its Tectonic Kubernetes service to Azure and OpenStack), to continue reading please visit the site. Remember to respect the Author & Copyright.

While CoreOS is probably still best known for its Linux distribution, that was only the company’s gateway drug to a wider range of services. Tectonic, the company’s service for running Kubernetes-based container deployments in the enterprise, sits at the core of its business. Until now, Tectonic could only be used for installing and managing Kubernetes on bare-metal and AWS, but starting today, it will also support Azure and OpenStack. Support for these two platforms is currently in preview.

In practice, this means that the CoreOS Tectonic Installer, which will be available under an open source license, now allows you to set up Kubernetes clusters on Azure and OpenStack. Google’s Cloud Platform is obviously still missing from this list, but chances are CoreOS will also add support for the Google Cloud in the future (assuming there is enough demand).

As before, Tectonic remains free for deployments on up to 10 nodes. To help new users get started with this technology, the company also released a few hands-on tutorials that provide step-by-step instructions for setting up a Kubernetes cluster with the help of its service.

CoreOS’ other main service is Quay, its container registry for the enterprise. It’s extending Quay to offer better support for Kubernetes-based applications, which can often include multiple container images (plus the configuration files to make them all work together).

“By leveraging a new registry plugin, Helm can now interact directly with Quay to pull an application definition and then use this to retrieve the necessary images and apply the configurations to ensure the application is successfully deployed,” the company explains in today’s announcement. “All of this is done through a community-driven API specification, called App Registry, that enables the Kubernetes ecosystem to develop more sophisticated tools and more reliable deployment pipelines.”

The End of Backup

The content below is taken from the original (The End of Backup), to continue reading please visit the site. Remember to respect the Author & Copyright.

Andres Rodriguez

Nasuni

Andres Rodriguez is CEO of Nasuni.

No one loves their backup, and when I was CTO of the New York Times, I was no exception.  Traditional data protection has only survived this long because the alternative is worse: losing data is one of those job ending—if not career ending—events in IT. Backup is like an insurance policy. It provides protection against an exceptional and unwanted event.

But like insurance policies, backups are expensive, and they don’t add any additional functionality. Your car doesn’t go any faster because you’re insured and your production system doesn’t run any better with backup. As many IT professionals have discovered too late, backups are also unreliable, a situation made even worse by the fact that bad backups typically aren’t discovered until there’s a need to restore. At that point, IT is really out of luck.

Fortunately, backup as we have known it is ending. Significant improvements in virtualization, synchronization and replication have converged to deliver production systems that incorporate point-in-time recovery and data protection as an integral component. These new data protection technologies are no longer engaged only when a system fails. Instead, they run constantly within live production systems.

With technology as old and entrenched as backup, it helps to identify its value, and then ask whether we can get the same result in a better way. Backup accomplishes two distinct and crucial jobs. First, it captures a point-in-time version or snapshot of a data set. Second, it writes a copy of that point-in-time version of the data to a different system, location or preferably both. Then, when IT wants to restore, it must find the right version and copy the data back to a fresh production system. When backup works, it protects us against point-in-time corruptions such as accidentally deleted files, ransomware attacks or complete system meltdowns.

Ensuring that backups restore as advertised, in my experience, requires periodic testing and a meticulous duplication of failures without affecting production systems. Many IT teams lack the resources or the cycles to make sure their backups are really functioning, and backups can fail in ways that are hard to detect. These failures may have no impact on a production system until something goes wrong. And when the backup fails to restore, everything goes wrong.

Modern data protection relies on a technology trifecta: virtualization, synchronization and replication. Together, they address the critical flaws in traditional backups.

  • Virtualization separates live, changing data from stable versions of that data.
  • Synchronization efficiently moves the changes between successive stable versions to a replication core.
  • Replication then spreads identical copies of those versions across target servers distributed across multiple geographic locations.

Essentially, this describes the modern “cloud,” but these technologies have been around and evolving for decades.
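As a purely illustrative sketch of that trifecta (not Nasuni's implementation or any particular product), the toy Python below captures a point-in-time version of a directory as content-addressed files, synchronizes only previously unseen content to a core store, and then replicates that store to several locations. Directory names are placeholders.

```python
# Toy model of version capture, synchronization and replication (illustrative only).
import hashlib
import os
import shutil

def snapshot(src_dir):
    """Capture a point-in-time version: map each file path to a hash of its content."""
    version = {}
    for root, _, files in os.walk(src_dir):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                version[os.path.relpath(path, src_dir)] = hashlib.sha256(f.read()).hexdigest()
    return version

def sync(src_dir, version, core_dir):
    """Move only content the core store has not seen before (deduplicated by hash)."""
    os.makedirs(core_dir, exist_ok=True)
    for rel_path, digest in version.items():
        target = os.path.join(core_dir, digest)
        if not os.path.exists(target):
            shutil.copyfile(os.path.join(src_dir, rel_path), target)

def replicate(core_dir, replica_dirs):
    """Spread identical copies of the core store across multiple locations."""
    for replica in replica_dirs:
        os.makedirs(replica, exist_ok=True)
        for item in os.listdir(core_dir):
            dst = os.path.join(replica, item)
            if not os.path.exists(dst):
                shutil.copyfile(os.path.join(core_dir, item), dst)

if __name__ == "__main__":
    version = snapshot("live_data")                      # stable version of the live data
    sync("live_data", version, "core_store")             # only new content moves
    replicate("core_store", ["replica_eu", "replica_us"])
```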

Because this approach merges live data with protected versions of the data, it dramatically increases the utility and the overall reliability of the system. Operators can slice into the version stream of the data in order to fork a DevTest instance of their databases. They can create multiple live instances of the production front-end to synchronize files across geographies and provide instant recoverability to previous versions of a file. Most importantly, because modern data protection does not need to copy the data as a separate process, it eliminates the risk of this process failing inadvertently and silently. In this way, data protection becomes an integrated component of a healthy production system.

Two distinct approaches to this new breed of data protection have emerged: SAN and NAS. The SAN world relies on block-level virtualization and is dominated by companies like Actifio, Delphix and Zerto. With blocks, the name of the game is speed. The workloads tend to be databases and VMs, and the production system’s performance cannot be handicapped in any significant way. The SAN vendors can create point-in-time images of volumes at the block level and copy them to other locations. Besides providing data protection and recovery functionality, these technologies make it much easier to develop, upgrade and test databases before launching them into production. And while this ability to clone block volumes is compelling for DevTest, in the world of files, it changes everything.

The NAS world relies on file-level virtualization to accomplish the same goal. Files in the enterprise depend on the ability to scale, so it’s critical to support file systems that can exceed the physical storage footprint of a device. Vendors have used caching and virtualization in order to scale file systems beyond the limitations of any one hardware device. NAS contenders capture file-level versions and synchronize them against cloud storage (a.k.a. object storage) backends. Essentially, these are powerful file replication backends running across multiple geographic locations, operating as a service backed by the likes of Amazon, Microsoft and Google. The benefit of these systems is not only their unlimited scale but the fact that the files are being protected automatically. The file versions are synchronized into the cloud storage core. The cloud, in this sense, takes the place of the inexpensive but ultimately unreliable traditional media for storing backups. This shift to cloud is not only more reliable, but it can dramatically reduce RPO (recovery point objectives) from hours to minutes.

And there is more. Unlike SAN volumes, NAS file volumes can be active in more than one location at the same time. Think file sync & share, but at the scale of the datacenter and branch offices. This approach is already being used by media, engineering and architecture firms to collaborate on large projects across geographies. It can also simplify disaster recovery operations from one site to another, as any active-passive configuration is a more restricted, more basic version of these active-active NAS deployments.

We are entering a new era for data protection. Simply put, backup is ending, and, based on my experience as a CTO and the many conversations I’ve had with our customers, that is a good thing. We are moving away from a process that makes data restores a failure mode scenario to one where data is protected continuously. In doing so, we are not only taking the risk out of data protection. We are launching exciting new capabilities that make organizations more productive.

Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

Azure Resource Manager template reference now available

The content below is taken from the original (Azure Resource Manager template reference now available), to continue reading please visit the site. Remember to respect the Author & Copyright.

We have published new documentation for creating Azure Resource Manager templates. The documentation includes reference content that presents the JSON syntax and property values you need when adding resources to your templates.

If you are new to Resource Manager and templates, see Azure Resource Manager overview for an introduction to the terms and concepts of Azure Resource Manager.

Simplify template creation by copying JSON directly into your template

The template reference documentation helps you understand what resource types are available, and what values to use in your template. It includes the API version number to use for each resource type, and all the valid properties. You simply copy the provided JSON into the resources section of your template, and edit the values for your scenario.
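For example, the JSON the reference hands you for a storage account looks roughly like the resource below, assembled here as a Python dict purely for illustration; the apiVersion and property values are examples to be verified against the reference, not authoritative.

```python
# Illustrative only: the rough shape of a template with one storage account
# resource. Check the template reference for the current apiVersion and the
# full set of valid properties.
import json

template = {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "storageAccountName": {"type": "string"}
    },
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2016-01-01",                    # example; verify in the reference
            "name": "[parameters('storageAccountName')]",
            "location": "[resourceGroup().location]",
            "sku": {"name": "Standard_LRS"},
            "kind": "Storage",
            "properties": {}
        }
    ]
}

print(json.dumps(template, indent=2))
```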

The property tables describe the available values.

Find a resource type

You can easily navigate through the available types in the left pane. However, if you know the resource type, you can go directly to it with the following URL format:

http://bit.ly/2mTf8yR

For example, the SQL database reference content is available at:

http://bit.ly/2nFpF4Y

Please give us your feedback

The template reference content represents a new type of documentation for docs.microsoft.com. As you use it to build your templates, let us know how it can be improved. Please provide feedback about your experience.

The anatomy of a powerful desktop with an ARM chip

The content below is taken from the original (The anatomy of a powerful desktop with an ARM chip), to continue reading please visit the site. Remember to respect the Author & Copyright.

When he was growing up, a dream of Linux pioneer Linus Torvalds was to acquire the Acorn Archimedes, a groundbreaking personal computer with the first ARM RISC chips.

But in 1987, the Archimedes wasn’t available to Torvalds in Finland, so he settled for the Sinclair QL. In the meantime, the Archimedes failed and disappeared from the scene, killing any chance for ARM chips to dominate PCs.

Since then, multiple attempts to put ARM chips in PCs have failed. Outside of a few Chromebooks, most PCs have x86 chips from Intel or AMD.

The domination of x86 is a problem for Linaro, an industry organization that advocates ARM hardware and software. Many of its developers use x86 PCs to compile programs for ARM hardware. That’s much like trying to write Windows programs on a Mac.

That fact doesn’t sit well with George Grey, CEO of Linaro.

“Linus mentioned this a little while ago: How do we get developers to work on ARM first? Why are we still using Intel tools?” Grey asked during a speech at this month’s Linaro Connect conference in Budapest.

A powerful Linux laptop or mini-desktop based on an ARM processor needs to be built so developers can write and compile applications, he said.

“Maybe we can take a Chromebook design and put in more memory, get upstream Linux support on it, and use it as a developer platform for developers to carry to conferences,” Grey said then.

To further that idea, a group of ARM hardware enthusiasts gathered in a room at Linaro Connect to conceptualize a powerful ARM PC. The group settled on building a computer like the Intel NUC — a mini-desktop with a powerful board computer in it.

The free-flowing session was entertaining, with attendees passionately sharing ideas on the chip, memory, storage, and other components in the PC.

The session, which is available on Linaro’s site, also highlighted issues involved in building and supporting an ARM-based PC. There were concerns about whether ARM chips would deliver performance adequate to run powerful applications.

There were also concerns about components and about providing a Linux user experience acceptable to users.

Also important was building a viable ARM PC that would attract hardware makers to participate in such an effort. One worry was the reaction of the enthusiast audience, who might sound off if an ARM desktop didn’t work properly, putting hardware vendors and chipmakers at the receiving end of criticism and bad press.

“Based on research and efforts today, building an ideal PC is going to be hard,” said Yang Zhang, director of the technologies group at Linaro.

Attendees quickly agreed that the ARM PC would need an expandable x86-style board with DDR4 memory DIMM slots, and NVMe or SATA slots for plugging in SSDs or other drives. Other features would include Gigabit Ethernet and USB ports.

“Definitely, we need to be looking at something with real I/O, not some crappy mobile chipset with soldered-on 2GB of RAM,” one attendee said. (Attendees aren’t identified in the recording of the discussion.)

Many ARM-based computer boards like Raspberry Pi 3 and Pine64 can be used as PCs, but have limited expandability and components integrated on the board. They aren’t ideal for PCs handling heavy workloads.

Also, Zhang pointed out that LPDDR4, which is used in such “mobile” chipsets, is slower than DDR4 memory, which is why the DIMM slots would be needed on the ARM PC.

Next, the discussion shifted to the system-on-chip, and suggestions were made to use CPUs from companies including Marvell and Nvidia. Chips from Qualcomm, Cavium, and HiSilicon weren’t suggested because those companies were uninterested in building a PC-style computer for development with Linaro. Ironically, Qualcomm’s Snapdragon 835 will be used in Windows 10 PCs later this year.

An interesting suggestion was Rockchip’s RK3399, which is being used in Samsung’s Chromebook Pro and has PCI-Express and USB 3.0. Google and Samsung have been putting in a decent amount of work for Linux support on the chip. But it is still a mobile chip, not designed for a full-powered ARM desktop.

“I have a 24-core Opteron rig. To replace that I would need a 64-core Cortex A73 or something, which doesn’t exist,” said the attendee who suggested the RK3399.

The discussion became a battle between server chips and mobile chips, which each had their issues. While the server chips boast good software support, they are expensive. The mobile chips are cheap but have poor Linux OS support. Software support would need to be added by independent developers, and that can be a considerable amount of work.

In 2015, 96boards — the ARM hardware effort of Linaro — built a development board called HuskyBoard with AMD’s Opteron A1100 server chip, but that didn’t go well. AMD has now abandoned ARM server chips and recently released the 32-core Naples chip based on its x86 Zen architecture.

The initial PC will perhaps have a server chip with decent Linux kernel support. Standard interfaces, sufficient memory, and decent graphics will matter more, as will ensuring that standard components like heatsinks and memory DIMMs can be bought off the shelf.

The purpose of the gathering was to get the ball rolling for the development of a real desktop based on ARM. The PC will likely be developed by 96boards, which provides specifications to build open-source development boards.

Sensor-laden fake fruit ensures you get fresh produce

The content below is taken from the original (Sensor-laden fake fruit ensures you get fresh produce), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s challenging for shippers to check the quality of fruit on its way to your grocery store. Most sensors won’t reflect the conditions inside the boxes, and plucking a sample isn’t going to give you a comprehensive look. That’s where some Swiss researchers might come to the rescue. They’ve created artificial, sensor-packed fruit whose composition is enough like the real thing to provide an accurate representation of temperatures when placed alongside real food. If the fruit in the middle of a delivery isn’t properly refrigerated, the shipping company would know very quickly.

To create the fruit, the team first X-rays a real example and then has an algorithm generate the average shape and texture to produce a 3D-printed shell. From there, the team fills the shell with simulated fruit flesh made of carbohydrates, polystyrene and water. The result is obviously unnatural (and accordingly inedible), but realistic enough to produce accurate results in early testing.

And importantly, this would be relatively cheap. An entire fake fruit would cost about $50 US, and you could reuse it many times. The trickiest part would be getting real-time data. Right now, you have to stop and check the sensors to get results. The current design isn’t equipped to wirelessly transmit data, so there’s no way to get an instant notice while the foodstuffs are in mid-route. Even so, the tech could more than pay for itself if it helps produce companies avoid mistakes and deliver healthier produce to your local shop.

Via: TechCrunch

Source: Empa

Google Cloud Platform gets IPv6 support

The content below is taken from the original (Google Cloud Platform gets IPv6 support), to continue reading please visit the site. Remember to respect the Author & Copyright.

http://bit.ly/2mtAyqY

Pwn2Own hacking contest ends with two virtual machine escapes

The content below is taken from the original (Pwn2Own hacking contest ends with two virtual machine escapes), to continue reading please visit the site. Remember to respect the Author & Copyright.

Two teams of researchers managed to win the biggest bounties at this year’s Pwn2Own hacking contest by escaping from the VMware Workstation virtual machine and executing code on the host operating system.

Virtual machines are used in many scenarios to create throw-away environments that pose no threat to the main operating system in case of compromise. For example, many malware researchers execute malicious code or visit compromised websites inside virtual machines to observe their behavior and contain their impact.

One of the main goals of hypervisors like VMware Workstation is to create a barrier between the guest operating system that runs inside the virtual machine and the host OS where the hypervisor runs. That’s why VM escape exploits are highly prized, more so than browser or OS exploits.

This year, the organizers of Pwn2Own, an annual hacking contest that runs during the CanSecWest conference in Vancouver, Canada, offered a prize of US$100,000 for breaking the isolation layer enforced by the VMware Workstation or Microsoft Hyper-V hypervisors.

Friday, on the third and final day of the contest, two teams stepped up to the challenge, both of them from China.

Team Sniper, made up of researchers from the Keen Lab and PC Manager divisions of internet services provider Tencent, chained together three vulnerabilities to escape from the guest OS running inside VMware Workstation to the host OS.

The other team, from the security arm of Qihoo 360, achieved an even more impressive attack chain that started with a compromise of Microsoft Edge, moved to the Windows kernel, and then escaped from the VMware Workstation virtual machine. They were awarded $105,000 for their feat.

The exploit scenarios were difficult to begin with, because attackers had to start from a non-privileged account on the guest OS, and the VMware Tools, a collection of drivers and utilities that enhance the virtual machine’s functionality, were not installed. VMware Tools would have probably provided more attack surface had they been present.

Also on the third day, researcher Richard Zhu successfully hacked Microsoft Edge, complete with a system-level privilege escalation that earned him $55,000. It was the fifth Microsoft Edge exploit demonstrated during the competition.

Apple’s Safari fell four times, Mozilla Firefox once, but Google Chrome remained unscathed. Researchers also demonstrated two exploits for Adobe Reader and two for Flash Player, both with sandbox escapes. The contest also included many privilege escalation exploits on Windows and macOS.

The Qihoo 360 team earned the most points and was crowned Master of Pwn for this year’s edition. It was followed by Tencent’s Team Sniper and a team from the security research lab of China-based Chaitin Technology.

The researchers have to share their exploits with security vendor Trend Micro, the contest’s organizer, which then reports them to the affected software vendors.

Azure Site Recovery available in five new regions

The content below is taken from the original (Azure Site Recovery available in five new regions), to continue reading please visit the site. Remember to respect the Author & Copyright.

To increase our service’s global footprint, we recently announced the expansion of Azure Site Recovery to Canada and UK regions. Apart from these two new countries, we have also deployed our service in West US2, making it available to all non-government Azure regions in the United States.

With this expansion, Azure Site Recovery is now available in 27 regions worldwide including Australia East, Australia Southeast, Brazil South, Central US, East Asia, East US, East US2, Japan East, Japan West, North Europe, North Central US, Southeast Asia, South Central US, West Central US, West US2, US Gov Virginia, US Gov Iowa, West Europe, West US, North East China, East China, South India, Central India, UK South, UK West, Canada East, and Canada Central.

Customers can now select any of the above regions to deploy ASR. Irrespective of the region you choose to deploy in, ASR guarantees the same reliability and performance levels as set forth in the ASR SLA. To learn more about Azure Site Recovery visit Getting started with Azure Site Recovery. For more information about the regional availability of our services, visit the Azure Regions page.