Automate the Freight: Drones Across the Sea


Of the many technological advances of the 20th century, which would you rank as having had the most impact on the global economy? Would it be the space program, which gave rise to advances in everything from communications satellites to advanced composite materials? Or would it be the related aerospace industry, which stitched the world together so tightly that you can be almost anywhere on the planet within 24 hours? Or perhaps it’s the Internet, the global platform for buying almost anything from almost anyone.

Those are all important, but for the most economically impactful technology of the 20th century, I’d posit that the lowly shipping container and the containerized cargo industry that grew around it win, hands down.

How could an almost technology-free steel box compete with the bells and whistles of spacecraft, jet airliners, and a global network of computers? When you think about it, moving goods from point A to point B is one of the key tasks in a global economy. And when your globe’s surface is 70% water, moving goods by ship is something that you need to be really good at. Until the mid-1950s, almost every ship was loaded by hand, with boxes and crates winched from dockside up into holds, where they were arranged by stevedores who as often as not helped themselves to whatever they wanted. Ships took weeks to load, cargoes were relatively small, and shipping was expensive.

By Photocapy (Flickr) [CC BY-SA 2.0], via Wikimedia Commons

The standardized intermodal shipping container changed all that — cargo was now fairly secure and handled in bulk by cranes. Ships could be loaded and unloaded quickly, turning weeks dockside into hours. Ships have become enormous as a result, and the reason you can order a widget on AliExpress or eBay and have it show up pretty quickly is that it got crammed into a container in China and crossed an ocean along with thousands of other lowly steel boxes, each full of something someone is convinced they want.

Land, Air, and Sea

In my earlier article about automating freight, I focused mainly on land-based, long-haul freight shipped by driverless trucks. But I also touched on automating the ships that ply our oceans. Most container ships these days have barely a dozen crewmembers aboard. Technology has automated away most of the jobs on a ship, and little stands in the way of complete automation.

I have little doubt that day will come, but there’s a problem with this mode of freight: ships are slow. The modern container fleet averages about 15 knots, meaning that a crossing of the Pacific can take something like three weeks. That’s an amazingly short trip compared to the middle of the last century, but it might be too long for some kinds of shipments — produce, for instance. Yes, you can ship across the ocean by standard air freight, but at a high premium compared to surface shipping.

Could there be another way? San Jose, CA-based startup Natilus thinks so, and they’re working on autonomous freight aircraft to ply the same routes that container ships currently dominate. They have ambitious plans: 200-foot long UAVs that will tote 100 tons of freight across the Pacific in 30 hours or less. The company and the concept appear to be in their infancies now, but they plan a test of a 30-foot scale model of their freighter this summer in San Francisco.

But in this day and age of self-crashing cars and fears of drone shootdowns, what makes Natilus think they’ll be allowed to fly a drone the size of a Boeing 777 above population centers? Here’s the clever bit: they won’t. Natilus intends to fly their drones completely over the ocean. What’s more, the drones won’t even use airports; they’ll be seaplanes, and will land, unload, load, and take off at or near seaports. There’s obviously going to be an efficiency hit compared to container ships, since cargo will need to be handled more than once. But if Natilus figures out how to leverage the venerable 20-foot shipping container format, my guess is the loss of efficiency will be more than covered by a 2000% faster transit time.

Obviously, Natilus and its eventual competitors have a huge number of problems to solve. Surprisingly, I think the drone part of the equation isn’t one of them — we’ve already got a pretty good idea how to make big UAVs and fly them safely over land. I think the problems lie more with the infrastructure that’ll need to be in place on both ends of the journey. Seaplane landings are no trivial matter, and even without passengers to fill up the “For Discomfort” bags on a rough landing, the cargo and the plane itself will still require some pretty smooth water to use as a runway. There will also need to be automated barges to ferry cargo to and from the docks, facilities both at sea and on land to build, and a thousand regulatory minefields to cross.

As the saying goes, whatever the laws of physics don’t specifically prohibit is just an engineering problem, and in this case, I’ll bet that economic forces will overcome the technical issues and provide us with much more affordable overnight overseas deliveries. And if it happens, it’ll be due to innovative thinking and automating away the problems.

Azure IoT Suite connected factory now available


Getting Started with Industrie 4.0

Many customers tell us that they want to start with the digital transformation of their assets, for example production lines, as well as their business processes. However, many times they just don’t know where to start or what exactly Industrie 4.0 is all about. At Microsoft, we are committed to enabling businesses of all sizes to realize their full potential and today we are proud to announce our connected factory preconfigured solution and six-step framework to quickly enable you to get started on your Industrie 4.0 journey.

Azure IoT Suite preconfigured solutions are engineered to help businesses get started quickly and move from proof-of-concept to broader deployment. The connected factory preconfigured solution leverages Azure services including Azure IoT Hub and the new Azure Time Series Insights. It uses the OPC Foundation’s cross-platform OPC UA .NET Standard Library reference stack for OPC UA connectivity (and, via the included wrapper, for OPC DA and other OPC Classic protocols), and includes a rich web portal with OPC UA server management capabilities, alarm processing and telemetry visualizations. The web portal and Azure Time Series Insights can be used to quickly spot trends in OPC UA telemetry data and to track Overall Equipment Effectiveness (OEE) and several key performance indicators (KPIs), such as the number of units produced and energy consumption.
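
To give a rough feel for the IoT Hub end of this pipeline, here is a minimal sketch of a device pushing a simulated station reading, written with the Azure IoT device SDK for Python. The connection string and the payload fields are placeholders, and the actual connected factory solution feeds OPC UA telemetry through OPC Publisher rather than hand-written device code:

    # pip install azure-iot-device
    # Minimal sketch: push a simulated station KPI reading to Azure IoT Hub.
    # Connection string and payload fields are placeholders.
    import json
    import time

    from azure.iot.device import IoTHubDeviceClient, Message

    CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>"

    def main():
        client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
        client.connect()
        try:
            payload = {
                "station": "assembly-line-1/station-2",   # hypothetical station ID
                "timestamp": time.time(),
                "unitsProduced": 42,
                "energyConsumptionKwh": 3.7,
            }
            msg = Message(json.dumps(payload))
            msg.content_type = "application/json"
            msg.content_encoding = "utf-8"
            client.send_message(msg)      # device-to-cloud telemetry
        finally:
            client.shutdown()

    if __name__ == "__main__":
        main()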

This solution builds on the industry-leading cloud connectivity for OPC UA that we first announced at Hannover Messe a year ago. Since then, all components of this connectivity have been released cross-platform and open source on GitHub in collaboration with the OPC Foundation, making Microsoft the largest open-source contributor to the OPC Foundation. Furthermore, the entire connected factory preconfigured solution is also published open source on GitHub.

Azure IoT Suite is the best solution for Industrie 4.0

As we demonstrated at Hannover Messe 2016, we believe that the Azure IoT Suite is the best choice for businesses to cloud-enable industrial equipment — including already deployed machines, without disrupting their operation — to allow for data and device management, insights, machine learning capabilities and even the ability to manage equipment remotely.

To demonstrate this functionality, we have gone to great lengths to build real OPC UA servers into the solution, grouped into assembly lines where each OPC UA server is responsible for a “station” within the assembly line. Each assembly line produces simulated products. We even built a simple Manufacturing Execution System (MES) with an OPC UA interface, which controls each assembly line. The connected factory preconfigured solution includes eight such assembly lines, which run in a Linux virtual machine on Azure. Our Azure IoT Gateway SDK is also used in each simulated factory location.

Secure by design, secure by default

As verified by the BSI Study, OPC UA is secure by default. Microsoft goes one step further and makes sure that the OPC UA components used in the connected factory solution are secure by default, giving you a secure base to build your own solution on. Secure by default means that all security features are turned on and already configured, so you don’t need to do this step manually and can see how an end-to-end solution is secured.

Easy to extend with real factories

We have made it as simple as possible to extend the connected factory preconfigured solution with real factories. For this, we have partnered with several industry leaders in the OPC UA ecosystem who have built turnkey gateway solutions that have the Azure connectivity used by this solution already built in and are close to zero-config. These partners include Softing, Unified Automation, and Hewlett Packard Enterprise. Please visit our device catalog for a complete list of gateways compatible with this solution. With these gateways, you can easily connect your on-premises industrial assets to this solution.

We have gone even further and also provide open-source Docker containers, as well as pre-built Docker container images on Docker Hub, for the Azure connectivity components (OPC Proxy and OPC Publisher). Both are integrated in the Azure IoT Gateway SDK and available on GitHub, making a PoC with real equipment achievable in hours and enabling you to quickly draw insights from your equipment and plan commercialization steps based on these PoCs.

The future is now

Get started on the journey to cloud-enable industrial equipment with Azure IoT Suite connected factory preconfigured solution and see the solution in action at Hannover Messe 2017. To learn more about how IoT can help transform your business, visit http://bit.ly/2oUPiNi.

Learn more about Microsoft IoT

Microsoft is simplifying IoT so every business can digitally transform through IoT solutions that are more accessible and easier to implement. Microsoft has the most comprehensive IoT portfolio, with a wide range of offerings to meet organizations wherever they are on their IoT journey, including everything businesses need to get started: operating systems for their devices, cloud services to control them, advanced analytics to gain insights, and business applications to enable intelligent action. To see how Microsoft IoT can transform your business, visit http://bit.ly/2oUPiNi.

A Call for IoT Standards



Joonho Park

The Open Connectivity Foundation

Joonho Park is Executive Director of The Open Connectivity Foundation.

As anticipation for the Internet of Things has blossomed, so have misgivings and fears about its security vulnerabilities. Several high profile incidents, particularly the Mirai episode, have raised questions about the security risks posed by proliferating devices connected to the Internet. As companies and consumers continue their march into the brave new world of IoT, addressing these concerns will be essential.

Industry-wide standards and certifications are a solution with many obvious benefits; vendors and IoT experts can craft them with an eye for security and allaying customer concerns. These conclusions are backed by a survey conducted by the Open Connectivity Foundation (OCF), which shows both widespread concerns over IoT security and clear support for an industry standardization approach. Indeed, respondents viewed standards implementation and vendor cooperation as an effective way to address ease of use, interoperability and security concerns. Sixty percent of respondents indicated that they were more likely to purchase a connected device with some form of a security certification, a clear sign that standards and certifications would effectively improve customer faith in IoT security.

IoT security dramatically strode into the national spotlight last September with the arrival of Mirai. The now notorious malware finds and infects various IoT devices, assembles them into a centrally controlled botnet, and launches their traffic at targeted websites in massive DDoS attacks. Mirai’s inaugural assault was aimed at Dyn, a DNS service provider essential to the running of a multitude of different websites. The resulting outages were so widespread that a common refrain heard in news coverage was that Mirai had “broken the internet”, heralding an “IoT-pocalypse”. Subsequent reporting and analysis have continued to highlight the security vulnerabilities of IoT.

Trouble on the Horizon?

These anxieties are going mainstream. Security concerns are considered the second-highest barrier to IoT adoption, and improvements to device security the second most desired product change from IoT vendors. As connected device usage becomes universal and high-profile security breaches continue to receive national coverage, these concerns could form the bedrock of a crisis in consumer confidence that would blow back on individual vendors and the industry as a whole.

The development of industry standards and certifications is the most effective and relatively straightforward response to IoT security concerns. By cooperating, vendors can establish benchmarks for connected devices; these would cover everything from infrastructure to data protocols to security features. Such standardization would ensure that connected devices have baseline security protocols in place. Product certifications or security ratings would be the next step: vendors could signal to customers that the devices they purchase are up to agreed-upon industry standards. Adopting these initiatives would be a simple yet effective way to both tackle security shortfalls and allay the consumer concerns raised by those shortfalls.

Fortunately for IoT vendors, the news is not all negative. There is a widespread desire for connected devices; some 80 percent of respondents from the OCF study said that they planned to buy a connected device within six months, and fewer than 8 percent said that they currently had no connected devices. These responses are a clear sign of how pervasive IoT technology already is and how the market is set to continue its growth. However, an increase in the number of connected devices will exacerbate security problems if industry standards are not in place.

The distinct characteristics of connected devices, especially infrequent interaction from users, make them uniquely vulnerable to infection and manipulation by malicious actors. Traditional targets, such as personal PCs, are commonly interacted with, and performance issues can tip off users that something is wrong. In contrast, many connected devices, such as routers, sensors and cameras, are designed to operate without regular check-ins. Once attacked, they may not show any noticeable signs of infection, and will sit unrepaired and unreplaced. Baseline security standards are clearly a necessary measure, as vendors can’t count on user intervention to identify potential problems. Taken in conjunction with the clear customer support noted above, these technical considerations provide a compelling case for the introduction of common industry standards and associated certification programs. This is one of the more effective mechanisms available to address the security vulnerabilities of our IoT future and restore confidence in the industry.

The development of IoT standards and certifications is not only desirable from a security standpoint. The most commonly cited barrier to IoT adoption is interoperability; common industry standards would make the goals of device compatibility much more realistic. As such, standards would not only be a reactive response to security worries, but a springboard to developing features that customers want. However, vendors should consider security standards and certifications to be an immediate priority necessary to plugging security holes and buttressing consumer confidence.


Fully functional Windows 3.1 in WebVR (happy 25th bday!)


http://bit.ly/2pVpSOW

Don’t get bit by zombie cloud data


The internet never forgets, which means data that should have been deleted doesn’t always stay deleted. Call it “zombie data,” and unless your organization has a complete understanding of how your cloud providers handle file deletion requests, it can come back to haunt you.

Ever since the PC revolution, the concept of data deletion has been a bit misunderstood. After all, dragging a file to the Recycle Bin simply removed the pointer to the file, freeing up disk space to write new data. Until then, the original data remained on the disk, rediscoverable using readily accessible data recovery tools. Even when new data was written to that disk space, parts of the file often lingered, and the original file could be reconstructed from the fragments.
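
To make that concrete, here is a minimal sketch of the overwrite-before-delete idea for a local file. It is illustrative only: SSD wear levelling, filesystem journaling and cloud replication can all preserve copies that this code never touches, which is exactly why deletion in the cloud is a different problem.

    import os
    import secrets

    def overwrite_and_delete(path, passes=1):
        """Overwrite a file's contents with random bytes before unlinking it.

        Illustrative only: SSDs, journaling filesystems and cloud storage
        can all retain older copies of the data elsewhere.
        """
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(secrets.token_bytes(size))
                f.flush()
                os.fsync(f.fileno())  # push the overwrite down to the device
        os.remove(path)  # only now drop the directory entry

    # Example:
    # overwrite_and_delete("customer-export.csv")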

Desktop—and mobile—users still believe that deleting a file means the file is permanently erased, but that’s not always the case. That perception gap is even more problematic when it comes to data in the cloud.

Cloud service providers have to juggle retention rules, backup policies, and user preferences to make sure that when a user deletes a file in the cloud, it actually gets removed from all servers. If your organization is storing or considering storing data in the cloud, you must research your service provider’s data deletion policy to determine whether it’s sufficient for your needs. Otherwise, you’ll be on the hook if a data breach exposes your files to a third party or stuck in a regulatory nightmare because data wasn’t disposed of properly.

With the European Union General Data Protection Regulation expected to go into effect May 2018, any company doing business in Europe or with European citizens will have to make sure they comply with rules for removing personal data from their systems—including the cloud—or face hefty fines.

Data deletion challenges in the cloud

Deleting data in the cloud differs vastly from deleting data on a PC or smartphone. The cloud’s redundancy and availability model ensures there are multiple copies of any given file at any given time, and each must be removed for the file to be truly deleted from the cloud. When a user deletes a file from a cloud account, the expectation is that all these copies are gone, but that really isn’t the case.

Consider the following scenario: A user with a cloud storage account accesses files from her laptop, smartphone, and tablet. The files are stored locally on her laptop, and every change is automatically synced to the cloud copy so that all her other devices can access the most up-to-date version of the file. Depending on the cloud service, previous file versions may also be stored. Since the provider wants to make sure the files are always available for all devices at all times, copies of the file live across different servers in multiple datacenters. Each of those servers is backed up regularly in case of a disaster. That single file now has many copies.

“When a user ‘deletes’ a file [in the cloud], there could be copies of the actual data in many places,” says Richard Stiennon, chief strategy officer of Blancco Technology Group.

Deleting locally and in the user account simply takes care of the most visible version of the file. In most cases, the service marks the file as deleted and removes it from view but leaves it on the servers. If the user changes his or her mind, the service removes the deletion mark on the file, and it’s visible in the account again.

In some cases, providers adopt a 30-day retention policy (Gmail has a 60-day policy), where the file may no longer appear in the user’s account but stay on servers until the period is up. Then the file and all its copies are automatically purged. Others offer users a permanent-delete option, similar to emptying the Recycle Bin on Windows.

Service providers make mistakes. In February, forensics firm Elcomsoft found copies of Safari browser history still on iCloud, even after users had deleted the records. The company’s analysts found that when the user deleted their browsing history, iCloud moved the data to a format invisible to the user instead of actually removing the data from the servers. Earlier, in January, Dropbox users were surprised to find files that had been deleted years ago reappearing in their accounts. A bug had prevented files from being permanently deleted from Dropbox servers, and when engineers tried to fix the bug, they inadvertently restored the files.

The impact for these incidents was limited—in Dropbox’s case, the users saw only their files, not other people’s deleted files—but they still highlight how data deletion mistakes can make organizations nervous.

There are also cases in which the user’s concept of deletion doesn’t match the cloud provider’s in practice. It took Facebook more than three years to remove from public view photographs that a user had deleted back in 2009; even then, there was no assurance that the photographs weren’t still lurking in secondary backups or cloud snapshots. There are stories of users who have removed their social media accounts entirely, only to find that the photos they had shared remain accessible to others.

Bottom line: between backups, data redundancy, and data retention policies, it’s unwise to assume that data is ever completely removed from the cloud.

What deleting data from the cloud looks like

Stiennon declined to speculate on how specific cloud companies handle deleting files from archives but said that providers typically store data backups and disaster recovery files in the cloud and not as offsite tape backups. In those situations, when a file is deleted from the user’s account, the pointers to the file in the backup get removed, but the actual files remain in that blob. While that may be sufficient in most cases, if that archive ever gets stolen, the thief would be able to forensically retrieve the supposedly deleted contents.

“We know that basic deletion only removes pointers to the data, not the data itself, and leaves data recoverable and vulnerable to a data breach,” says Stiennon.

Some service providers wipe disks, Stiennon says. Typically in those situations, when the user sends a deletion command, the marked files are moved to a separate disk. The provider relies on normal day-to-day operations to overwrite the original disk space. Considering there are thousands of transactions per day, that’s a reasonable assumption. Once the junk disk is full or the retention time period has elapsed, the provider can reformat and degauss the disk to ensure the files are truly erased.

Most modern cloud providers encrypt data stored on their servers. While some ahead-of-the-game providers encrypt data with the user’s private keys, most go with their own keys, frequently the same one to encrypt data for all users. In those cases, the provider might remove the encryption key and not even bother with actually erasing the files, but that approach doesn’t work so well when the user is trying to delete a single file.
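
That key-destruction approach is often called crypto-shredding. Here is a minimal sketch of the idea, assuming one key per object (the in-memory dictionaries stand in for a blob store and a key vault); as noted above, providers that share one key across many users can’t use this trick to delete a single file:

    # pip install cryptography
    # Sketch of crypto-shredding: encrypt each object under its own key and
    # "delete" it by destroying the key, leaving only unreadable ciphertext.
    from cryptography.fernet import Fernet

    blob_store = {}   # stand-in for object storage (name -> ciphertext)
    key_store = {}    # stand-in for a key-management service (name -> key)

    def put_object(name, data):
        key = Fernet.generate_key()            # one key per object
        blob_store[name] = Fernet(key).encrypt(data)
        key_store[name] = key

    def get_object(name):
        return Fernet(key_store[name]).decrypt(blob_store[name])

    def crypto_shred(name):
        # Destroying the key renders every replica and backup of the
        # ciphertext unreadable, even if the blobs themselves linger.
        del key_store[name]

    put_object("report.pdf", b"quarterly numbers")
    crypto_shred("report.pdf")
    # get_object("report.pdf") now fails: the key is gone, the ciphertext is junk.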

Here’s another reason to be paranoid in the likely event that not every copy of a file gets scrubbed from the cloud: There are forensics tools capable of looking into cloud services and recovering deleted information. Elcomsoft used such a tool on iCloud to find the deleted browser history, for example. Knowing that copies of deleted files exist somewhere in the cloud, the question becomes: How safe are these orphaned copies from government investigators and other snoops?

The bits left behind

Research has shown that companies struggle to properly dispose of disks and the data stored on them. In one Blancco Technology Group study, engineers purchased more than 200 drives from third-party sellers and found that personal and corporate data could still be recovered, despite previous attempts to delete it. A separate Blancco Technology Group survey found that one-third of IT teams reformat SSDs before disposing of them but don’t verify that all the information has been removed.

“If you do not overwrite the data on the media, then test to see if it has been destroyed, you cannot be certain the data is truly gone,” Stiennon says.

While there have always been concerns about removing specific files from the cloud, enterprise IT teams are only now beginning to think about broader data erasure requirements for cloud storage. Many compliance regimes specify data retention policies in years, ranging from seven years to as long as 25 years, which means early cloud adopters are starting to think about how to remove the data that, per policy, now have to be destroyed.

GDPR is also on the way, with its rules that companies must wipe personal data belonging to EU residents from all their systems once the reasons for having the data expire. Thus, enterprises have to make sure they can regularly and thoroughly remove user data. Failure to do so can result in fines of up to 4 percent of a company’s global annual revenue.

That’s incentive, right there, for enterprises to make sure they are in agreement with their service providers on how to delete data.

How to protect your organization from “zombie” cloud data

Given these issues, it’s imperative that you ask to see your service provider’s data policy to determine how unneeded data is removed and how your provider verifies that data removal is permanent. Your service-level agreement needs to specify when files are moved and how all copies of your data are removed. A cloud compliance audit can review your storage provider’s deletion policies and procedures, as well as the technology used to protect and securely dispose of the data.

Considering all the other details to worry about in the cloud, it’s easy to push concerns about data deletion aside, but if you can’t guarantee that data you store in the cloud is effectively destroyed when needed, your organization will be out of compliance. And if supposedly deleted data is stolen from the cloud—or your storage provider mistakenly exposes data that should have been already destroyed—your company will ultimately pay the price.

“It’s more of a false sense of security than anything else when the wrong data removal method is used,” Stiennon says. “It makes you think the data can never be accessed, but that’s just not true.”

This story, “Don’t get bit by zombie cloud data,” was originally published by InfoWorld.


UK driving tests to include sat nav skills from December


In the biggest shake-up of the standardised driving exam since the introduction of the theory test, UK drivers will be required to demonstrate that they can navigate using a sat nav. The Driver & Vehicle Standards Agency has confirmed that from December 4th, learners will be required to drive independently for 20 minutes — double the current length — with four out of every five candidates being asked to follow directions displayed on a navigational device.

The agency says that drivers won’t be required to bring their own sat nav, nor will they be tasked with setting it up. They’ll also be able to ask for clarification of the route if they’re not sure, and it won’t matter if the wrong route is taken, as long as it doesn’t put other road users at risk.

The DVSA noted back in June 2016 that the introduction of a technology element would likely improve safety, boost driver confidence and widen potential areas for practical tests. "Using a satnav goes some way to addressing concerns that inexperienced drivers are easily distracted, which is one of the main causes of crashes. We’re moving with technology and the technology that new drivers will be using," an agency spokesperson said in a statement.

Other changes to the test include the removal of the "reverse around a corner" and "turn-in-the-road" manoeuvres, which will be replaced with parallel parking, parking in a bay, and a stop-and-go test at the side of the road. Examiners will also ask drivers two vehicle safety "show me, tell me" questions. One will be asked before setting off, while the other will need to be answered while on the road.

Source: Gov.uk

Amazon Lex – Now Generally Available


During AWS re:Invent I showed you how you could use Amazon Lex to build conversational voice & text interfaces. At that time we launched Amazon Lex in preview form and invited developers to sign up for access. Powered by the same deep learning technologies that drive Amazon Alexa, Amazon Lex allows you to build web & mobile applications that support engaging, lifelike interactions.

Today I am happy to announce that we are making Amazon Lex generally available, and that you can start using it today! Here are some of the features that we added during the preview:

Slack Integration – You can now create Amazon Lex bots that respond to messages and events sent to a Slack channel. Click on the Channels tab of your bot, select Slack, and fill in the form to get a callback URL for use with Slack:

Follow the tutorial (Integrating an Amazon Lex Bot with Slack) to see how to do this yourself.

Twilio Integration – You can now create Amazon Lex bots that respond to SMS messages sent to a Twilio SMS number. Again, you simply click on Channels, select Twilio, and fill in the form:

To learn more, read Integrating an Amazon Lex Bot with Twilio SMS.

SDK Support – You can now use the AWS SDKs to build iOS, Android, Java, JavaScript, Python, .Net, Ruby, PHP, Go, and C++ bots that span mobile, web, desktop, and IoT platforms and interact using either text or speech. The SDKs also support the build process for bots; you can programmatically add sample utterances, create slots, add slot values, and so forth. You can also manage the entire build, test, and deployment process programmatically.
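
To give a feel for the runtime side of those SDKs, here is a minimal sketch of sending text to a published bot with boto3; the bot name, alias and user ID are placeholder values for your own bot:

    # pip install boto3
    # Minimal sketch of talking to a published Amazon Lex bot over text.
    # "OrderFlowers" / "prod" are placeholders for your own bot name and alias.
    import boto3

    lex = boto3.client("lex-runtime", region_name="us-east-1")

    response = lex.post_text(
        botName="OrderFlowers",
        botAlias="prod",
        userId="demo-user-42",          # any stable ID for the conversation
        inputText="I would like to order some roses",
    )

    print(response["dialogState"])      # e.g. ElicitSlot, Fulfilled
    print(response.get("message"))      # the bot's reply to show the user
    print(response.get("slots"))        # slot values captured so far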

Voice Input on Test Console – The Amazon Lex test console now supports voice input when used on the Chrome browser. Simply click on the microphone:

Utterance Monitoring – Amazon Lex now records utterances that were not recognized by your bot, otherwise known as missed utterances. You can review the list and add the relevant ones to your bot:

You can also watch the following CloudWatch metrics to get a better sense of how your users are interacting with your bot. Over time, as you add additional utterances and improve your bot in other ways, the metrics should be on the decline.

  • Text Missed Utterances (PostText)
  • Text Missed Utterances (PostContent)
  • Speech Missed Utterances

Easy Association of Slots with Utterances – You can now highlight text in the sample utterances in order to identify slots and add values to slot types:

Improved IAM Support – Amazon Lex permissions are now configured automatically from the console; you can now create bots without having to create your own policies.

Preview Response Cards – You can now view a preview of the response cards in the console:

To learn more, read about Using a Response Card.

Go For It
Pricing is based on the number of text and voice responses processed by your application; see the Amazon Lex Pricing page for more info.

I am really looking forward to seeing some awesome bots in action! Build something cool and let me know what you come up with.

Jeff;

 

readycloud (1.15.1292.517)


ReadyCLOUD gives you remote access over the Internet to a USB storage device that is connected to your router’s USB port.

Humans are (still) the weakest cybersecurity link


Humans remain the weak link in corporate data protection, but you might be surprised that it isn’t only rank-and-file employees duped by phishing scams who pose risks. Some companies are lulled into a false sense of cybersecurity by vendors. You read that right: some enterprises believe the shiny new technologies they’ve acquired will protect them from anything.

Just ask Theodore Kobus, leader of BakerHostetler’s Privacy and Data Protection team.


Theodore Kobus, BakerHostetler’s Privacy and Data Protection team.

While Kobus was conducting an educational workshop on endpoint monitoring, an employee of a large company mentioned a tool the company had deployed to watch over computing devices connected to the corporate network. Kobus told him the move was great because it would help shorten the time it takes to detect an incident. The employee pushed back and said, “No, it’s much more than that; it’s going to stop these attacks.”

Taken aback by the employee’s confidence in a single tool, Kobus explained the inherent dangers in believing cybersecurity technologies, no matter their speedy detection capabilities, are foolproof.


“We talked things through and they realized — because they weren’t really thinking at the time — that zero-day attacks are not going to be blocked by what they have in place and they need to understand what the tools are used for,” says Kobus, whose team has helped enterprises address 2,000 breaches in the past five years. “That’s a big problem that we’re seeing. Companies really need to focus on the key issues to help stop these attacks from happening in the first place.”

The anecdote underscores just how vulnerable companies are to attacks despite instituting proper protections, says Kobus, who explored the points in BakerHostetler’s 2017 Data Security Incident Response Report, which incorporated data from the 450 breaches his team worked on in 2016. Companies surveyed ranged from $100 million to $1 billion in revenues across health care, retail, hospitality, financial services, insurance and other sectors.

Phishing, human error and ransomware, oh my!

At 43 percent, phishing, hacking and malware incidents accounted for the largest share of incidents for the second year in a row, a 12 percentage-point jump from the firm’s 2015 incident response report. Thirty-two percent of incidents were initiated by human error, while 25 percent of attacks involved phishing and 23 percent were initiated via ransomware. Another 18 percent of compromises occurred due to lost or stolen devices, and three percent were attributed to internal theft.

Phishing is particularly difficult to stop, Kobus says, because digital natives — those who grew up accustomed to the rapid-fire response cadence of social media — are programmed to answer emails from their coworkers quickly. Accordingly, many fall prey to business email compromises that appear to come from their CEO, CFO or another peer but in reality include a malicious payload.

“Phishing scams are never going to go away,” Kobus says. “No matter what technology we put in place, no matter how much money we spend on protections for the organization, we still have people and people are fallible.” With the rise of such social engineering attacks, Kobus says it’s important for IT leaders to caution employees to slow down, stop and consider such emails and either walk down the hall or phone to ask a colleague if they sent the email.

Ransomware attacks – in which perpetrators introduce malware that prevents or limits you from accessing your system until a ransom is paid – have increased by 500 percent year-over-year, with BakerHostetler responding to 45 such incidents in 2016. Ransomware scenarios range from sophisticated parties that break into the network and then broadly deploy ransomware to hundreds of devices, to rookies who simply bought a ransomware kit. BakerHostetler saw several demands in excess of $25,000, almost all of which called for payment in Bitcoin.

But most companies took several days to create and fund a Bitcoin wallet to pay the perpetrator(s), says Kobus, who added that ransomware incidents will probably increase over the short term because companies have proven unable to manage, let alone prevent, them.

Cybersecurity programs need work

The report findings suggest enterprises have more work to do with regard to shoring up their cybersecurity practices. Kobus, whose team of 40 conducts 75 “table-top exercises” involving incident response with corporations each year, says that companies are better-served by going back to the basics, starting with proper training and planning of cyber defenses rather than rushing out to buy the shiniest new technology on the market.

Companies should, for example, teach their workforce what phishing scams look like and pepper employees with fake phishing emails to test readiness. Other basic security measures include implementing multifactor authentication to remotely access any part of the company’s network or data; creating a forensics plan to quickly initiate a cybersecurity investigation; building business continuity into the incident response plan to ensure systems remain stable; vetting the technical ability, reputation and financial solvency of vendors; deploying off-site or air-gapped back-up systems in the event of ransomware; and acquiring the appropriate cyber insurance policy.

There is no one-size-fits-all approach to cybersecurity readiness. It invariably requires an enterprise-wide approach tailored to the culture and industry of the company, accounting for regulatory requirements. And in the event of a breach, communication and transparency to consumers is paramount, Kobus says.

“It’s really about getting in there and helping them manage the breach,” says Kobus, adding that includes working with security forensics and corporate communications teams to craft the right messaging. “The goal is to communicate in a transparent, thoughtful and meaningful way. You want to be able to answer the basic questions the consumers want answered: What happened? How did it happen? What are you doing to protect me? What are you doing to stop this from happening in the future?”

This story, “Humans are (still) the weakest cybersecurity link,” was originally published by CIO.


You just sent an on-prem app to the cloud and your data centre has empty racks. What now?


On-premises data centres are expensive to build and operate, which is one reason public cloud is so attractive … albeit not so attractive that organisations will immediately evacuate on-premises data centres.

Indeed, it’s accepted that workloads will migrate over years and that hybrid clouds will become very common. But it’s clear that data centres built for pre-cloud requirements are unlikely to be full in future.

Which raises the question of what to do with empty space in a perfectly good data centre full of expensive electrical and cooling kit.

The Register‘s asked around and deduced you have a few options.

One is literally doing nothing other than unplugging kit that’s no longer in use. This isn’t a mad idea, because your on-premises data centre was designed to cool certain kit in certain places. Leaving decommissioned devices in place won’t disrupt those cooling concoctions. It can also save you the cost of securely destroying devices. Those devices will also be housed in a known secure environment.

Another idea advanced by Tom Anderson, Vertiv’s* ANZ datacentre infrastructure manager, is to optimise the room for its new configuration. This can be done by placing baffles on newly-empty racks so that cold air isn’t wasted, or by erecting temporary walls around remaining racks. In either case, you’ll create a smaller volume that needs less cooling. He also recommends throttling back cooling and UPSes because they’ll be more efficient under lighter workloads. Those running dual cooling units, he suggests, could run one at a time and switch between units.

An old data centre can also be useful as a disaster recovery site. Plenty of organisations outsource a DR site. Newly-freed space in one facility lets you run your own.

The world has an insatiable appetite for data centres, so renting out your spare space is another option. But a Gartner study titled “Renting Your Excess Data Center Capacity Delivers More Risk Than Reward” may deter you from exploring it. Analyst Bob Gill warns that most organisations just aren’t ready to become a hosting provider, because it’s a specialist business. It’s also a business in which clients have very high expectations that would-be-hosts have to learn in a hurry. He also worries about reputational risk flowing from news of a security breach.

Gill also notes that becoming a service provider means signing away data centre space for years, depriving you of an asset you could conceivably covet before contractual obligations expire.

Gill doesn’t rule out the idea, but says “the complexities and risks of offering commercial colocation will confine the majority of such offerings to educational and governmental agencies and vertical industry ecosystems”.

At this point of the story many of you may also be wondering if a partly-empty bit barn is a useful place to store booze. Sadly it’s not a good idea: wine is best stored at between 7C and 18C, but modern data centres run in the low twenties. Beer won’t be much fun at that temperature. Data centres also tend to be rather lighter than a good wine cellar.

Bit barns are also a poor place to store documents, because they tend to run a little wetter than the United States Library of Congress’ recommended 35 per cent relative humidity. Paper also prefers to be away from air currents, and data centres are full of them.

But a partly-empty data centre is a good place to store … computers. Which may seem stupidly obvious until you consider that some workloads just aren’t a good fit for the cloud. Vertiv’s Anderson suggested that video surveillance footage is so voluminous that moving it to the cloud is costly and slow, but that some spare racks could happily let you consolidate video storage.

Emerging workloads like AI and big data can also demand very hot and power-hungry servers. A dense, full, data centre may have struggled to house that kind of application. A data centre with some empty racks may be able to accommodate those workloads. That such applications tend to use large data sets and have high compute-affinity – they work best when storage and servers are physically close – makes them strong candidates for on-premises operations.

A final option is just to knock over the walls of a data centre, lay down some carpet and use the space for whatever purpose you fancy.

If you’ve found a good use for un-used data centre space, feel free to let me know or hit the comments. ®

Bootnote: A final treat, brought to El Reg‘s attention by a chap named John White: an IBM faux cop show from the golden age of server consolidation, of a standard to rank with vendors’ dire rock video pastiches. Enjoy. If you can.

Youtube Video

* Vertiv used to be Emerson Network Power

Microsoft adds new tools for Azure


Microsoft announced several new Azure tools and features focused on migrating data to the cloud and helping businesses work securely with other companies and customers:

  • The Cloud Migration Assessment analyzes servers and hardware, providing a report on the costs and benefits of moving data over to Azure
  • The Azure Hybrid Use Benefit allows customers moving data over to Azure to get a discount of up to 40 percent on Windows Server licenses and virtual machines. Read our previous MPC post about this offer here
  • The Azure Site Recovery tool helps those coming over from another cloud provider, such as Amazon Web Services or VMware, to move information and applications to Windows Server

 

Also announced were new tools for Azure Active Directory, an identity and access management system, focused on business-to-business collaboration and ways for companies to store customer information:

  • A business can now grant access to a vendor or partner to work on a cloud-based project or document using already existing credentials, such as an email address
  • In Europe, Azure users will now be able to securely store information from customers who sign into an app or service via social media

 

Learn more about how each of these tools can benefit you in the original GeekWire blog post. Other questions or feedback? Please post them here!

Amazon Lightsail vs DigitalOcean: Why Amazon needs to offer something better



In the first part of this series, Amazon Lightsail: how to set up your first instance, we introduced Amazon Lightsail, a low-cost virtual private server (VPS) platform from Amazon Web Services. As we saw in that post, it’s very simple to create a small infrastructure with Lightsail. With its launch, Amazon is trying to get a slice of the lucrative VPS market. The market has matured over the last few years and a number of players have a head start. Vendors like DigitalOcean or Linode have a large customer base.  These companies are continuously improving their VPS features and Amazon will need to catch up fairly soon if it wants to capture a good market share. Also, these VPS providers have some big clients on their books, such as Atlassian, Creative Commons, or RedHat to name a few. This high level of trust is the result of the continuous expansion of data centers, the reliability of service, and the value customers get for their investment. In this post we will take a look at Amazon Lightsail vs DigitalOcean and why Amazon needs to offer something better.
Competing against these niche players means that Amazon will not only have to quickly include some of the features users now take for granted, but it will also have to differentiate itself with extra features that others are still lacking. In fact, in our opinion, the competition is just getting started.

Amazon Lightsail vs. DigitalOcean for VPS

To understand how Amazon Lightsail stands against its established competitors in the market, we decided to compare it with DigitalOcean, a widely popular VPS provider, across a number of areas:

Data Center Locations

Amazon is always expanding its regions, which means that Lightsail would also be available in new regions over time.  This will give users in different parts of the world greater network proximity to their servers. At the time of this post, AWS has the following regions, in addition to another region in China:
AWS Regions
However, Lightsail is available only in the us-east-1 region.
Amazon Lightsail Region
DigitalOcean has data centers in the following countries:
DigitalOcean Data Center Regions

Instance Sizes and Types

Amazon EC2 has a large number of instance types, ranging from micro instances with less than a GB of memory to large, disk- or memory-optimized servers. Each of these instance types can be further enhanced with extra storage volumes with provisioned IOPS.
Lightsail has a much simpler instance model with only five types of servers.
Amazon Lightsail Instance Types
This is expected because its target audience is developers or start-ups who don’t want to spend a lot of time comparing a lot of price performance ratios.
DigitalOcean has nine instance types for its "standard" instances.
DigitalOcean Standard Instance Types
And a few more in its "high memory" category:
DigitalOcean High Memory Instance Types
As we can see, the high-end instances are suitable for large scale data processing and storage. This again proves that the company is targeting not only individuals or start-ups, but also corporate clients who are willing to pay extra money for their workloads.

Pricing

Referencing the images above, we can see that the pricing for Amazon Lightsail instances is very similar to that of DigitalOcean Droplets for the same server specs.

Base OS Images

As of March 2017, Amazon Lightsail comes bundled with only two operating systems: Ubuntu 16.04 LTS (Long Term Support) or Amazon Linux 2016.09.01:
Amazon Lightsail OS Images
DigitalOcean offers a number of open-source *nix-based operating system images, each with different versions.
DigitalOcean Base OS Images

Application Images

VPS providers also offer something called "application images." These are generic installations of applications bundled with a base operating system. With application images, users don’t need to install applications after creating a server, and this significantly saves time. Some popular application packs are LAMP stack, Gitlab, or Node.js, which are baked in with operating systems like CentOS or Ubuntu.
Amazon Lightsail currently has a limited but good collection of application images.
Amazon Lightsail Application Images
DigitalOcean has a larger collection of "One-click apps" too:
DigitalOcean One-click Apps

User Data Scripts

User data scripts are special pre-written code blocks that run when a VPS instance is created. A common use case for user data is automating the installation of applications, users or configuration files. For example, a server can be made to install a particular version of Java as it comes up. The developer would write a script to do this and put it in the user data section when creating the server. This saves time in two ways: when rolling out a number of instances, administrators don’t have to manually install applications or change configuration on each instance, and each instance gets a uniform installation, eliminating any chance of manual error. User data has been available for Amazon EC2 instances for a long time and is widely adopted for system automation.
Amazon Lightsail calls it "Launch Script" and DigitalOcean calls it "User data", but they are essentially the same.
Amazon Lightsail User Data
DigitalOcean User Data
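For the Lightsail side, a rough sketch of handing a launch script to a new instance through boto3; the blueprint ID, bundle ID and availability zone are example values worth checking against get_blueprints() and get_bundles():

    # pip install boto3
    # Sketch: create a Lightsail instance and hand it a launch script.
    # Blueprint/bundle IDs and the zone are illustrative values.
    import boto3

    lightsail = boto3.client("lightsail", region_name="us-east-1")

    launch_script = """#!/bin/bash
    apt-get update -y
    apt-get install -y nginx
    systemctl enable --now nginx
    """

    lightsail.create_instances(
        instanceNames=["web-1"],
        availabilityZone="us-east-1a",
        blueprintId="ubuntu_16_04",     # see get_blueprints() for valid IDs
        bundleId="nano_1_0",            # see get_bundles() for valid IDs
        userData=launch_script,         # Lightsail's "Launch Script"
    )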

SSH Access

Both Amazon Lightsail and DigitalOcean allow users SSH access from a web console. Most practical use cases, though, require SSH access from an OS shell prompt or a tool like PuTTY. Authentication can be done either with username and password or, preferably, with more secure SSH keys.
Amazon Lightsail allows SSH key access only, which is good for security. Users can create a new SSH key, upload their own public key or use an existing key when creating an instance.
Generating New AWS Lightsail SSH Key
Uploading Existing SSH Key to Amazon Lightsail
Managing SSH Keys from Amazon Lightsail Console
DigitalOcean offers both key-based and password-based authentication. The choice of SSH key is optional. If no SSH key is chosen or created, the user is sent an e-mail with a temporary password for the root account. Upon first login, the user needs to change that password. The image below shows how new SSH keys can be created in DigitalOcean.
Adding New SSH Key in DigitalOcean
Manage SSH Key in DigitalOcean
 
Note that unlike Lightsail, DigitalOcean does not offer a key generation facility.
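Once a key is in place, day-to-day access looks much the same on either provider from the client’s point of view. A minimal sketch with the paramiko library, where the host address, username and key path are placeholders (Lightsail’s Ubuntu images use the ubuntu user, while DigitalOcean images typically use root):

    # pip install paramiko
    # Sketch: connect to a freshly created VPS over SSH with a key, the way
    # you would from a shell prompt or PuTTY. Host, user and key path are
    # placeholders.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(
        hostname="203.0.113.10",          # the instance's public/static IP
        username="ubuntu",
        key_filename="/home/me/.ssh/lightsail-key.pem",
    )

    stdin, stdout, stderr = client.exec_command("uname -a && uptime")
    print(stdout.read().decode())
    client.close()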

Adding Extra Volumes

Sometimes the data on a server will outgrow its original capacity. When disk space runs out and data cannot be deleted or archived, extra disk space needs to be added. Typically this involves creating one or more additional disk volumes and attaching them to the instance.
For Amazon EC2, this is possible with Elastic Block Storage (EBS). Amazon Lightsail is yet to add this feature. DigitalOcean, on the other hand, has only recently added it for users.
Creating DigitalOcean Block Storage Volume
DigitalOcean volumes can be attached during instance creation as well, but that facility is available in only selected regions.

Resizing Instances

Adding extra storage is one way to expand a server. Sometimes the instance may need extra computing power too. This can be done by adding more CPU and RAM to the server. Although this is fairly simple in EC2, we could not find a way to resize a Lightsail instance once it was created.
Again DigitalOcean wins in this area. It allows users to up-size the instance either with CPU and RAM only or with CPU, RAM, and disk. The first option allows the instance to be downsized again.
DigitalOcean offers Instance Resize Option

Data Protection

VPS snapshots are like "point in time" copies of the server instance. This is necessary for protection against data loss, data corruption, or simply creating a separate image from an existing instance. Creating a snapshot for an existing instance is a simple process in Lightsail:
Amazon Lightsail Instance Snapshot
If the instance is deleted for some reason, it can be recovered from a snapshot, if one exists.
Amazon Lightsail Snapshot List
However, there is no simple way to automate the snapshot process. Of course, this can be automated with a bit of scripting and by scheduling a job from another server, but we could not find a native option for it.
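As a rough sketch of that scripting approach, the job below takes a dated snapshot of a named Lightsail instance via boto3; run it from cron or any scheduler on another machine (the instance name is a placeholder):

    # pip install boto3
    # Sketch: take a dated snapshot of a Lightsail instance on a schedule.
    import datetime
    import boto3

    lightsail = boto3.client("lightsail", region_name="us-east-1")

    def snapshot_instance(instance_name):
        stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M")
        snapshot_name = f"{instance_name}-auto-{stamp}"
        lightsail.create_instance_snapshot(
            instanceSnapshotName=snapshot_name,
            instanceName=instance_name,
        )
        return snapshot_name

    if __name__ == "__main__":
        print(snapshot_instance("web-1"))   # placeholder instance name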
DigitalOcean also offers snapshots:
DigitalOcean Snapshots
However, there is also a scheduled backup option which can snapshot an instance once every week.
DigitalOcean Scheduled Backups

Performance Monitoring and Alerting

Performance monitor dashboards are present in both Amazon Lightsail and DigitalOcean.
With Lightsail, the performance counters are similar to what’s available for EC2 in CloudWatch: CPUUtilization, NetworkIn, NetworkOut, StatusCheckFailed, StatusCheckFailed_Instance and StatusCheckFailed_System. The metrics can be viewed over a period of two weeks. However, unlike CloudWatch for EC2, it’s not possible to create an alert on a metric.
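Those counters can still be pulled programmatically, so you could at least build your own alerting around them. A hedged sketch with boto3, assuming an instance named web-1:

    # pip install boto3
    # Sketch: pull the last hour of CPUUtilization for a Lightsail instance.
    # The instance name is a placeholder.
    import datetime
    import boto3

    lightsail = boto3.client("lightsail", region_name="us-east-1")

    now = datetime.datetime.utcnow()
    data = lightsail.get_instance_metric_data(
        instanceName="web-1",
        metricName="CPUUtilization",
        period=300,                      # 5-minute buckets
        startTime=now - datetime.timedelta(hours=1),
        endTime=now,
        unit="Percent",
        statistics=["Average", "Maximum"],
    )

    for point in sorted(data["metricData"], key=lambda p: p["timestamp"]):
        print(point["timestamp"], round(point["average"], 1), "%")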
DigitalOcean has a graph option for its Droplets, which shows the Droplet’s public network usage, CPU usage, and disk I/O rate. More recently it has also added a feature where users can opt to capture more metrics. For existing Droplets, users can install a script, and for new Droplets, they can enable a monitoring option. With the monitoring agent installed, three more metrics are added: memory, disk usage and top processes sorted by CPU or memory.
DigitalOcean Droplet Metrics
Furthermore, it’s also possible to create alerts based on any of these metrics. The alerts can be sent to an e-mail address or a Slack channel.
DigitalOcean Monitoring Alert

Networking Features

Static IP

Amazon Lightsail and DigitalOcean both allow users to attach "static IPs" to their server instances. A static IP is just like a public IP in that it’s accessible from the Internet. However, as the name suggests, static IPs don’t change with instance reboots. Without a static IP, an instance will get a new public IP every time it’s rebooted; when a static IP is attached, that IP remains assigned to the instance through system reboots. This is useful for internet-facing applications like web or proxy servers.
In Amazon Lightsail, a static IP address can be assigned to an instance or kept as a standalone resource. Also, the IP can be re-assigned to another instance when necessary.
Creating AWS Lightsail Static IP
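The same can be done from the AWS CLI; the resource names below are hypothetical:

# Allocate a static IP as a standalone resource, then attach it to an instance
aws lightsail allocate-static-ip --static-ip-name my-static-ip
aws lightsail attach-static-ip --static-ip-name my-static-ip --instance-name my-web-server
# Later, the IP can be detached and re-attached to a different instance
aws lightsail detach-static-ip --static-ip-name my-static-ip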
DigitalOcean has a slightly different approach. Here, the public IP assigned to the instance doesn’t change even after the system goes through a power cycle (hard reboot) or a power off / power on. It also offers something called a "Floating IP", which is essentially the same as a static IP. A floating IP can be assigned to an instance and, if necessary, detached and reattached to another instance. This allows Internet traffic to be redirected to a different machine when necessary. The image below shows how floating IPs are managed.

Assigning DigitalOcean Floating IPs to Instances

Private Networking

An Amazon Lightsail instance comes with a private IP address by default.
Amazon Lightsail Instance Public & Private IP
For DigitalOcean, this has to be enabled when the Droplet is created.

Enabling DigitalOcean Droplet Networking Features

IPv6

We could not find any option for enabling IPv6 for Lightsail instances. As shown above, this is possible with DigitalOcean instances.

DNS

Amazon Lightsail enables users to create multiple DNS zones (up to three DNS zones are free). This is a great feature and very simple to set up. Users who have already registered domain names can create DNS zones for multiple sub-domains and map them to static IP addresses. Those static IPs can, in turn, be assigned to Lightsail instances. The image below shows how we are creating a DNS zone for our test website.
Creating a DNS Zone in Amazon Lightsail
Lightsail provides its own DNS name servers for users to configure their domain records. Users can also register their domain names with Amazon Route 53 without having to use another third-party domain name registrar.
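As a hedged sketch, the same zone and record setup can also be scripted with the AWS CLI; the domain name and target IP here are hypothetical:

# Create a DNS zone for a domain registered elsewhere
aws lightsail create-domain --domain-name example.com

# Add an A record pointing a sub-domain at a Lightsail static IP
aws lightsail create-domain-entry \
    --domain-name example.com \
    --domain-entry name=www.example.com,type=A,target=203.0.113.10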
A similar facility exists in DigitalOcean, except that it also allows users to create reverse DNS lookups with PTR records.
Adding Domain Names in DigitalOcean
Creating PTR Records in DigitalOcean

Firewall Rules

This is an area where Amazon Lightsail fares better than DigitalOcean. With EC2 instances, AWS offers a firewall feature called "security groups". Security groups can control the flow of traffic for certain ports from one or more IP addresses or ranges of addresses. In Lightsail, the security group feature is present in a rudimentary form.
Amazon Lightsail Firewall Rules
There is no finer-grained control, though: there is no way to restrict traffic to specific source IP addresses or ranges.
DigitalOcean Droplets do not have this feature. Any firewall rules have to be configured from within the instance itself.
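As an illustration of that in-instance approach, ufw on an Ubuntu Droplet can provide the per-source-IP restrictions that Lightsail’s console rules lack; the address range below is a hypothetical example:

# Allow SSH only from a trusted office range, allow HTTPS from anywhere
sudo ufw allow from 203.0.113.0/24 to any port 22 proto tcp
sudo ufw allow 443/tcp
sudo ufw enable
sudo ufw status verbose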

Other Security Features

Both the Amazon Web Services and DigitalOcean consoles offer two-factor authentication. With Amazon, it’s possible to enable CloudTrail logs, which can track every API action run against resources like EC2. Lightsail has a rudimentary form of this audit trail ("Instance history"), and so does DigitalOcean ("Security history").

Access to Outside Service Endpoints

This is an area where Amazon Lightsail clearly wins: Lightsail instances can access existing AWS resources and services once VPC peering for Lightsail is enabled. Lightsail instances run within a "shadow" VPC that is not visible from the regular VPC screen of the AWS console. Unless two VPCs are peered, they are separate networks, and resources in one VPC cannot see resources in the other. Peering for the shadow VPC that Lightsail uses is configured from the advanced features screen.
Amazon Lightsail VPC Peering
With VPC peering enabled, Lightsail’s capabilities can be extended beyond a simple computing platform, something DigitalOcean cannot provide.
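For completeness, a quick sketch of enabling and verifying the peering from the AWS CLI:

# Peer the hidden Lightsail VPC with the default VPC in the same region
aws lightsail peer-vpc
# Verify the peering status
aws lightsail is-vpc-peered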

Load Balancers

Load balancers are a great way to distribute incoming network traffic across more than one computing node. This can make the infrastructure more resilient against failures and spread read and write traffic evenly across servers. When application traffic reaches a load balancer, it is sent to a node in the group either in round-robin fashion or based on a specific algorithm. Any node that stops responding to the load balancer will be marked as "Out Of Service" after a number of attempts.
Although it would help developers test their applications for real-life use cases, Amazon Lightsail is yet to provide this feature.
DigitalOcean has recently added it to their offering, but it’s not cheap: it costs $20 per month.
Adding DigitalOcean Load Balancer
DigitalOcean Load Balancer Forwarding Rules
DigitalOcean Load Balancer Advanced Settings
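For a rough idea of how this looks programmatically, the DigitalOcean v2 API accepts a JSON payload along these lines; the API token, region, and Droplet IDs are hypothetical, and field names should be checked against the current API documentation:

curl -X POST "https://api.digitalocean.com/v2/load_balancers" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DO_API_TOKEN" \
  -d '{
        "name": "web-lb",
        "region": "nyc3",
        "forwarding_rules": [
          {"entry_protocol": "http", "entry_port": 80,
           "target_protocol": "http", "target_port": 80}
        ],
        "droplet_ids": [12345, 67890]
      }'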

Billing Alert

AWS Billing Alerts are a great way for customers to keep track of their cloud infrastructure spending. With billing alerts, AWS sends an automatic notification to a customer when the monthly AWS spend goes over a set limit; typically the alert is set up to send an e-mail. Billing alerts are a feature of CloudWatch metrics and can be used for Lightsail usage:
AWS Billing Alert for Lightsail
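As a sketch, the same alarm can be created from the CLI against the EstimatedCharges billing metric; the threshold and SNS topic ARN below are hypothetical, and billing metrics are only published in the us-east-1 region:

aws cloudwatch put-metric-alarm \
    --region us-east-1 \
    --alarm-name monthly-bill-over-50-usd \
    --namespace AWS/Billing \
    --metric-name EstimatedCharges \
    --dimensions Name=Currency,Value=USD \
    --statistic Maximum --period 21600 --evaluation-periods 1 \
    --threshold 50 --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts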
DigitalOcean has a similar feature for billing alerts.
DigitalOcean Billing Alert
Unlike AWS, though, DigitalOcean sends the notification to an e-mail address only. With AWS, the alert can be sent to an SNS topic, which can have a number of subscribing endpoints such as e-mail, SMS, application, or HTTP.

API

Both Lightsail and DigitalOcean have extensive API support for programmatic access and administration of their infrastructure, and both vendors make the documentation easily accessible from their public websites.
Lightsail APIs are easily accessible from the AWS command line interface (CLI). There are also software development kits (SDKs) available for a number of programming languages, including Java, Python, Ruby, PHP, C#, Go, JavaScript, Node.js, and C++.
DigitalOcean’s APIs are fairly extensive as well, and their documentation shows how they can be invoked with HTTP payloads. Language support includes Ruby and Go. Unlike AWS, DigitalOcean does not come with a CLI that can be automated with bash or PowerShell.
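To give a flavour of both, a Lightsail call through the AWS CLI and a raw DigitalOcean API call might look like this; the API token is a hypothetical environment variable:

# Lightsail via the AWS CLI
aws lightsail get-instances

# DigitalOcean via its HTTP API
curl -X GET "https://api.digitalocean.com/v2/droplets" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DO_API_TOKEN"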
Third-party tools like Terraform from HashiCorp also have a limited number of resources available for both the Lightsail and DigitalOcean providers.

Documentation and Support

Online documentation for both Amazon Lightsail and DigitalOcean is easy to follow and can help a user get up and running in no time. Technical support requests for Lightsail can be raised from the AWS console; a similar link exists for DigitalOcean users on its website.
DigitalOcean also offers a vast array of very useful tutorials. These tutorials can help users set up and run many different workloads on the DigitalOcean platform.

Conclusion

From our test comparison, we found DigitalOcean leading Amazon Lightsail in quite a few important areas. So does this mean developers and start-ups should shun Lightsail for now? We would say no. It depends on individual use cases and on whether your organization is already an Amazon customer. Lightsail’s integration with other AWS services gives it an obvious advantage. Also, since the pricing for comparable instances is very similar, you may want to work with Lightsail unless your application requires some of the features it’s lacking. Typical use cases include:

  • Small, disposable servers for Proof of Concept (PoC) of larger projects
  • Development and test servers for small teams
  • Departmental servers for non-IT business units that don’t want to spend money on high-end resources
  • Personal-use servers for storing video, audio, and other digital assets

Also, with AWS making a move into the VPS market, it’s only a matter of time before other players like Microsoft or Google include it in their arsenals. As the competition gains momentum, more advanced features are sure to follow. Needless to say, established VPS providers won’t be sitting idle either; they will be adding new features to keep their competitive advantage. With this in mind, we think Amazon needs to add some extra niche capabilities to its VPS platform to make it a more viable competitor.

Cloud Speech API is now generally available

The content below is taken from the original (Cloud Speech API is now generally available), to continue reading please visit the site. Remember to respect the Author & Copyright.

By Dan Aharon, Product Manager

Last summer, we launched an open beta for Cloud Speech API, our Automatic Speech Recognition (ASR) service. Since then, we’ve had thousands of customers help us improve the quality of service, and we’re proud to announce that as of today Cloud Speech API is now generally available.

Cloud Speech API is built on the core technology that powers speech recognition for other Google products (e.g., Google Search, Google Now, Google Assistant), but has been adapted to better fit the needs of Google Cloud customers. Cloud Speech API is one of several pre-trained machine-learning models available for common tasks like video analysis, image analysis, text analysis and dynamic translation.

With great feedback from customers and partners, we’re happy to share that we have new features and performance improvements to announce:

  • Improved transcription accuracy for long-form audio
  • Faster processing, typically 3x faster than the prior version for batch scenarios
  • Expanded file format support, now including WAV, Opus and Speex

Among early adopters of Cloud Speech API, we have seen two main use cases emerge: speech as a control method for applications and devices like voice search, voice commands and Interactive Voice Response (IVR); and also in speech analytics. Speech analytics opens up a hugely interesting set of capabilities around difficult problems e.g., real-time insights from call centers.
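For readers who want to try it, a recognition request against the GA REST endpoint looks roughly like this; the Cloud Storage URI is a hypothetical example:

curl -s -X POST "https://speech.googleapis.com/v1/speech:recognize" \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
        "config": {"encoding": "LINEAR16", "sampleRateHertz": 16000, "languageCode": "en-US"},
        "audio": {"uri": "gs://my-bucket/my-audio.wav"}
      }'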

Houston, Texas-based InteractiveTel is using Cloud Speech API in solutions that track, monitor, and report on dealer-customer interactions by telephone.

“Google Cloud Speech API performs highly accurate speech-to-text transcription in near-real-time. The higher accuracy rates mean we can help dealers get the most out of phone interactions with their customers and increase sales.” — Gary Graves, CTO and Co-Founder, InterActiveTel

Saitama, Japan-based Clarion uses Cloud Speech API to power its in-car navigation and entertainment systems.

“Clarion is a world-leader in safe and smart technology. That’s why we work with Google. With high-quality speech recognition across more than 80 languages, the Cloud Speech API combined with the Google Places API helps our drivers get to their destinations safely.” — Hirohisa Miyazawa, Senior Manager/Chief Manager, Smart Cockpit Strategy Office, Clarion Co., Ltd.

Cloud Speech API is available today. Please click here to learn more.

Azure File Storage on-premises access for Ubuntu 17.04

The content below is taken from the original (Azure File Storage on-premises access for Ubuntu 17.04), to continue reading please visit the site. Remember to respect the Author & Copyright.

Azure File Storage is a service that offers shared file storage for any OS that implements the supported SMB protocol. Since GA we have supported both Windows and Linux; however, on-premises access was only available to Windows. While Windows customers widely use this capability, we have received feedback that Linux customers want to do the same, and with this capability Linux access extends beyond the storage account’s region to cross-region as well as on-premises scenarios. Today we are happy to announce Azure File Storage on-premises access from all regions for our first Linux distribution, Ubuntu 17.04. This support works right out of the box and no extra setup is needed.

How to Access Azure File Share from On-Prem Ubuntu 17.04

The steps to access an Azure File share from an on-premises Ubuntu 17.04 machine or from an Azure Linux VM are the same.

Step 1: Check whether TCP port 445 is accessible through your firewall. You can test whether the port is open with the following command:

nmap -p 445 <azure storage account>.file.core.windows.net


Step 2: Copy the command from the Azure portal, or replace <storage account name>, <file share name>, <mountpoint>, and <storage account key> in the mount command below. Learn more about mounting in how to use Azure Files on Linux.

sudo mount -t cifs //<storage account name>.file.core.windows.net/<file share name> <mountpoint> -o vers=3.0,username=<storage account name>,password=<storage account key>,dir_mode=0777,file_mode=0777,sec=ntlmssp

Step 3: Once mounted, you can perform file operations on the share.
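For example, assuming the share was mounted at /mnt/azurefiles (a hypothetical mount point), ordinary file operations work as on any local filesystem:

df -h /mnt/azurefiles                 # confirm the share is mounted and check capacity
echo "hello from on-prem" > /mnt/azurefiles/test.txt
ls -l /mnt/azurefiles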


Other Linux Distributions

Backporting of this enhancement to Ubuntu 16.04 and 16.10 is in progress and can be tracked here: CIFS: Enable encryption for SMB3. RHEL support is also in progress; full support will be released with the next release of RHEL.

Summary and Next Steps

We are excited to see the tremendous adoption of Azure File Storage. You can try Azure File Storage and get started in under 5 minutes. Further information and detailed documentation links are provided below.

We will continue to enhance the Azure File Storage based on your feedback. If you have any comments, requests, or issues, you can use the following channels to reach out to us:

Indian bar legally evades closure by adding 250-meter long maze entrance

The content below is taken from the original (Indian bar legally evades closure by adding 250-meter long maze entrance), to continue reading please visit the site. Remember to respect the Author & Copyright.


Julia Ingalls

Apr 18, ’17 12:38 PM EST

The maze that legally lengthens the walking distance from the highway. Image: BCCL via India Times