IoT devices or humans?

The content below is taken from the original (IoT devices or humans?), to continue reading please visit the site. Remember to respect the Author & Copyright.

A Swedish rail line can now collect fares by scanning its customers for embedded biometric chips. The primary benefit is the elimination of a physical ticket — plus it’s harder to lose. It sounds futuristic, but my dogs have been sporting embedded chips for over a decade.

If you think about it, physical tickets are kind of silly. They are a surrogate for the person. The practice of scanning a ticket, instead of a person, was likely established when there just weren’t many viable alternatives. Technology now offers a more direct approach.

There are lots of examples of companies attempting to track their customers, be it with frequent-customer cards, credit card numbers, tickets, or other surrogates. But ultimately, wouldn’t it be better to just track the actual human customer? Not everyone is willing to embed chips in their skin, but there are other options.

One of the better examples of equipping customers with trackable technology comes from the Disney theme parks. Disney’s MagicBands, introduced in 2013, are RFID bracelets provided to park visitors. Customers wear them because the bands offer several benefits, including shorter lines, a documented park history, and freedom from carrying park tickets, hotel room keys, and credit cards.

Because RFID can be used for location tracking, the bands contribute to magical experiences. For example, you can order food, and then, like magic, it is delivered to wherever you choose to sit.

Disney likes the MagicBands because they generate data about customer visits – where they go, how much they spend, and how long they wait in lines. The bands also generate incremental revenue opportunities. For example, park photographers can use them to identify faces and sell more photos.

The customer-as-IoT journey is just beginning. Some of the designers behind Disney’s MagicBands are now leading the next big evolution at Carnival Cruise Lines. This November, the Carnival Regal Princess will become the first ship to offer “Ocean Medallion Class,” built around the Ocean Medallion wearable, and the first step in an aggressive vision that could extend across the entire fleet.

Carnival holds more than 40% of global cruise market share. In addition to its own flagship brand, the company has nine other cruise lines including Princess and Holland America. The Ocean Medallion program will certainly change cruising and has the potential to cause waves across the entire hospitality industry.

The Ocean Medallion program has three major components: the Ocean Medallion, a ship outfitted with sensors and displays, and the OCEAN application. The medallion is a small waterproof wearable that passengers receive before their trip even begins. It can be worn as a pendant, a pin, or a bracelet. The medallion communicates with newly installed receivers throughout the ship. OCEAN, or One Cruise Experience Access Network, is the back-end application that powers the Ocean Medallion experience.

Each medallion is outfitted with Bluetooth and NFC radios. Carnival is installing about 16 sensors for every passenger to ensure complete coverage throughout its ships. On the Carnival Regal Princess, that meant about 7,000 sensors and 75 miles of new cable. The entire retrofit includes new access points, hundreds of public wall-mounted touchscreens, and new Bluetooth-enabled door locks.

The Ocean Medallion program aims both to increase revenue and to decrease costs. On the revenue side, it creates additional, frictionless revenue opportunities. On the cost side, it allows staff to work more efficiently. Most significantly, though, the program is designed to create personalized experiences intended to increase satisfaction and drive repeat business.

Cruise ships are sometimes described as giant floating hotels, but that’s a dramatic oversimplification. Beyond the fact that they move, there are some major operational differences. For example, the check-in process is much more intense on ships, since thousands of guests check in within just a few hours. The medallion simplifies this immensely. Since guests receive their medallion before their cruise, check-in is just a matter of boarding the ship and unlocking a stateroom.

Cruise ships have been steadily growing in size, which means more and more diverse programs and activities. The menu of choices is becoming overwhelming. Carnival intends to use OCEAN to match its customers to the right activities. OCEAN notes what you do, where you eat, what you buy, and which activities interest you. It continuously updates its understanding of each passenger to improve its recommendations. It is conceptually similar to the recommendation engines implemented by Netflix and Amazon.

Passengers interact with OCEAN via touchscreens throughout the ship. There is no need to log in; the system knows when you are standing in front of it. These screens provide access to all kinds of up-to-date, personalized information, including reservations and the location of those traveling with you.

The Ocean Medallion program is the cutting edge of wearable technology. Carnival and its partners Nytec and Level 11 created new systems and hardware to make it all work.

In subsequent posts, I will profile some of these stories – such as why they went with Bluetooth and NFC in the wearable, how they turned Bluetooth upside down, and some of the new types of services that become possible when so much customer information is known.

Many of these concepts will eventually appear in traditional hotels. Today, most frequent-guest programs offer general incentives but do little to personalize the experience. For example, Marriott hotels allow me to enter my Netflix credentials for in-room entertainment. Their systems erase them at check-out, but why not load them automatically at check-in?

Tracking customers will change many industries, and it won’t all happen through wearables. Video recognition and analytics use cases are also emerging. JetBlue has begun using facial recognition in lieu of boarding passes.

What’s particularly noteworthy about the Ocean Medallion program is that it is poised to revolutionize an old, disconnected industry. The IoT opportunity to transform industries knows few restrictions, and it will change products and services all around us.


Google Glass is apparently still around — and just got its first update in nearly three years

The content below is taken from the original (Google Glass is apparently still around — and just got its first update in nearly three years), to continue reading please visit the site. Remember to respect the Author & Copyright.

Dust off your Google Glass, those of you who still have one: the $1,500 face computer is back in the spotlight today with a few updates.

In its first update since September 2014, Google Glass has received a “MyGlass” companion app update, some bug fixes, and Bluetooth support. That means owners of the new “XE23” version can now hook up mice, keyboards, and other Bluetooth-enabled devices to their Glass.

The app update rolled out yesterday and, in an even bigger surprise, the firmware update for Glass came out today.

So, Glass is alive? Well, yes, but it never really died. Despite seeming to go the way of the Dodo (you can’t buy it anymore, and Google shut down the website in 2015), it never really left us; it just “graduated” from Google X after failing to capture consumer attention. Google then quietly moved it into the enterprise. But, apparently, someone at Google is still working on the dork-inducing consumer version.

We don’t know why Google chose to release these two updates. It’s odd for an update to pop up after nearly three years, especially one that changes so little from the old version. But it shows Google has not completely forgotten about its optical head-mounted wearable.

Russian freight service Deliver closes seed round of $8M, with European plans

The content below is taken from the original (Russian freight service Deliver closes seed round of $8M, with European plans), to continue reading please visit the site. Remember to respect the Author & Copyright.

Russia-based online freight service Deliver (formerly iCanDeliver) has closed a seed round of $8 million. The startup automates the process of ordering freight transport and makes things more efficient by finding the closest sender.

Inventure Partners (which has invested in Gett, Busfor, Amwell, Chronext and Netology) invested $3 million, joining A&NN Group and Singapore-based Amereus Group as backers.

Deliver’s competitors include Palletter (in Estonia), which matches shipments to nearby trucks in real time, and Convargo (in France), which has raised a $1.69M round and also connects shippers with truckers.

Deliver is doubling down on its largest base in Russia, where it has over 59,000 confirmed drivers registered so far. Deliver now hopes to secure up to 15% of the Russian freight market, with plans to begin European expansion and promote more international transportation. European haulage networks were worth $96 billion in 2015.

Up to 23% of trucks run “empty kilometers” because of ineffective transport planning, which is why these startups have sprung up. Deliver can calculate shipping prices in seconds, handles several thousand orders a month, and claims average monthly growth of 30%. It finished beta testing in January of this year and has entered the active sales and regional development phase.

Founder Danil Rudakov says: “The efficiency of the logistics market today is extremely low in Russia and in Europe. Deliver guarantees shippers full safety of dispatch, transparent pricing, and high quality transportation.”

Decoding NRSC-5 with SDR to Get In Your Car

The content below is taken from the original (Decoding NRSC-5 with SDR to Get In Your Car), to continue reading please visit the site. Remember to respect the Author & Copyright.

NRSC-5 is a high-definition radio standard, used primarily in the United States. It allows digital and analog transmissions to share the original FM bandwidth allocations. Theori, a cybersecurity research startup in the US, set out to build a receiver that can capture and decode these signals for research purposes, and documented the effort online.

Their research began on the NRSC website, where the NRSC-5 standard is documented; however, the team notes that the audio compression details are conspicuously missing. They then step through the physical layer, multiplexing layer, and finally the application layer, taking the standard apart piece by piece. This all culminates in the group’s development of an open-source receiver for NRSC-5 that works with RTL-SDR, perhaps the most ubiquitous SDR platform in the world.
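For anyone who wants to play along at home, the first step is simply capturing raw IQ samples from an RTL-SDR at a station's frequency. Below is a minimal Python sketch assuming the pyrtlsdr bindings; the frequency is a placeholder, and the ~1.49 Msps capture rate is one commonly used for NRSC-5 experiments (twice the 744,187.5 Hz symbol clock), not a figure taken from the write-up.

from rtlsdr import RtlSdr   # pip install pyrtlsdr

sdr = RtlSdr()
sdr.sample_rate = 1488375        # Hz; assumed capture rate for NRSC-5 work
sdr.center_freq = 90.5e6         # placeholder station carrying an HD Radio signal
sdr.gain = 'auto'

# Grab a few seconds of raw IQ; a real decoder (such as the open-source
# receiver described above) would then handle OFDM demodulation and the
# multiplexing and application layers on top of these samples.
iq = sdr.read_samples(4 * 1024 * 1024)
sdr.close()

print("captured", len(iq), "complex samples")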

The group’s primary interest in NRSC-5 is its presence in cars as a part of in-car entertainment systems. As NRSC-5 allows data to be transmitted in various formats, the group suspects there may be security implications for vehicles that do not securely process this data — getting inside your car through the entertainment system by sending bad ID3 tags, for instance. We look forward to seeing results of this ongoing research.

[Thanks to Gary McMaster for the tip!]

Dublin mayor considers plan to let wheelchair users into cycle lanes

The content below is taken from the original (Dublin mayor considers plan to let wheelchair users into cycle lanes), to continue reading please visit the site. Remember to respect the Author & Copyright.

Chair of Dublin Cycling Campaign cautiously welcomes idea

People in wheelchairs may soon be able to use cycle lanes in Dublin if a plan by the city’s lord mayor Brendan Carr goes ahead.

Azure Marketplace Test Drive

The content below is taken from the original (Azure Marketplace Test Drive), to continue reading please visit the site. Remember to respect the Author & Copyright.

Azure Marketplace provides a rich catalog of thousands of products and solutions from independent software vendors (ISVs) that have been certified and optimized to run on Azure. In addition to finding and deploying ISV products, customers often use Azure Marketplace as a learning tool to discover and evaluate products. One feature in Azure Marketplace that is especially useful for learning about products is “Test Drive.” 

Test Drives are ready-to-go environments that allow you to experience a product for free without needing an Azure subscription. An additional benefit of a Test Drive is that it is pre-provisioned: you don’t have to download, set up, or configure the product, and can instead spend your time evaluating the user experience, key features, and benefits of the product.

To get started with a Test Drive, follow this 3-step process:

  1. Visit the Test Drive page on Azure Marketplace.
  2. Choose a Test Drive, sign in and agree to the terms of use.
  3. Once you complete the form, your Test Drive will start deploying, and in a few minutes you will get an email notification that the environment is ready. Just follow the instructions in the email, and you will be able to access a fully provisioned, ready-to-use environment.

Once provisioned, the Test Drive will be available for a limited time, typically a few hours. After the Test Drive is over, you will receive an email with the instructions to purchase or continue using the product.

As you start thinking about your next DevOps tool or Web application firewall, consider using Test Drives. They are easy and free, and the hands-on experience will help you make the right decision.

Happy Test Driving.

Atari is Making a New Retro-Style Home Console

The content below is taken from the original (Atari is Making a New Retro-Style Home Console), to continue reading please visit the site. Remember to respect the Author & Copyright.

Atari was once the most dominant force in the gaming industry. Thanks to the video game crash of 1983, Atari was reduced to being a shell of its former self. Nearly 20 years […]

The post Atari is Making a New Retro-Style Home Console appeared first on Geek.com.

Girl Scouts can start earning cybersecurity badges in fall 2018

The content below is taken from the original (Girl Scouts can start earning cybersecurity badges in fall 2018), to continue reading please visit the site. Remember to respect the Author & Copyright.

If your office dodges a spearphishing attempt in the future, you might be thanking a Girl Scout. The organization partnered with Palo Alto Networks to release 18 new cybersecurity badges for members to earn over the next two years, with the first slated to come out in September 2018.

They won’t just be about minimizing hacking vectors: Younger Scouts will also learn about data privacy, cyberbullying and how to protect themselves online. Badges for older ones will focus on developing coding skills, learning about white hat hacking and creating firewalls. While preventative training has been erratically present in Scouting for some time — the Boy Scouts, for example, have had the Cyber Chip youth internet safety certification since 2012 — the Girl Scouts’ new set of badges looks to span a respectable breadth of online issues and opportunities.

If you’re surprised the Girl Scouts have new badges teaching important tech literacy, you haven’t been paying attention. Back in 2011, the organization added badges for Computer Expert and Digital Movie Maker, followed by an attempt in 2013 to introduce one for video games. Following the release of a badge dedicated to nurturing science, technology, engineering, and math interests in 2015, the Girl Scouts partnered with Netflix last October to encourage young members to pursue STEM careers.

Via: CNN

Source: Palo Alto Networks, Girl Scouts of the United States of America

etcher (1.0.0)

The content below is taken from the original (etcher (1.0.0)), to continue reading please visit the site. Remember to respect the Author & Copyright.

Flash OS images to SD cards and USB drives, safely and easily

What Is Windows Server 2016 Hyper-V Compute Resiliency?

The content below is taken from the original (What Is Windows Server 2016 Hyper-V Compute Resiliency?), to continue reading please visit the site. Remember to respect the Author & Copyright.

In this post, I will explain Compute Resiliency, a feature of Windows Server 2016 that makes Failover Clustering more tolerant of the transient failures that can cause downtime for Hyper-V virtual machines.

 

 

Unnecessary Failovers

Many people, especially those new to high availability or building complex environments, find that Failover Clustering can be difficult. If you stick to the well-walked path, the designs are not that hard. The things that cause the most trouble are the things that should be dependable, such as drivers and firmware in network cards. Unfortunately, unpredictable hardware faults and external issues, such as switch reboots or operators pulling the wrong cables, can cause transient problems. Keep in mind that, depending on the brand of the network interface, some of these hardware faults can be quite predictable. Regardless, they can be very difficult to troubleshoot and can lead to downtime.

Every node or host in a Hyper-V cluster sends a heartbeat to the cluster. This heartbeat lets the other nodes know that the sending host is still alive. If a host fails to send a heartbeat for a long enough period, then that host is assumed to be offline. The remaining nodes in the cluster seize the clustered roles, or virtual machines in the case of Hyper-V, from the assumed-dead node.

If a transient networking issue interferes with the heartbeat of a host, the cluster assumes that there is a problem. It seizes the virtual machines from that host, and the virtual machines are booted up on other nodes in the cluster. If there are complex dependencies, booting up a large number of virtual machines might take a long time. In the meantime, the transient issue has gone away and the original host is back online. The problem with transient issues is that they repeat and they are extremely difficult to identify. If they happen often enough, people can lose confidence in the cluster. The cluster is reacting correctly to an external fault, but it still creates confidence issues.

Tolerance of Transient Issues

Microsoft studied its support calls and received tons of feedback from customers regarding issues with Hyper-V clusters. It was clear that issues outside of clustering were causing many problems. Software has the flexibility to overcome hard issues, so Microsoft decided to build extra tolerance for transient external issues into Hyper-V failover clusters in the form of Compute Resiliency.

In short, Compute Resiliency slows down the aggressive failover actions of a Hyper-V cluster. Most actual host outages are caused by external problems. Microsoft did the math and decided that by default, a cluster will wait 4 minutes before responding to a host failing to heartbeat. The 4 minutes is enough time for an operator to realize that they have pulled the wrong cable or for a top-of-rack switch to restart after a crash. During this time, a non-responding host has a status of Isolated in the cluster and failovers will not occur.

If the host fails to return online after the 4 minutes have passed, then the cluster initiates a failover of every virtual machine. While the host is isolated, its virtual machines behave differently depending on your storage system:

  • SMB 3.0: If the host is still running and able to communicate with the storage, the virtual machines remain online.
  • CSV on Block Storage: The virtual machines are placed into a Paused-Critical state.

If the host returns online before the 4 minutes expire, it rejoins the cluster. What if the host goes offline again? Once again, the host is given a status of Isolated and failovers will not take place. However, repeated isolations are tracked over a window that defaults to 2 hours. If the host becomes isolated for a third time within that 2-hour period, the cluster places the host into a Quarantined state and live migrates its virtual machines to more suitable hosts in the cluster.
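To make that sequence of states concrete, here is a toy Python model of the isolation and quarantine logic described above. This is only a sketch of the behavior as this post describes it, not Microsoft's implementation; the 4-minute and 2-hour values are simply the defaults mentioned here.

import time

RESILIENCY_PERIOD = 4 * 60        # seconds a host may stay Isolated before failover
QUARANTINE_WINDOW = 2 * 60 * 60   # isolations inside this window count toward quarantine
QUARANTINE_THRESHOLD = 3          # a third isolation in the window => Quarantined

class ClusterNode:
    def __init__(self):
        self.status = "Up"
        self.isolations = []      # timestamps of recent isolation events

    def heartbeat_lost(self, now=None):
        """Called when the cluster stops hearing from this node."""
        now = now if now is not None else time.time()
        # Forget isolation events that fell outside the 2-hour window.
        self.isolations = [t for t in self.isolations if now - t <= QUARANTINE_WINDOW]
        self.isolations.append(now)
        if len(self.isolations) >= QUARANTINE_THRESHOLD:
            self.status = "Quarantined"   # cluster live-migrates the VMs elsewhere
        else:
            self.status = "Isolated"      # wait out what is probably a transient fault
        return self.status

    def silent_for(self, seconds):
        """If the node stays silent past the resiliency period, its VMs are failed over."""
        if self.status == "Isolated" and seconds >= RESILIENCY_PERIOD:
            self.status = "Down"          # failover of the VMs begins
        return self.status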

Note that the times mentioned in this post, 4 minutes and 2 hours, are defaults and can be overridden. The 4-minute wait can be modified on a per-virtual-machine basis, and Compute Resiliency can be disabled on the cluster. Disabling it might make sense for clusters where transient issues are unlikely to isolate hosts, or for a completely self-contained cluster, such as a cluster-in-a-box.

The post What Is Windows Server 2016 Hyper-V Compute Resiliency? appeared first on Petri.

What IT Pros Need to Know About Multi-Cloud Security

The content below is taken from the original (What IT Pros Need to Know About Multi-Cloud Security), to continue reading please visit the site. Remember to respect the Author & Copyright.

Brought to you by IT Pro

Having workloads distributed across multiple clouds and on-premises is the reality for most enterprise IT today. According to research by Enterprise Strategy Group, 75 percent of current public cloud infrastructure customers use multiple cloud service providers. A multi-cloud approach has a range of benefits, but it also presents significant challenges when it comes to security.

Security in a multi-cloud world looks a lot different than the days of securing virtual machines, HashiCorp co-founder and co-CTO Armon Dadgar said in an interview with ITPro.

“Our view of security is it needs a new approach from what we’re used to,” he said. “Traditionally, if we go back to the VM world, the approach was sort of what we call a castle and moat. You have your four walls of your data center, there’s a single ingress or egress point, and that’s where we’re going to stack all of our security middleware.”

At this point it was assumed that the internal network was a high-trust environment, and that inside of those four walls, everything was safe. “The problem with that assumption is we got sort of sloppy,” Dadgar said, storing customer data in plaintext and having “database credentials just strewn about everywhere.”

Of course, IT pros can no longer assume that this is the case, and must take a different approach, particularly in a multi-cloud environment.

Cloud connectors, APIs create more entry points for hackers

“Now many of these organizations don’t have one data center. They don’t even have one cloud,” he said. “They may be spanning multiple clouds and within each cloud they have multiple regions, and all of these things are connected through a complex series of VPN tunnels or direct connects where the data centers are connected together on fiber lines, those things are probably tied back to their corporate HQ and the VPN back there. It’s truly a complex network topology where traffic can sort of come from anywhere.”

Dadgar is one of the founders of HashiCorp, which launched in 2012 with the goal of revolutionizing data center management. Its range of tools, which the company has open sourced, manages physical and virtual machines, Windows and Linux, SaaS and IaaS, according to its website. One of these tools, called Vault, “secures, stores, and tightly controls access to tokens, passwords, certificates, API keys, and other secrets in modern computing.”

Dadgar sees Vault as one of the newer tools that security pros are looking to in place of middleware, but it’s not just technology that needs to change in a multi-cloud environment.

“Security practitioners are trying to figure out how to bring security to Wild West situation,” Dadgar said, noting that the security professional’s approach has changed now that they need to work closely with developers and operators.

“Security people are being pulled more intimately into application delivery process as well as having to totally recast the set of tools they use, and take more of a service provider approach as opposed to a sort of invisible hand,” he said. “Security has to have a seat at the table, developers and operators have to be aware of it, and there’s a necessary tooling change.”

These changes include ensuring that data is encrypted both at rest and in transit, and taking a hygienic approach to secret management, he said.
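As a concrete illustration of that kind of secret hygiene, here is a minimal sketch of storing and reading a database credential in Vault from application code. It assumes the hvac Python client, Vault's KV v2 secrets engine mounted at the default "secret/" path, and placeholder address, token, and credential values; it is not taken from HashiCorp's own documentation.

import hvac   # pip install hvac

# Placeholder address and token; in practice the token comes from an auth method,
# not from source code.
client = hvac.Client(url="https://vault.example.com:8200", token="s.placeholder")

# Store the credential centrally instead of leaving it strewn about in config files.
client.secrets.kv.v2.create_or_update_secret(
    path="myapp/database",
    secret={"username": "app_user", "password": "example-only"},
)

# The application reads it at startup; access is logged and can be revoked centrally.
response = client.secrets.kv.v2.read_secret_version(path="myapp/database")
creds = response["data"]["data"]
print(creds["username"])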

“One of the things that kind of protected us in the old world was it was a lot more obvious when you were making a mistake when you physically had to rack and stack servers and move cables around,” Dadgar said. “Now that we’re in the cloud world and everything is an API, it’s not so obvious what’s happening. If I make a slight change to configuration it’s not necessarily obvious that this is bypassing my firewall.”

One example of this is the recent OneLogin security breach where customer data was compromised and hackers were able to decrypt encrypted data. OneLogin, a provider of single sign-on and identity management for cloud-based applications based in San Francisco, said “a threat actor obtained access to a set of AWS keys and used them to access the AWS API from an intermediate host with another, smaller service provider in the US.”

In a post-mortem on its blog, OneLogin said, “Through the AWS API, the actor created several instances in our infrastructure to do reconnaissance. OneLogin staff was alerted of unusual database activity around 9 am PST and within minutes shut down the affected instance as well as the AWS keys that were used to create it.”

Security common sense, policies still have place in multi-cloud 

David Gildea has worked in IT for 17 years and is the founder and CEO of CloudRanger, a SaaS-based server management provider based in Ireland. He said that enterprises often don’t realize they must take the same precautions with cloud vendors as they do with their data center and on-premises IT. Part of this is ensuring that the vendors they work with have just enough access to get their job done, he said.

“If you have access to this one tool and it gets compromised then it’s a huge problem for enterprises,” Gildea said.

Part of the problem that he sees is that enterprises don’t have the right security policies in place when they enter the cloud, and then the problem is perpetuated as more workloads are spun up across clouds.

“What happens over and over again is you get this proof of concept that turns into a production application and there are no standards or policies set at the very beginning so things start off on a bad footing and that spreads to other clouds,” he said.

Along with the lack of security policies there is also a lack of testing, Gildea said.

“What we see [is that] things just aren’t tested; business continuity for example. Everyone has backups in some way shape or form but what happens is it’s not tested. There is a great assumption there that the cloud is going to do everything for you.”

This article originally appeared on IT Pro.

AWS Named as a Leader in Gartner’s Infrastructure as a Service (IaaS) Magic Quadrant for 7th Consecutive Year

The content below is taken from the original (AWS Named as a Leader in Gartner’s Infrastructure as a Service (IaaS) Magic Quadrant for 7th Consecutive Year), to continue reading please visit the site. Remember to respect the Author & Copyright.

Every product planning session at AWS revolves around customers. We do our best to listen and to learn, and to use what we hear to build the roadmaps for future development. Approximately 90% of the items on the roadmap originate with customer requests and are designed to meet specific needs and requirements that they share with us.

I strongly believe that this customer-driven innovation has helped us to secure the top-right corner of the Leaders quadrant in Gartner’s Magic Quadrant for Cloud Infrastructure as a Service (IaaS) for the 7th consecutive year, earning highest placement for ability to execute and furthest for completeness of vision:

To learn more, read the full report. It contains a lot of detail and is a great summary of the features and factors that our customers examine when choosing a cloud provider.

Jeff;

Gartner names Microsoft Azure as a leader in the Cloud IaaS MQ

The content below is taken from the original (Gartner names Microsoft Azure as a leader in the Cloud IaaS MQ), to continue reading please visit the site. Remember to respect the Author & Copyright.

As customers bet more and more on the Cloud to drive digital transformation within their organizations, we’re seeing tremendous usage of Azure. Recently, Forbes reported on a study by Cowen and Company Equity Research stating that Microsoft Azure is the most used public cloud, as well as the one most likely to be renewed or purchased. More than 90 percent of the Fortune 500 use Microsoft’s cloud services today. Large enterprises such as Shell, GEICO, CarMax and MetLife, as well as smaller companies like Medpoint Digital and TreasuryXpress, are all leveraging Azure to fuel business growth and reinvent themselves. We strongly believe that the momentum we’re seeing has been possible because of what Azure offers and stands for – a comprehensive and secure Cloud platform across IaaS and PaaS, unparalleled integration with Office 365, a unique hybrid experience with Azure Stack, first-class support for Linux and open-source tooling, and a robust partner ecosystem.

Today we’re delighted that Gartner has recognized Microsoft as a leader in their Cloud Infrastructure as a Service (IaaS) MQ for the fourth consecutive year. We’re excited that Gartner continues to recognize Microsoft for completeness of our vision and ability to execute in this key area.

While we’re honored by our placement in the Leaders quadrant for Cloud IaaS, we believe many of our customers choose Microsoft not just for our leadership in this area but because of our leadership across a broad portfolio of cloud offerings spanning Software as a Service (SaaS) offerings like Office 365, CRM Online and Power BI, in addition to Azure Platform Services (PaaS). It’s the comprehensiveness of our cloud portfolio that gives customers the confidence that, no matter where they are in their cloud adoption journey, they’re covered with a breadth of solutions for their problems instead of having to work with multiple vendors.

Here’s the list of cloud-related Gartner MQs where Microsoft is placed in the Leaders quadrant:

We look forward to continuously innovating and delivering across our portfolio of cloud offerings, and sincerely believe that every customer – whether big or small, new or seasoned to the Cloud, relying on open-source or otherwise –  has meaningful business value to gain from Azure. If you haven’t dug into Azure yet, here’s an easy way to do it!


If you’d like to read the full report, “Gartner: Magic Quadrant for Cloud Infrastructure as a Service,” you can request it here.

Disclaimer: Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Gartner Magic Quadrants

  • Gartner Magic Quadrant for Cloud Infrastructure as a Service, Lydia Leong, Raj Bala, Craig Lowery, Dennis Smith, June 17, 2017
  • Gartner Magic Quadrant for Public Cloud Storage Services, Raj Bala, Arun Chandrasekaran, July 26, 2016
  • Gartner Magic Quadrant for Access Management, Gregg Kreizman, Anmol Singh, June 7, 2017
  • Gartner Magic Quadrant for Business Intelligence and Analytics Platforms, Rita Sallam, Cindi Howson, Carlie Idoine, Thomas Oestreich, James Laurence Richardson, Joao Tapadinhas, February 16, 2017
  • Magic Quadrant for the CRM Customer Engagement Center, Michael Maoz, Brian Manusama, May 8, 2017
  • Magic Quadrant for Data Management Solutions for Analytics, Roxane Edjlali, Adam Ronthal, Rick Greenwald, Mark Beyer, Donald Feinberg, February 20, 2017
  • Magic Quadrant for Enterprise Agile Planning Tools, Thomas E. Murphy, Mike West, Keith James Mann, April 27, 2017
  • Magic Quadrant for Horizontal Portals, Jim Murphy, Gene Phifer, Gavin Tay, Magnus Revang, October 17, 2016
  • Magic Quadrant for Mobile Application Development Platforms, Jason Wong, Van Baker, Adrian Lowe, Marty Resnick, June 12, 2017
  • Magic Quadrant for Operational Database Management Systems, Nick Heudecker, Donald Feinberg, Merv Adrian, Terilyn Palanca, Rick Greenwald, October 5, 2016
  • Magic Quadrant for Sales Force Automation, Tad Travis, Ilona Hansen, Joanne Correia, Julian Poulter, August 10, 2016
  • Magic Quadrant for Unified Communications, Bern Elliot, Megan Marek Fernandez, Steve Blood, July 13, 2016
  • Magic Quadrant for Web Conferencing, Adam Preset, Mike Fasciani, Whit Andrews, November 10, 2016

Be Kind, Rewind With ‘Mixxtape’ Bluetooth Cassette

The content below is taken from the original (Be Kind, Rewind With ‘Mixxtape’ Bluetooth Cassette), to continue reading please visit the site. Remember to respect the Author & Copyright.

“The making of a great compilation tape, like breaking up, is hard to do and takes ages longer than it might seem,” Rob Gordon, melancholy narrator of High Fidelity, explained in the 2000 film.

That principle, however, doesn’t seem to apply to Mixxtape, the Bluetooth music player that looks (and plays) like a cassette.

Since launching on Kickstarter last month, the project quickly surpassed its $10,000 funding goal, now boasting more than $100,000 with four days left.

Nostalgia is at an all-time high, what with choker necklaces, vinyl records, denim jackets, and Polaroid cameras back en vogue.

So it comes as little surprise that Dallas-based Mixxim has caught the attention of music-loving hipsters and sentimental Gen Xers looking to relive the glory days of magnetic tape and analog signals.

Modeled after the compact cassettes we all knew and loved, Mixxtape comes with some modern amenities—touch navigation, LCD display, Bluetooth, rechargeable battery, USB port, and headphone jack.

Simply connect the cartridge to a PC or Mac to transfer your favorite tunes. Then jam out at home or on the go with wired and wireless headphones or speakers. Or, dust off that Walkman, boombox, or tape deck for a classic experience.

There is still time to snag a Mixxtape, available as a single $40 cassette (50 percent off the retail price), or in packs of two ($70), three ($90), five ($125), and 25 ($625).

Buyers get the Mixxtape, 8GB MicroSD card (expandable up to 64GB), micro USB charging cable, and jewel carrying case. Each cassette also comes preloaded with the “Mountain Sounds” album by Texas duo Danni & Kris, as well as a “various indie artists” playlist.

Choose between a traditional black finish or the stretch-goal white edition just begging to be personalized.

Other rewards include a Mixxtape necklace and limited-edition T-shirt. Those folks looking to catch a break in the music industry can pledge $350 to get one track featured on all Mixxtapes that ship in 2017 (an estimated 2,000 to 5,000 units). For $2,000, meanwhile, you can promote an entire album (up to 25 songs).

The first batch of Mixxtape cassettes is expected to ship this November.

OpenStack Developer Mailing list Digest June 10-16

The content below is taken from the original (OpenStack Developer Mailing list Digest June 10-16), to continue reading please visit the site. Remember to respect the Author & Copyright.

Summaries

  • TC report 24 by Chris Dent 1
  • Release countdown for week R-10 and R-9, June 16-30 by Thierry 2
  • TC Status Update by Thierry 3.

Making Fuel a Hosted Project

  • Fuel originated from Mirantis as their OpenStack installer.
  • Approved as an official OpenStack project November 2015.
  • The goal was to get others involved to make one generic OpenStack installer.
  • In Mitaka and Newton it represented more commits than Nova.
  • While the Fuel team embraced open collaboration, it failed to attract other organizations.
  • Since October 2016, activity from Fuel’s main sponsor has dropped.
    • 68% drop between 2016 and 2017.
    • Project hasn’t held a meeting for three months.
    • Activity dropped from ~990 commits/month (April 2016, August 2016) to 52 commits in April 2017 and 25 commits May 2017.
  • Full thread: 4

Moving Away from “big tent” Terminology

  • Back in 2014 our integrated release was not really integrated, too big to be installed by everyone, yet too small to accommodate the growing interest in other forms of “open infrastructure”.
  • Incubation process created catch-22’s.
  • Project structure reform 4 discussions switched us to a simpler model: project teams would be approved based on how well they fit the overall OpenStack mission and community principles, rather than on maturity.
    • Nicknamed the “big tent” 5
  • It ended up mostly creating confusion due to various events and mixed messages which we’re still struggling with today.
  • This was discussed during a TC office hour in the openstack-tc channel 6
  • There is still no agreement on how to distinguish official and unofficial projects. The feedback in this thread will be used to assist the TC+UC+Board sub group on better communicating what is OpenStack.
  • Full thread: 7

 

[1] – http://bit.ly/2sacWbL

[2] – http://bit.ly/2rFhMdN

[3] – http://bit.ly/2saqYKR

[4] – http://bit.ly/2sajUOh

[5] – http://bit.ly/2rEZc5v

[6] – http://bit.ly/2sasoER

[7] – http://bit.ly/2rEZadT

#openstack #openstack-dev-digest

We could soon be painting our houses with ‘solar paint’ for clean energy – “researchers in Australia have come up with a “solar paint” capable of absorbing moisture from the air and turning it into hydrogen fuel for clean energy.”

The content below is taken from the original (We could soon be painting our houses with ‘solar paint’ for clean energy – “researchers in Australia have come up with a “solar paint” capable of absorbing moisture from the air and turning it into hydrogen fuel for clean energy.”), to continue reading please visit the site. Remember to respect the Author & Copyright.

http://bit.ly/2t91Tye

Bottlenecks of Modern Data Storage Technologies

The content below is taken from the original (Bottlenecks of Modern Data Storage Technologies), to continue reading please visit the site. Remember to respect the Author & Copyright.

We all know that the amount of data, both globally and on our personal computers in particular, is growing constantly, day by day. The need for new data storage technologies exists, and they appear with reasonable regularity.

 

 

Let’s consider storage technologies focusing on the internal organization of data storage – going from the hardware level of a physical hard drive to the logical level of a file system layout.

Basic Concepts

The problem, which users encountered almost from the moment personal computers were designed, was that the capacity of a single disk, to put it mildly, was not very large. Just imagine that in 1991 IBM introduced a disk with the “unimaginable” size of… 1,004 MB! The disk was composed of eight 3.5-inch platters.

This situation with disk capacities will always remain the same — hard disk vendors are constantly trying to design larger-capacity disks, but the sizes (as well as the prices) never quite satisfy user needs.

Back in those days, it became clear that a technology that allows you to combine several disks into a single storage space was needed. That’s how the RAID (Redundant Array of Independent (or Inexpensive) Disks) technology was invented. The technology is based on combining different disks — from different vendors, with different characteristics — into a single storage option using the mirror, stripe, and parity techniques.

  • Mirror (RAID1) – array member disks store identical copies of the data. To be honest, a mirror does not provide a large volume of storage, but it provides fault tolerance and is widely used in nested array levels like RAID10.
  • Stripe (RAID0) – disk space on the array member disks is cut into blocks of the same size, and data is then written to the blocks according to a specific pattern.
  • Parity (RAID5) – disk space is also cut into blocks, but one block in each row is used for storing parity data – a derivative, usually an XOR function, calculated over the data blocks. This approach ensures redundancy, allowing a storage system to survive a disk failure without data loss (see the sketch below).
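To illustrate the parity idea from the last bullet, here is a small Python sketch of XOR parity over one row of same-size blocks. The block contents are made up for the example; the point is only that any single missing block can be rebuilt by XOR-ing the surviving blocks with the parity block.

from functools import reduce

def xor_blocks(blocks):
    """XOR same-length blocks byte by byte (this is how RAID5 parity is formed)."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# One "row" of the stripe: three data blocks of equal size (contents are arbitrary).
data_blocks = [b"disk0-data", b"disk1-data", b"disk2-data"]
parity = xor_blocks(data_blocks)

# Simulate losing disk 1 and rebuilding its block from the survivors plus parity.
rebuilt = xor_blocks([data_blocks[0], data_blocks[2], parity])
assert rebuilt == data_blocks[1]
print("rebuilt block:", rebuilt)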

All other storage technologies are, by and large, variations of the RAID technology. Let’s look more closely at some of the most popular ones.

Drobo BeyondRAID

The Drobo developers designed their own technology, Drobo BeyondRAID, for use in Drobo devices. The technology was released a long time ago, in 2007, and uses the same RAID techniques. However, this is not a “pure” RAID: RAID arrays in Drobo are created over regions, which are several tens of megabytes in size, rather than over the whole hard drive, as is done in a traditional RAID. Next, the regions are combined into larger structures, called zones, from which file system clusters are then allocated.

Sounds complicated but still not bad… but there’s one more thing: a cluster map storing information on which cluster is located in which zone. The cluster map itself is stored in a special zone as well — the map is fully integrated into the Drobo layout.

If the cluster map is lost or overwritten, data recovery doesn’t make sense because the cluster size is too small – all you get by analyzing data on a Drobo disk pack are a lot of meaningless data fragments.

Storage Spaces

In 2012, Microsoft released the Storage Spaces technology, allowing you to combine disks of different sizes into pools, on which it is then possible to create spaces of five types: simple, mirror, 3-way mirror, parity, and double parity.

In Storage Spaces, disk space on the disks is cut into 256 MB slabs, which are then combined into spaces of a given type using the appropriate RAID technique. Similar to Drobo BeyondRAID with its cluster map bottleneck, there is a table storing information on which slabs are combined into which RAIDs. If the table is lost, it is very difficult to restore a Storage Spaces layout. In the case of a mirror or parity space, there is at least a theoretical basis for recovering a lost RAID configuration; for a simple layout, the configuration is lost irreversibly.

Another, more interesting observation is that the modern trend is pools and spaces of 30-50 TB in size. Taking into account the slab size of 256 MB and an 8-column layout, we have a maximum of 256 MB * 7 ≈ 1.75 GB of usable space (for a parity space) per RAID. It is not difficult to imagine that the number of such micro-RAIDs in a Storage Spaces pool can easily reach tens of thousands, as the back-of-the-envelope calculation below shows.
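Here is that arithmetic spelled out, using the figures from this article; the 40 TB pool is just an example picked from the 30-50 TB range mentioned above.

SLAB_MB = 256            # Storage Spaces slab size
DATA_COLUMNS = 7         # 8-column parity layout: 7 data columns + 1 parity column
POOL_TB = 40             # example pool size from the 30-50 TB range

usable_per_raid_gb = SLAB_MB * DATA_COLUMNS / 1024          # ~1.75 GB per micro-RAID
micro_raids = POOL_TB * 1024 / usable_per_raid_gb           # roughly 23,000 micro-RAIDs

print(f"{usable_per_raid_gb:.2f} GB usable per micro-RAID")
print(f"about {micro_raids:,.0f} micro-RAIDs in a {POOL_TB} TB parity pool")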

NAS Devices

Network Attached Storage (NAS) is a storage device that allows you to combine several hard disks into one large storage space, which is then connected to a computer network. The simplest NAS layout uses the following scheme: physical disks are combined into two RAIDs – a small RAID1 for storing the NAS firmware and a larger RAID (usually RAID5) for user data. These RAIDs are created by the Linux md-RAID driver, and typically one of the Linux file systems is used, usually ext or XFS. The scheme implies using disks of the same size; you can still use disks of different sizes, but the overall capacity will be tightly tied to the size of the smallest disk.

Nowadays, schemes for combining disks are more complicated. They allow using disks of different sizes without losing disk space and, more than that, make it possible to replace disks with larger ones without a full-scale rebuild. One example is Synology NASes, where you can use completely different disks thanks to the Synology Hybrid RAID technology.

NAS vendors did not stop there: first NETGEAR and then Synology started using a new file system, BTRFS. The file system differs from a “pure” file system, which knows nothing about the underlying physical layout, in that it combines the functions of the disk space allocator and the file system driver.

Inside, BTRFS operates with not very large blocks (1 GB for user data and a minimum of 4 MB for metadata), which are called chunks. Chunks are made from the blocks allocated on the physical disks and combined into larger continuous blocks using the RAID technology.

Until recently, chunks were combined into larger blocks using the JBOD technique, but now BTRFS supports RAID5 and RAID6 with a 64 KB stripe size. With BTRFS, it is possible to have multi-layer RAID configurations, such as a main RAID5 created over the physical disks with multiple sub-RAIDs created over chunks allocated from the main RAID.

The information on how chunks are composed is stored in the file system metadata. Should the metadata be lost, nothing can be recovered.

ReFS

The ReFS file system was released simultaneously with the Microsoft Storage Spaces technology. The first ReFS version was quite simple: a file record stored the file attribute data, such as file size, creation/modification times, and the information about the file content location. A directory record contained information about the files and folders belonging to the directory.

In 2016, Microsoft released the next generation of ReFS. Surprisingly, the modern ReFS now has an analogue of a cluster map. On the one hand, there are several copies of the map stored on a volume; on the other hand, file deletion results in discarding the corresponding records in the cluster map. Deleted file data can still be on the disks, but you cannot get it extracted because the cluster locations are no longer available.

Conclusion

Obviously, with increasing disk capacity, new schemes of storing data are needed. The modern technologies still use the good old RAID, but there is a tendency to have many small RAIDs along with cluster maps. RAID configuration metadata and cluster maps are the main bottlenecks of modern storage technologies.

Written by Elena Pakhomova of www.ReclaiMe.com, offering data recovery solutions for a wide variety of data storage technologies.

 

The post Bottlenecks of Modern Data Storage Technologies appeared first on Petri.

This Custom Built “Commute Deck” Makes it Easy to Work on the Go

The content below is taken from the original (This Custom Built “Commute Deck” Makes it Easy to Work on the Go), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Commute Deck is designed to provide a productive computing experience for UNIX terminal work in tight places, like a train or airplane.

Read more on MAKE

The post This Custom Built “Commute Deck” Makes it Easy to Work on the Go appeared first on Make: DIY Projects and Ideas for Makers.

Microsoft Pix can now turn your iPhone photos into art, thanks to A.I.

The content below is taken from the original (Microsoft Pix can now turn your iPhone photos into art, thanks to A.I.), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft is rolling out an update to its AI-powered photo editing app, Microsoft Pix, that aims to give Prisma and others like it some new competition. While the app was originally designed to enhance your iPhone photos by tweaking things like color, exposure and other variables, the newly updated Microsoft Pix will now let you have a little more fun with your photos, too – this time, by turning them into art.

Similar to Prisma, the new app introduces a feature called Pix Styles, which allows you to transform your photos into works of art and apply other effects. For example, one effect will make the picture look like it’s on fire. These are not photo filters, to be clear – the styles actually transfer texture, pattern and tones to the photo, explains Microsoft.

The app launches today with 11 styles included, but more will be added in the weeks ahead, the company says.

Also like Prisma, you can swipe your finger across the style to increase or reduce the effect. When you’re done, you can frame the photo, crop it, or share it out to social networks, as before.

Another new feature – Pix Paintings – takes a step beyond Prisma, Lucid, Pikazo, Dreamscope and other “photo-to-art” apps, as it lets you see a timelapse of the photo being painted in the artistic style you selected. This is more entertaining than it is practical, but it’s a nifty trick.

Microsoft says that the new features were developed in collaboration with Microsoft’s Asia research lab and Skype, and leverage an A.I. processing approach called deep neural networks, which are trained on large datasets. For Pix, that means lots of paintings were used to train the A.I. to learn the various styles.

It’s also the same technology that Google experimented with in order to produce a new kind of trippy, machine-created art – some of which it showed off at an exhibit last year.

“These are meant to be fun features,” said Josh Weisberg, a principal program manager in the Computational Photography Group within Microsoft’s research organization in Redmond, in an announcement. “In the past, a lot of our efforts were focused on using AI and deep learning to capture better moments and better image quality. This is more about fun. I want to do something cool and artistic with my photos,” he says.

Also worth noting is that these new features can be used without tapping into your phone’s data plan, or while your phone is offline. That’s because Pix works directly on your device itself to run its calculations – it doesn’t need to access the cloud. This is part of a broader effort at Microsoft to shift A.I. from the cloud to devices at the edge of the network, the company says.

The app is a free download on the App Store. 

HP is turning trash into printer cartridges

The content below is taken from the original (HP is turning trash into printer cartridges), to continue reading please visit the site. Remember to respect the Author & Copyright.

All those printer cartridges from HP that usually cost an arm and a leg will start helping to do some good in the world beyond your prints of kitten photos. During an event at its headquarters, HP announced that it is using recycled plastic from Haiti to manufacture select cartridges.

The initiative will help create jobs in Haiti and provide educational opportunities and scholarships for children. More importantly, its goal is to get the kids who are collecting recycled bottles out of landfills and into schools. Plus, it helps support their parents and other adults with safety and job training. The partnership will also help provide medical care.

HP is teaming up with Thread, a company that already uses recycled bottles from Haiti and Honduras to create clothes. The fabric it produces is used by Timberland and Kenneth Cole. In addition to cleaning up the world and helping create a job market, Thread is trying to reduce child labor by creating an environment that employs older family members. Part of that includes starting a coalition that HP is part of.

The First Mile coalition, which includes HP, Thread, Timberland, Team Tassy and ACOP, helps get kids into school in addition to offering employment opportunities for adults and medical care. Of course, it also reduces the number of plastic bottles that end up in landfills and in our oceans. So maybe paying those high ink prices is worth it.

Source: HP

You are Go for FPGA!

The content below is taken from the original (You are Go for FPGA!), to continue reading please visit the site. Remember to respect the Author & Copyright.

Reconfigure.io is accepting beta applications for its environment to configure FPGAs using Go. Yes, Go is a programming language, but the software converts code into FPGA constructs, so you don’t need Verilog or VHDL. Since Go supports concurrent routines and channels for synchronization and communications, the parallel nature of the FPGA should fit well.

According to the project’s website, the tool also allows you to reconfigure the FPGA on the fly using a cloud-based build and deploy system. There isn’t much detail yet, unless you get accepted for the alpha. They claim they’ll give priority to the most interesting use cases, so pitching your blinking LED project probably isn’t going to cut it. There is a bit more detail, however, on their GitHub site.

We’ve seen C compilers for FPGAs (more than one, in fact). You can also sort of use Python. Is this tangibly different? It sounds like it might be, but until the software emerges completely, it is too early to tell. Meanwhile, if you want a crash course on conventional FPGA design, you can get some hardware for around $25 and be on your way.

Some Research into Windows Product Keys

The content below is taken from the original (Some Research into Windows Product Keys), to continue reading please visit the site. Remember to respect the Author & Copyright.

To most of us, Windows product keys are just an opaque sequence of characters and numbers.

If you have never seen a product key before, here is what they look like; this example is a KMS client key for Windows 10 Enterprise 2016 LTSB N[1]:

QFFDN-GRT3P-VKWWX-X7T3R-8B639 

However, we'll be looking at their history to better understand what is going on.

Please note that this post is incomplete and I am not an authority on product keys. If you know more than I do, please feel free to contribute and I'll update the post accordingly.

MS-DOS, Windows 1, 2, 3, NT 3

There may be serial numbers somewhere on the packaging for MS-DOS, Windows 1, Windows 2, Windows 3.0 through Windows For Workgroups 3.11 and Windows NT 3.1 through 3.51, but I have never seen them.

Most of these operating systems don't prompt for a serial number during installation. Others do, but the entered value is not validated in any way.

Windows 95, NT 4.0

Starting with Windows 95 and NT 4.0, product keys ("CD keys") started to be used and validated during installation. There are two kinds of keys:

  1. OEM (original equipment manufacturer, e.g. Dell) keys
  2. retail keys

This same system appears to have been used by other products from the same era, such as Microsoft SQL Server 7.0.

Both kinds of keys are actually validated during the installation procedure.

OEM Keys

OEM keys look like this:

12345-OEM-0012345-12345 

The leading zeroes after -OEM- are always there for some reason. That part must start with at least one zero, though. The second zero might just be a side-effect of monotonically incrementing the serial number, never reaching sufficiently large values.

These are used for preinstalled versions of Windows.

It's unknown what the individual parts stand for, other than the part right after -OEM- referring to the serial number; the first zero is not part of the serial number.

The validation of the serial number is the same for OEM and retail. It is absolutely trivial.

Retail Keys

Retail keys look like this:

123-4567890 

The retail key is visibly separated into two parts:

  1. site identifier (site ID)
  2. serial number

The site identifier appears to refer to the site where the respective product was manufactured. The serial number is exactly that: a serial number.

Windows 98, 2000

Starting with Windows 98 and Windows 2000, today's format of product keys was introduced: 25 characters.

The keys are encoded in base 24 using this alphabet:

BCDFGHJKMPQRTVWXY2346789 

They look similar to the example I gave initially, but with N missing from the encoding alphabet.
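As a small illustration of what "base 24" means here, the Python sketch below treats the 25 characters as digits in the alphabet above and converts them to a single integer. The key is the placeholder example from the Fully Licensed paper quoted later in this post; real decoders such as pidgen.dll also rearrange bytes and shift bits before splitting out the channel ID, serial number and signature, which this sketch does not attempt.

ALPHABET_24 = "BCDFGHJKMPQRTVWXY2346789"    # Windows 98/2000/XP-era alphabet
ALPHABET_25 = "BCDFGHJKMNPQRTVWXY2346789"   # Windows 8+ adds N (see below)

def decode_key(key, alphabet=ALPHABET_24):
    """Interpret a 25-character product key as a big-endian base-24 (or base-25) number."""
    digits = key.replace("-", "").upper()
    assert len(digits) == 25, "product keys are 25 characters long"
    value = 0
    for ch in digits:
        value = value * len(alphabet) + alphabet.index(ch)
    return value

# Placeholder key used as an example in the Fully Licensed paper.
print(decode_key("FFFFF-GGGGG-HHHHH-JJJJJ-KKKKK"))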

A DLL called pidgen.dll is used to validate the keys and generate installation IDs. It varies between different editions of the operating system and between OEM and retail.

Windows XP

Windows XP follows the same encoding as Windows 98 and 2000.

According to a paper by Fully Licensed GmbH[2], the base 24 encoding stayed the same.

The decoded product key (i.e., the raw binary of the key, converted from base 24) contains two parts:

  1. a composite number
  2. a digital signature over the composite number

From the paper's example, the product key FFFFF-GGGGG-HHHHH-JJJJJ-KKKKK gets decoded (after some rearrangement and a bit shift) to a composite number of 583728439.

The paper also states that an installation ID for that product key would look something like this:

55034-583-7284392-00123 

This looks very reminiscent of the Windows 95 OEM key format, except "OEM" was replaced with "583". It therefore seems reasonable to assume that the format of the site ID was kept and that the composite number indeed has the same structure as the retail CD key in 95 and NT 4.0.

Therefore, the decoded composite number contains:

  1. site identifier (site ID)
  2. serial number

At some point, the site IDs became called "channel IDs" instead[3].

Looking at a few keys available to me for XP, it appears that the validation of the serial number itself still works the same as in 95 and NT 4.0.

This strongly suggests that Windows 98, 2000 and XP all share the same product key validation algorithm. Windows XP also includes a file called pidgen.dll.

The signature over the composite number appears to be a Schnorr signature[4] over a truncated SHA-1 hash of the composite number.
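
For readers who haven't met Schnorr signatures before, here is a generic Python sketch of signing and verifying with a truncated SHA-1 hash. The tiny group parameters, the way the hash is truncated, and the message are purely illustrative assumptions; this is not Microsoft's actual scheme or its parameters.

import hashlib
import secrets

# Toy group parameters (illustrative only): p = 2q + 1, g generates the subgroup of prime order q.
p, q, g = 23, 11, 4

def truncated_sha1(r, m):
    # "Truncate" SHA-1 by reducing the digest into the exponent group of order q.
    digest = hashlib.sha1(str(r).encode() + m).digest()
    return int.from_bytes(digest, "big") % q

def sign(x, m):
    k = secrets.randbelow(q - 1) + 1      # ephemeral secret in [1, q-1]
    r = pow(g, k, p)
    e = truncated_sha1(r, m)
    s = (k - x * e) % q
    return e, s

def verify(y, m, e, s):
    r_v = (pow(g, s, p) * pow(y, e, p)) % p   # equals g^k when the signature is valid
    return truncated_sha1(r_v, m) == e

x = secrets.randbelow(q - 1) + 1   # private key
y = pow(g, x, p)                   # public key
e, s = sign(x, b"composite number")
print(verify(y, b"composite number", e, s))   # True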

Windows Vista and 7

Windows Vista introduced a new handler for product IDs called pidgenx.dll. Instead of shipping a different pidgen.dll for each operating system distribution, a file called pkeyconfig.xrm-ms is used to determine what edition a product key belongs to and its SKU (stock keeping unit).

The concept of digital signatures has been kept, but each group of product keys can potentially have a different signing key to sign the composite number.

The validation of the serial number component is still unchanged from Windows 95 and Windows NT 4.0.

Either Windows Vista or Windows 7 started using bcrypt as the hash function for the digital signature.

It is unknown whether Schnorr signatures continued to be used since Vista or if a different algorithm was picked for signing.

Windows 8, 8.1 and 10

Going by the characters in the KMS client keys[1], Windows 8 and beyond changed the product key alphabet. It now appears to be base 25 with the following alphabet (the newcomer being N):

BCDFGHJKMNPQRTVWXY2346789 

The pkeyconfig.xrm-ms system has been kept from Vista.

The validation of the serial number component appears to have changed for the first time since Windows 95 and Windows NT 4.0. It is unknown whether the validation was dropped entirely or the algorithm changed.

It is unknown if the bcrypt hash has seen continued use or if a different function has taken its place. The same uncertainty about Schnorr signatures as for Vista and 7 applies here.

Starting with Windows 8, the format of installation IDs has changed, but the core system of a site ID plus a serial number remains.

Questions

If you have any questions, feel free to ask, though I may not be able to answer them as my own knowledge is very shoddy and I have hit a wall on my research.

References

[1] http://bit.ly/2rx3XxK

[2] http://bit.ly/2s2i2qB

[3] http://bit.ly/2rx51C3

[4] http://bit.ly/2s2bcRP

Google Drive will soon back up any file or folder on your computer

The content below is taken from the original (Google Drive will soon back up any file or folder on your computer), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you use a cloud storage app on your PC, there’s a good chance you use it as much for backing up your system as you do for accessing files on different devices. And Google knows it. The internet giant is reinventing its Drive desktop app as Backup and Sync, a tool that… well, just look at the name. While it largely accomplishes the same tasks, it’s now focused more on safeguarding your data, regardless of where it’s stored on your computer. That includes pictures, too — the updated software integrates the Google Photos desktop app, so you only need one app to sync it all. The only real limitation is the amount of Drive space you have… that 15GB free tier might not cut it.

Backup and Sync will launch June 28th for Mac and Windows users. At the moment, though, it’s not meant for business — Google would rather point you to its upcoming Drive File Stream if you rely on its cloud services for work. If all you need is a safety net for your personal documents, though, you won’t have too much longer to wait.

Source: G Suite Updates

Alibaba to Use Own Immersion Cooling Tech in Cloud Data Centers

The content below is taken from the original (Alibaba to Use Own Immersion Cooling Tech in Cloud Data Centers), to continue reading please visit the site. Remember to respect the Author & Copyright.

The cloud computing arm of China’s e-commerce giant Alibaba Group is developing a data center cooling system that submerges server motherboards in liquid coolant to take advantage of liquid’s superior heat-transfer capabilities when compared to air.

The company said this week it expects this approach to increase power density and save space inside the data centers it is building around the world to expand market reach, as it competes with the likes of Amazon Web Services and Microsoft Azure, which are continuing to build out their already massive global cloud data center networks. It expects the solution’s energy efficiency improvements to result in 20 percent lower data center operational costs.

Alibaba said it plans to contribute the technology to the Open Compute Project, an open source hardware and data center design effort started about six years ago by Facebook. The Chinese company officially joined OCP this week.

Its data center cooling technology has reached production stage, the company said, and will soon be ready for deployment in Alibaba’s cloud data centers.

The concept of submerging servers in dielectric fluid to improve data center cooling efficiency isn’t new. Several companies already sell solutions that use it, the more prominent examples being Green Revolution Cooling and Iceotope.

See also: How Practical is Dunking Servers in Mineral Oil Exactly?

Alibaba hasn’t revealed much detail about its particular solution, saying only that it “involves an immersed, liquid-cooling server solution that uses insulating coolant instead of traditional air-cooling equipment. The coolant absorbs the heat of the components before turning into gas, which is then liquefied back into the main cabinet for reuse.”

Because the technology doesn’t require massive air conditioning systems present in most of the world’s data centers, Alibaba’s immersion cooling technology “can be deployed anywhere, delivering space savings of up to 75 percent,” the company said.

Manage Instances at Scale without SSH Access Using EC2 Run Command

The content below is taken from the original (Manage Instances at Scale without SSH Access Using EC2 Run Command), to continue reading please visit the site. Remember to respect the Author & Copyright.

The guest post below, written by Ananth Vaidyanathan (Senior Product Manager for EC2 Systems Manager) and Rich Urmston (Senior Director of Cloud Architecture at Pegasystems), shows you how to use EC2 Run Command to manage a large collection of EC2 instances without having to resort to SSH.

Jeff;


Enterprises often have several managed environments and thousands of Amazon EC2 instances. It’s important to manage systems securely, without the headaches of Secure Shell (SSH). Run Command, part of Amazon EC2 Systems Manager, allows you to run remote commands on instances (or groups of instances using tags) in a controlled and auditable manner. It’s been a nice added productivity boost for Pega Cloud operations, which rely daily on Run Command services.

You can control Run Command access through standard IAM roles and policies, define documents to take input parameters, and control the S3 bucket used to return command output. You can also share your documents with other AWS accounts, or with the public. All in all, Run Command provides a nice set of remote management features.
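
As a rough illustration of what that looks like in practice, here is a minimal boto3 sketch that runs a shell command on instances selected by tag and returns output to an S3 bucket. The region, tag key/value, and bucket name are made-up placeholders, not values from this post.

import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# Run 'uptime' on every instance tagged Environment=Dev and collect output in S3.
response = ssm.send_command(
    Targets=[{"Key": "tag:Environment", "Values": ["Dev"]}],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["uptime"]},
    OutputS3BucketName="my-run-command-output",   # hypothetical bucket
)
print(response["Command"]["CommandId"])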

Better than SSH
Here’s why Run Command is a better option than SSH and why Pegasystems has adopted it as their primary remote management tool:

Run Command Takes Less Time –  Securely connecting to an instance requires a few steps, e.g. jump boxes to connect through or IP addresses to whitelist. With Run Command, cloud ops engineers can invoke commands directly from their laptops and never have to find keys or even instance IDs. Instead, system security relies on AWS authentication, IAM roles, and policies.

Run Command Operations are Fully Audited – With SSH, there is no real control over what engineers can do, nor is there an audit trail. With Run Command, every invoked operation is audited in CloudTrail, including information on the invoking user, the instances on which the command was run, the parameters, and the operation status. You have full control and the ability to restrict what functions engineers can perform on a system.

Run Command has no SSH keys to Manage – Run Command leverages standard AWS credentials, API keys, and IAM policies. Through integration with a corporate auth system, engineers can interact with systems based on their corporate credentials and identity.

Run Command can Manage Multiple Systems at the Same Time – Simple tasks such as looking at the status of a Linux service or retrieving a log file across a fleet of managed instances is cumbersome using SSH. Run Command allows you to specify a list of instances by IDs or tags, and invokes your command, in parallel, across the specified fleet. This provides great leverage when troubleshooting or managing more than the smallest Pega clusters.

Run Command Makes Automating Complex Tasks Easier – Standardizing operational tasks requires detailed procedure documents or scripts describing the exact commands. Managing or deploying these scripts across the fleet is cumbersome. Run Command documents provide an easy way to encapsulate complex functions, and handle document management and access controls. When combined with AWS Lambda, documents provide a powerful automation platform to handle any complex task.

Example – Restarting a Docker Container
Here is an example of a simple document used to restart a Docker container. It takes one parameter: the name of the Docker container to restart. It uses the AWS-RunShellScript method to invoke the command. The output is collected automatically by the service and returned to the caller. For an example of the latest document schema, see Creating Systems Manager Documents.

{
  "schemaVersion":"1.2",
  "description":"Restart the specified docker container.",
  "parameters":{
    "param":{
      "type":"String",
      "description":"(Required) name of the container to restart.",
      "maxChars":1024
    }
  },
  "runtimeConfig":{
    "aws:runShellScript":{
      "properties":[
        {
          "id":"0.aws:runShellScript",
          "runCommand":[
            "docker restart "
          ]
        }
      ]
    }
  }
}
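
If you wanted to register and invoke a document like the one above yourself, a boto3 sketch could look like the following. The file name, document name, and instance ID are hypothetical placeholders; the JSON above is assumed to be saved locally as restart-docker.json.

import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# Register the JSON document shown above as a custom Command document.
with open("restart-docker.json") as f:
    ssm.create_document(
        Content=f.read(),
        Name="Pega-RestartDockerContainer",   # hypothetical name
        DocumentType="Command",
    )

# Invoke it against one instance, passing the container name as the 'param' parameter.
result = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],      # hypothetical instance ID
    DocumentName="Pega-RestartDockerContainer",
    Parameters={"param": ["pega-web"]},
)
print(result["Command"]["CommandId"])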

Putting Run Command into practice at Pegasystems
The Pegasystems provisioning system sits on AWS CloudFormation, which is used to deploy and update Pega Cloud resources. Layered on top of it is the Pega Provisioning Engine, a serverless, Lambda-based service that manages a library of CloudFormation templates and Ansible playbooks.

A Configuration Management Database (CMDB) tracks all the configuration details and history of every deployment and update, and lays out its data using a hierarchical directory naming convention. The following diagram shows how the various systems are integrated:

For cloud system management, Pega operations uses a command line version called cuttysh and a graphical version based on the Pega 7 platform, called the Pega Operations Portal. Both tools allow you to browse the CMDB of deployed environments, view configuration settings, and interact with deployed EC2 instances through Run Command.

CLI Walkthrough
Here is a CLI walkthrough for looking into a customer deployment and interacting with instances using Run Command.

Launching the cuttysh tool brings you to the root of the CMDB and a list of the provisioned customers:

% cuttysh
d CUSTA
d CUSTB
d CUSTC
d CUSTD

You interact with the CMDB using standard Linux shell commands, such as cd, ls, cat, and grep. Items prefixed with s are services that have viewable properties. Items prefixed with d are navigable subdirectories in the CMDB hierarchy.

In this example, change directories into customer CUSTB’s portion of the CMDB hierarchy, and then further into a provisioned Pega environment called env1, under the Dev network. The tool displays the artifacts that are provisioned for that environment. These entries map to provisioned CloudFormation templates.

> cd CUSTB
/ROOT/CUSTB/us-east-1 > cd DEV/env1

The ls -l command shows the version of the provisioned resources. These version numbers map back to source control–managed artifacts for the CloudFormation, Ansible, and other components that compose a version of the Pega Cloud.

/ROOT/CUSTB/us-east-1/DEV/env1 > ls -l
s 1.2.5 RDSDatabase 
s 1.2.5 PegaAppTier 
s 7.2.1 Pega7 

Now, use Run Command to interact with the deployed environments. To do this, use the attach command and specify the service with which to interact. In the following example, you attach to the Pega Web Tier. Using the information in the CMDB and instance tags, the CLI finds the corresponding EC2 instances and displays some basic information about them. This deployment has three instances.

/ROOT/CUSTB/us-east-1/DEV/env1 > attach PegaWebTier
 # ID         State  Public Ip    Private Ip  Launch Time
 0 i-0cf0e84 running 52.63.216.42 10.96.15.70 2017-01-16 
 1 i-0043c1d running 53.47.191.22 10.96.15.43 2017-01-16 
 2 i-09b879e running 55.93.118.27 10.96.15.19 2017-01-16 

From here, you can use the run command to invoke Run Command documents. In the following example, you run the docker-ps document against instance 0 (the first one on the list). EC2 executes the command and returns the output to the CLI, which in turn shows it.

/ROOT/CUSTB/us-east-1/DEV/env1 > run 0 docker-ps
. . 
CONTAINER ID IMAGE             CREATED      STATUS        NAMES
2f187cc38c1  pega-7.2         10 weeks ago  Up 8 weeks    pega-web

Using the same command and some of the other documents that have been defined, you can restart a Docker container or even pull back the contents of a file to your local system. When you get a file, Run Command also leaves a copy in an S3 bucket in case you want to pass the link along to a colleague.

/ROOT/CUSTB/us-east-1/DEV/env1 > run 0 docker-restart pega-web
..
pega-web

/ROOT/CUSTB/us-east-1/DEV/env1 > run 0 get-file /var/log/cfn-init-cmd.log
. . . . . 
get-file

Data has been copied locally to: /tmp/get-file/i-0563c9e/data
Data is also available in S3 at: s3://my-bucket/CUSTB/cuttysh/get-file/data

Now, leverage the Run Command ability to do more than one thing at a time. In the following example, you attach to a deployment with three running instances and want to see the uptime for each instance. Using the par (parallel) option for run, the CLI tells Run Command to execute the uptime document on all instances in parallel.

/ROOT/CUSTB/us-east-1/DEV/env1 > run par uptime
 …
Output for: i-006bdc991385c33
 20:39:12 up 15 days, 3:54, 0 users, load average: 0.42, 0.32, 0.30

Output for: i-09390dbff062618
 20:39:12 up 15 days, 3:54, 0 users, load average: 0.08, 0.19, 0.22

Output for: i-08367d0114c94f1
 20:39:12 up 15 days, 3:54, 0 users, load average: 0.36, 0.40, 0.40

Commands are complete.
/ROOT/PEGACLOUD/CUSTB/us-east-1/PROD/prod1 > 

Summary
Run Command improves productivity by giving you faster access to systems and the ability to run operations across a group of instances. Pega Cloud operations has integrated Run Command with other operational tools to provide a clean and secure method for managing systems. This greatly improves operational efficiency, and gives greater control over who can do what in managed deployments. The Pega continual improvement process regularly assesses why operators need access, and turns those operations into new Run Command documents to be added to the library. In fact, their long-term goal is to stop deploying cloud systems with SSH enabled.

If you have any questions or suggestions, please leave a comment for us!

— Ananth and Rich