Contain yourself – StorageOS is coming

The content below is taken from the original (Contain yourself – StorageOS is coming), to continue reading please visit the site. Remember to respect the Author & Copyright.

DockerCon StorageOS is a UK-based startup offering simple and automated block storage to stateless containers, giving them state and the means to run databases and other applications that need enterprise-class storage functionality without the concomitant complexity, rigidity and cost.

It runs as a container in a Linux system and provides plug-ins for other containers to use that give them easy, programmatic access to stateful storage services.

The company was founded by CEO Chris Brandon, CTO Alex Chircop, product management VP Simon Croome, and engineering VP James Spurin. It is Brandon’s fifth startup; earlier ones he has worked with include GreenBytes and Xsigo – both bought by Oracle – where he led the technical teams.

Chircop has been global head of storage platform engineering at Goldman Sachs and head of infrastructure platform engineering at Nomura International; about as strong a traditional enterprise IT background as you could wish for.

Croome led global engineering teams at Fidelity and Nomura, and web development at UBS in London. Spurin was previously the block storage product manager for Goldman Sachs and the technical lead for storage engineering at Nomura. We see lots of experience in enterprise IT and storage here.

These four got together and after a year’s discussion of product technology needs, architecture and design, set up StorageOS in 2015, with private investor support.

What’s it about?

What they want to provide is an enterprise-class storage platform that is simpler, faster, easier, and lower-cost to use than legacy IT storage, which they characterise as slow, complex, costly, rigid and dependent on storage admins. They want to provide automated storage provisioning to containers, which can be instantiated and torn down many thousands of times a day.

Brandon says the company is deliberately not located in Silicon Valley, where venture capitalists expect startups to be, because Silicon Valley is an insular environment. There is lots of innovation in London, England, where several other container-focussed startups exist.

The founders’ thesis is that manually provisioned and managed storage is obviously not practical for containerised and DevOps-type environments. So, Brandon said, “We built a toolset for people to store data in containers.”

It is agnostic about the underlying platform: bare metal, containerised, virtualised server or the cloud. The company says StorageOS, the product, “is an ultra-low entry point, full enterprise functionality storage array that is integrated with VMware, Docker, AWS, and Google Cloud.”

StorageOS is focussed on containers for now because that is where the largest initial opportunity is located. It has four focus areas in the container arena:

  1. Stateful containers for databases and fast database recovery.
  2. Secure cloud mobility and cost-reduction.
  3. Performance acceleration and volume management.
  4. Continuous application integration and delivery.
StorageOS diagram

How it works

The product works like this: it installs as a container under Linux or a containerised OS such as CoreOS. It locates the storage accessible by its host node – direct-attached, network-attached and cloud-attached – and by connected nodes. This is aggregated into a virtual, multi-node pool of block storage. Volumes are then carved out for the accessing containers, thinly provisioned and mounted, and a database can be loaded and started. This is done in single-digit seconds, typically two or three.

Accessing containers use a StorageOS plug-in for Docker or Kubernetes to “see” the StorageOS container, and have their storage provisioning automated, accelerated and simplified.

The back-end storage itself is not accessed unless data needs to be read or written.
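As a rough illustration of that plug-in flow, here is a minimal sketch using the Docker SDK for Python. The driver name and the size option are assumptions for illustration, not confirmed StorageOS parameters.

```python
# Hedged sketch: provision a volume through a block-storage volume plug-in.
# The driver name "storageos" and the driver options are illustrative
# assumptions, not confirmed StorageOS parameters.
import docker

client = docker.from_env()

# Ask the volume plug-in to carve a (thinly provisioned) volume out of the pool.
volume = client.volumes.create(
    name="pg-data",
    driver="storageos",            # assumed plug-in name
    driver_opts={"size": "20"},    # assumed option: size in GB
)

# Mount it into a database container; the plug-in handles attach and mount.
container = client.containers.run(
    "postgres:9.6",
    detach=True,
    volumes={volume.name: {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
print(container.id)
```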

On top of this basic provisioning there are bells and whistles adding enterprise storage-class functionality:

  • Rules engine – policies specifying data placement, protection, etc, which can be modified.
  • Data placement – use different types of media for different kinds of data for optimum (speed, cost) placement.
  • Encryption – safeguard data at rest and in flight.
  • Caching – accelerate data access with DRAM and flash caching.
  • Replication – protect data by moving blocks to remote site.
  • High availability – failover to a second node if the host node fails.
  • Deduplication and compression.
  • Quality of service (QoS).
  • Migration.
  • Clustering.

Brandon says: “There is a lot of power in the rules engine – far beyond what a traditional storage array would provide.”

StorageOS boosts performance by running a database’s storage on the same node as the application and providing local caching.

The QoS comes in two forms. Basic QoS is about not exceeding thresholds for IOPS and throughput while enterprise QoS is more sophisticated, using a fair use scheduler to balance out QoS across different services and remedy the noisy neighbour problem. With the QoS features, Brandon says, admins don’t have to hand-tweak containers.
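The threshold style of basic QoS can be pictured as a simple token-bucket rate limiter. The sketch below is purely illustrative and is not StorageOS code.

```python
# Illustrative token-bucket IOPS limiter, not StorageOS code: each I/O takes a
# token, and tokens refill at the configured IOPS ceiling.
import time

class IopsLimiter:
    def __init__(self, max_iops: int):
        self.max_iops = max_iops
        self.tokens = float(max_iops)
        self.last = time.monotonic()

    def acquire(self) -> None:
        while True:
            now = time.monotonic()
            # Refill tokens in proportion to elapsed time, capped at one second's worth.
            self.tokens = min(self.max_iops,
                              self.tokens + (now - self.last) * self.max_iops)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.max_iops)

limiter = IopsLimiter(max_iops=500)   # basic QoS: never exceed 500 IOPS
# limiter.acquire() would be called before each read or write is issued.
```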

Licensing

This set of features is variably distributed across a free edition of the software and two paid-for editions:

  • Freemium – download at no cost and run on, for example, a laptop, and experiment with the product.
  • Professional Version – adds clustering, high availability, DRAM and flash caching (acceleration), deduplication and compression.
  • Enterprise Edition – the Professional feature set plus replication, encryption, migration and QoS.

Volume-based pricing will also be available.

StorageOS provides cost-reduction for public cloud users. For example, with replication in Amazon you have a compute instance running in the source environment and a second running in the target environment. With StorageOS, data is replicated to the target with no need for any compute instance there, until a fail-over occurs. This saves money.

StorageOS is not limited to use by containers – it is a general, software-defined storage provisioning platform for virtualised servers and the cloud. iSCSI and Fibre Channel support will be added, so customers can treat it as a general software-defined storage product if they wish. The beta product is being released now and here is what it broadly looks like:

StorageOS beta product

Support for customers of the free software version will involve a forum and email, while paid-for customers can buy next-day and 24×7 4-hour support services.

Ecosystem

StorageOS has joined the Linux Foundation and is a board member of the Cloud Native Computing Foundation (CNCF), where it is involved in setting up a Storage SIG and has a Kubernetes plug-in. It has also been accepted as a Docker Alliance Partner.

A future release will add iSCSI and Fibre Channel support.

Chircop thinks: “The container market is on the verge of being enterprise-ready,” there being a vast ecosystem of developers and products pumping huge momentum into the area. OpenStack supports containers. So too does VMware, with Photon. NetApp has a container plug-in and its SolidFire unit is involved with DockerCon, as is IBM.

In the fast-moving container compute environment, traditional storage is too slow, too awkward, too complex and over-expensive. It has to have lower cost and automated provisioning yet, for enterprise container use, it must also have enterprise-class data services. That balance of features is what StorageOS intends to provide.

StorageOS is available now – http://bit.ly/28O2USC. The Professional Version costs less than $30/month. The beta test is this month and full product GA is scheduled for the Autumn. Check out the product at DockerCon in Seattle, June 19-21, stand number E12. ®


What’s DriveScale up to? Mix-‘n’-match server and disk storage, for starters

The content below is taken from the original (What’s DriveScale up to? Mix-‘n’-match server and disk storage, for starters), to continue reading please visit the site. Remember to respect the Author & Copyright.

Backgrounder DriveScale is a startup that emerged from a three-year stealth effort earlier this year with hardware and software to dynamically present externally connected JBODS to servers as if they were local. The idea is to provide composable server-storage combos for changing Hadoop-type distributed workloads.

It is meant to share the characteristics of hyper-scale data centres without involving that degree of scale and DIY activity by enterprises. DriveScale says existing server and storage product designs are too rigid for distributed Hadoop-style workloads, with varying degrees of under- and over-provisioning of server and storage resources.

Servers and storage should be managed as separate resource pools. As the company has sterling credibility based on its exec staff’s combined Sun and Cisco experience, it is worth a look.

Technology

The software composes systems – that is, servers with local (in-rack) disk storage – which can be decomposed when a Big Data Hadoop workload changes, and then composed afresh in a new configuration.

It does this by virtualising storage in a rack of disk drive enclosures and presenting it across 10GbitE links to servers in the rack. There is a top-of-rack Ethernet switch, and the servers and SAS JBOD storage enclosures in the rack have Ethernet ports. DriveScale adapters (virtualisation cards), which use SAS, are interposed between the server and storage Ethernet ports.

The adapters feature an Ethernet controller (2 x 10GbitE) and a SAS (2 x 12Gbit/s 4-lane) controller. A 1U DriveScale appliance chassis holds four of these adapter cards and links to the TOR Ethernet switch. The adapter enclosure has 80Gbit/s of bandwidth. Server access to disk passes over this Ethernet and DriveScale system. Obviously the Ethernet and SAS links contribute some latency to the disk accesses – around 200 microseconds.

The servers run Linux and have a DriveScale software agent. DriveScale has a management facility which is used to compose (configure or allocate) storage to one or more servers, and to configure the software agents.

The agent presents storage that has been composed for a server as local (directly attached) to that server, using the DriveScale server agent and adapters to hide the fact that it is actually connected externally over Ethernet.

There is an overall SaaS management facility. Users control DriveScale with a GUI or via a RESTful API. The JBODs could contain SSDs in the future.
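As a sketch of what scripting such a composition might look like, the snippet below posts to a hypothetical RESTful endpoint with Python’s requests library. The URL, credentials and payload fields are invented for illustration and are not DriveScale’s published API.

```python
# Hypothetical sketch of composing drives to a server over a RESTful API.
# The base URL, endpoint and JSON fields are illustrative assumptions only.
import requests

BASE = "https://drivescale-mgmt.example.com/api/v1"   # hypothetical endpoint
AUTH = ("admin", "secret")                            # hypothetical credentials

# Ask the management facility to bind three drives from the rack JBOD pool
# to a given server, so the agent can present them as local disks.
resp = requests.post(
    f"{BASE}/compose",
    json={"server": "hadoop-node-07", "drive_count": 3, "pool": "rack1-jbod"},
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```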

Founding, founders, and funding

DriveScale received seed funding of $3m when it was founded in 2013 by chief scientist Tom Lyon and CTO Satya Nishtala. Lyon was an early Sun employee, and he and Nishtala worked for Banman there. They were founders of the Nuova spinout which developed Cisco’s UCS server technology. At Sun, Lyon worked on Sparc processor design and SunOS, while Nishtala was involved with Sun storage, UltraSparc workgroup servers and workstation products.

The CEO is Gene Banman and Duane Northcutt, an ex-Sun hardware architect, is VP Engineering. Ex-Sun CEO and co-founder Scott McNealy and Java man James Gosling are its advisers.

The company had a $15m A-round earlier this year, led by Pelion Venture Partners with participation from Nautilus Venture Partners and Foxconn’s Ingrasys. Ingrasys is a wholly owned Foxconn subsidiary, and helped to develop, and manufactures, the DriveScale appliance.

In summary, DriveScale says the time is right for a disaggregated design. Its technology can enable businesses to have a scale-out architecture as seen in web-scale businesses (Amazon, Facebook, Google, etc), giving them far more flexible scaling of server and storage resources. ®


The 10 most powerful supercomputers in the world

The content below is taken from the original (The 10 most powerful supercomputers in the world), to continue reading please visit the site. Remember to respect the Author & Copyright.

Looking like the world’s most important and uncomfortable furniture…

It’s the six-month anniversary of the last list, which means it’s time for a new one. Terrible shelf-life, these supercomputer lists, but that means there’s a whole new hierarchy of unfathomably powerful computing machines ranked by Top500.org for our ooh-ing and aah-ing pleasure. Here’s a look at the top 10.

To read this article in full or to leave a comment, please click here

UK citizens trust EU countries with their data more than the UK

The content below is taken from the original (UK citizens trust EU countries with their data more than the UK), to continue reading please visit the site. Remember to respect the Author & Copyright.

With the countdown to the Brexit vote in its final days, research from Blue Coat has highlighted that British respondents would be more trusting if their data were stored in an EU country as opposed to the UK.

Although the margin is only small, 40% of respondents believe the EU is a safer bet for the storage of data, whereas only 38% chose the UK. Germany was perceived as the most trustworthy state, though this could be seen as unsurprising as the country is generally viewed as having the most stringent data protection laws. France ranked in second place, whereas the UK sat in third.

While the true impact of Brexit will only be known following the vote, the role of the UK in the technology world could be affected by the decision. The research showed a notable favouritism towards storing data in countries which are part of the EU and under the influence of the European Commission’s General Data Protection Regulation. Looking across the Atlantic to the US, trust within the UK is higher than in the rest of Europe, though it could still be considered very low. In the UK, 13% said they would trust the US with their data, whereas this number drops to 3% where France and Germany are concerned.

“The EU regulatory landscape is set to radically change with the introduction of the GDPR legislation and this research highlights the level of distrust in countries outside the EU,” said Robert Arandjelovic, Director of Product Marketing EMEA, Blue Coat Systems. “Respondents prefer to keep their data within the EU, supporting new European data protection legislation.

“More concerning is the fact that almost half of respondents would trust any country to store their data, indicating too many employees simply don’t pay enough attention to where their work data is held. This presents a risk to enterprises if their employees treat where it is being hosted with little interest.”

While the impact of the Brexit vote is entirely theoretical at the moment, leaving the union could spell difficult times for the UK, as EU countries favour those which are in the EU. What is apparent from the statistics is that the US still has substantial work to do to counter the ill effects of the Safe Harbour agreement, which was struck down last October. The survey indicates the replacement policy, the EU-US Privacy Shield, has not met the requirements of EU citizens, as trust in the US is still low.

OpenStack Developer Mailing List Digest May 14 to June 17

The content below is taken from the original (OpenStack Developer Mailing List Digest May 14 to June 17), to continue reading please visit the site. Remember to respect the Author & Copyright.

SuccessBot Says

  • Qiming: Senlin has completed API migration from WADL.
  • Mugsie: Kiall Fixed the gate – development can now continue!!!
  • notmyname: exactly 6 years ago today, Swift was put into production
  • kiall: DNS API reference is live [1].
  • sdague: Nova legacy v2 api code removed [2].
  • HenryG: Last remnant of oslo incubator removed from Neutron [3].
  • dstanek: I was able to perform a roundtrip between keystone and testshib.org using my new SAML2 middleware!
  • Sdague: Nova now defaults to Glance v2 for image operations [4].
  • Ajaeger: First Project Specific Install Guide is published – congrats to the heat team!
  • Jeblair: There is no Jenkins, only Zuul.
  • All

Require A Level Playing Field for OpenStack Projects

  • Thierry Carrez proposes a new requirement [5] for OpenStack “official” projects.
  • An important characteristic of open collaboration grounds is they need to be a level playing field. No specific organization can be given an unfair advantage.
    • Projects that are blessed as “official” project teams need to operate in a fair manner. Otherwise they could be essentially a trojan horse for a given organization.
    • If in a given project, developers from one specific organization benefit from access to specific knowledge or hardware, then the project should be rejected under the “open community” rule.
    • Projects like Cinder provide an interesting grey area, but as long as all drivers are in and there is a fully functional (and popular) open source implementation there is likely no specific organization considered as unfairly benefiting.
  • A Neutron plugin targeting a specific piece of networking hardware would likely give an unfair advantage to developers from the hardware’s manufacturer (having access to hardware for testing and being able to see and make changes to its proprietary source code).
  • Open source projects that don’t meet the open community requirement can still exist in the ecosystem (developed using gerrit and an openstack/* git repository, with gate testing), but as an unofficial project.
  • Full thread

Add Option to Disable Some Strict Response Checking for Interoperability Testing

  • Nova introduced their API micro version change [6]
  • The QA team added strict API schema checking to Tempest to ensure there are no additional properties in Nova API responses [7][8].
    • In the last year, three vendors participating in the OpenStack powered trademark program were impacted by this [9].
  • DefCore working group determines guidelines for the OpenStack powered program.
    • Includes capabilities with associated functional tests from Tempest that must pass.
    • There is a balance of future direction of development with lagging indicators of deployments and user adoption.
  • A member of the working group, Chris Hoge, would like to implement a temporary waiver for the strict API checking requirements.
    • While this was discussed publicly in the developer community and took some time to implement, it still landed quickly from deployers’ perspective and broke several existing deployments overnight.
    • It’s not viable for downstream deployers to use older versions of Tempest that don’t have this strict response checking, due to the TC resolution passed [10] advising DefCore to use Tempest as the single source of capability testing.
  • Proposal:
    • Short term:
      • There will be a blueprint and patch to Tempest that allows configuration of a grey-list of Nova APIs for which strict response checking on additional properties will be disabled (see the illustrative sketch after this list).
      • Using this code will emit a deprecation warning.
      • This will be removed in the 2017.01 guideline.
      • Vendors are required to submit the grey-list of APIs with additional response data that would be published to their marketplace entry.
    • Long term:
      • Vendors will be expected to work with upstream to update the API returning additional data.
      • The waiver would no longer be allowed after the release of the 2017.01 guideline.
  • Former QA PTL Matthew Treinish feels this is a big step backwards.
    • Vendors who have implemented out-of-band extensions or injected additional things into responses believe that by doing so they’re interoperable. The API is not a place for vendor differentiation.
    • As a user of several clouds, random data in the response makes it more difficult to write code against. Which are the vendor-specific responses?
  • If vendors are not given more time in the market, the alternatives are:
    • Having some vendors leave the Powered program, unnecessarily weakening it.
    • Force DefCore to adopt non-upstream testing, either as a fork or an independent test suite.
  • If the new enforcement policies had been applied by adding new tests to Tempest, then DefCore could have added them using its processes over a period of time and downstream deployers might not have had problems.
    • Instead, the behaviour of a bunch of existing tests changed.
  • Tempest master today supports all currently supported stable branches.
    • Tags are made in the git repository when support for a release is added or dropped.
      • Branchless Tempest was originally started back in the Icehouse release and was implemented to enforce that the API is the same across release boundaries.
  • If DefCore wants the lowest common denominator for Kilo, Liberty, and Mitaka there’s a tag for that [11]. For Juno, Kilo, Liberty the tag would be [12].
  • Full thread
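For illustration only (this is not Tempest code, and the API names, schema shape and grey-list are invented), grey-list-driven response checking could in principle look like this:

```python
# Illustrative only, not Tempest code: validate a response against a schema,
# but skip the "no additional properties" rule for grey-listed APIs.
import warnings

GREY_LIST = {"os-hypervisors", "os-services"}   # hypothetical vendor-supplied grey-list

def check_response(api_name: str, schema_props: set, response: dict) -> None:
    extra = set(response) - schema_props
    if not extra:
        return
    if api_name in GREY_LIST:
        warnings.warn(f"{api_name}: extra properties {extra} tolerated (deprecated waiver)")
    else:
        raise AssertionError(f"{api_name}: unexpected properties in response: {extra}")

# A vendor response with an extra field passes only if the API is grey-listed.
check_response("os-hypervisors", {"id", "status"},
               {"id": 1, "status": "up", "vendor_meta": "x"})
```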

There Is No Jenkins, Only Zuul

  • Since the inception of OpenStack, we have used Jenkins to perform testing and artifact building.
    • When we only had two git repositories, we had one Jenkins master and a few slaves. This was easy to maintain.
    • Things have grown significantly with 1,200 git repositories, 8,000 jobs spread across 8 Jenkins masters and 800 dynamic slave nodes.
  • Jenkins job builder [13] was created to create 8,000 jobs from a templated YAML.
  • Zuul [14] was created to drive project automation, directing our testing and running tens of thousands of jobs each day, responding to:
    • Code reviews
    • Stacking potential changes to be tested together.
  • Zuul version 3 has major changes:
    • Easier to run jobs in multi-node environments
    • Easy to manage large number of jobs
    • Job variations
    • Support in-tree job configuration
    • Ability to define jobs using Ansible
  • While version 3 is still in development, it is already capable of running our jobs entirely.
  • As of June 16th, we have turned off our last Jenkins master and all of our automation is being run by Zuul.
    • Jenkins job builder has contributors beyond OpenStack, and will continue to be maintained by them.
  • Full thread

Languages vs. Scope of “OpenStack”

  • Where does OpenStack stop, and where does the wider open source community start? Two options:
    • If OpenStack is purely an “integration engine” to lower-level technologies (e.g. hypervisors, databases, block storage) the scope is limited and Python should be plenty and we don’t need to fragment our community.
    • If OpenStack is “whatever it takes to reach our mission”, then yes we need to add one language to cover lower-level/native optimization.
  • Swift PTL John Dickinson mentions defining the scope of OpenStack projects does not define the languages needed to implement them. The considerations are orthogonal.
    • OpenStack is defined as whatever it takes to fulfill the mission statement.
    • Defining “lower level” is very hard. Since the Nova API is listening on public network interfaces and coordinating with various services in a cluster, is it low-level enough to consider optimizations?
  • Another approach is product-centric: “Lower-level pieces are OpenStack dependencies, rather than OpenStack itself.”
    • Not governed by the TC, and it can use any language and tool deemed necessary.
    • There are a large number of open source projects and libraries that OpenStack needs to fulfill its mission that are not “OpenStack”: Python, MySQL, KVM, Ceph, OpenvSwitch.
  • Do we want to be in the business of building data plane services that will run into Python limitations?
    • Control plane services are very unlikely to ever hit a scaling concern where rewriting in another language is needed.
  • Swift hit limitations in Python first because of the maturity of the project and they are now focused on this kind of optimization.
    • Glance (partially data plane) did hit this limit, and it was mitigated by folks using Ceph and exposing that directly to Nova. So now Glance only cares about location and metadata. Dependencies like Ceph care about the data plane.
  • The resolution for the Go programming language was discussed in previous Technical Committee meetings and was not passed [14]. John Dickinson and others do plan to carry another effort forward for Swift to have an exception for usage of the language.
  • Full thread

 

Reinstall Windows 10 using Refresh Windows Tool from Microsoft

The content below is taken from the original (Reinstall Windows 10 using Refresh Windows Tool from Microsoft), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you wish to reinstall Windows 10 to make it run like new again, you can use the newly released Refresh Windows Tool from Microsoft. This tool works on versions running the Windows 10 Anniversary Update and later.

Refresh Windows Tool

Today, Windows 10 offers you an easy option to reset Windows, which is available in Settings > Update & Security > Recovery > Reset this PC. The built-in Reset option may allow you to keep your files, but will remove all installed programs and restore Windows settings back to defaults.

This newly released standalone Refresh Windows Tool will install a clean copy of the most recent version of Windows 10 on your PC and remove the apps that were installed on it.



When you run this tool, after receiving your UAC confirmation, it will extract some files and you will see a Getting things ready screen. Next you will have to accept the License Terms.

Once you do this, the tool will download the latest copy of Windows 10 from Microsoft servers and carry out a clean install. You cannot use your own ISO which may have been stored by you locally.

You will also be offered the option to:

  1. Keep personal files only
  2. Remove the personal files, where everything will be deleted including files, settings and apps.

Incidentally, this is similar to what the Media Creation Tool does, so until some new functionalities are added to this tool, you may not find much use for it.

You can download the Refresh Windows Tool from Microsoft. I repeat, this will work on your latest Insider Builds, Windows 10 Anniversary Update and later – and it may not work on your current Windows 10 build.



Anand Khanse is the Admin of TheWindowsClub.com, an end-user Windows enthusiast, & a 10-year Microsoft MVP Awardee in Windows for the period 2006-16. Please read the entire post & the comments first, create a System Restore Point before making any changes to your system & be careful about any 3rd-party offers while installing freeware.

ContainerX’s container management platform caters to both Linux and Windows shops

The content below is taken from the original (ContainerX’s container management platform caters to both Linux and Windows shops), to continue reading please visit the site. Remember to respect the Author & Copyright.

When it comes to choosing a container management platform, you’re spoiled for choice these days. If you want an out-of-the-box multi-tenant platform that supports both Linux and Windows, though, you don’t quite have as many options.

ContainerX, which is launching out of beta today, supports both Docker and Windows Containers (still experimental for now, as Microsoft hasn’t released the final version of Windows Server 2016 yet), as well as private and public cloud solutions.

ContainerX CEO and co-founder Kiran Kamity tells me that at least a third of the companies that reached out to his company during the beta were specifically interested in its products because they want to move their legacy .NET applications to a container platform. Another third was mostly interested in the platform’s multi-tenancy support, and the rest were looking for a turn-key container management service.

It’s no surprise Kamity is interested in the Windows market for containers. His first startup, RingCube, brought containers to Windows long before anybody talked about Docker (RingCube was later acquired by Citrix).

Kamity noted that there are lots of table stakes when it comes to container management solutions. Windows support is one way for ContainerX to differentiate itself from the competition. Another is the company’s solid integration with VMware’s tools (partly driven by the fact that some of the team members previously worked for VMware) and ContainerX’s elastic cluster and container pools. While most container management services can obviously scale up and down as needed, ContainerX argues that its implementation provides enterprises with a more resilient service that is also able to more effectively isolate different users from each other.

One of ContainerX’s first service provider clients is Advantage24, which runs several data centers in Tokyo.

“We have evaluated most of the container management platforms out there, over the course of the last eighteen months and, after quite a bit of research, chose ContainerX,” said Terry Warren, President and co-founder of Advantage24. “Their multi-tenancy features such as Container Pools, support for bare-metal platforms as a first-class citizen, coupled with their turnkey user experience, makes them ideal for any enterprise or service provider looking for a complete container management solution.”

Now that ContainerX is out of beta, the service offers three pricing tiers: there is a free tier with support for up to 100 logical cores, as well as a gold plan that starts at $25,000 per year for small and medium enterprises and a high-end plan for large enterprises and service providers that starts at $75,000 and includes support for chargebacks, among other things.

Featured Image: Flickr UNDER A CC BY 2.0 LICENSE

Backup cloud assets on Azure Premium Storage virtual machines

The content below is taken from the original (Backup cloud assets on Azure Premium Storage virtual machines), to continue reading please visit the site. Remember to respect the Author & Copyright.

We are excited to announce the general availability of backup for premium storage virtual machines using Azure Backup to protect enterprise applications such as Oracle DB, Cassandra and SAP running on Azure IaaS. It is available in all regions where premium storage is offered.

Backup can be enabled for both classic as well as ARM (Azure Resource Manager) virtual machines with premium disks. The restore process provides the flexibility to restore the data to either premium or standard storage accounts. If the restored virtual machine needs the same performance characteristics as premium storage, it can be restored to a premium storage account; for all other use cases, it can be restored to standard storage, which is cheaper.

One of the reasons customers deploy premium storage for virtual machines is to satisfy the IOPs requirements of enterprise critical workloads. Backup data is initially copied to the customer storage account as a staging area before it is copied to the backup vault. This is to minimize the impact of IOPS on the production workload while efficiently transferring incremental changes to the backup vault. Once the backup data is copied to the vault, the staging area is cleaned up.

If you are an existing Azure backup customer, start configuring the backup on Azure portal.

Related links and additional content:

HPE shows off a computer intended to emulate the human brain

The content below is taken from the original (HPE shows off a computer intended to emulate the human brain), to continue reading please visit the site. Remember to respect the Author & Copyright.

Intelligent computers that can make decisions like humans may someday be on Hewlett Packard Enterprise’s product roadmap.

The company has been showing off a prototype computer designed to emulate the way the brain makes calculations. It’s based on a new architecture that could define how future computers work.

The brain can be seen as an extremely power-efficient biological computer. Brains take in a lot of data related to sights, sounds and smells, which they have to process in parallel without lagging in terms of computation speed.

HPE’s ultimate goal is to create computer chips that can compute quickly and make decisions based on probabilities and associations, much like how the brain operates. The chips will use learning models and algorithms to deliver approximate results that can be used in decision-making.

To read this article in full or to leave a comment, please click here

Olli is an IBM Watson-powered driverless electric bus

The content below is taken from the original (Olli is an IBM Watson-powered driverless electric bus), to continue reading please visit the site. Remember to respect the Author & Copyright.

You might see a cute, driverless bus roaming the streets of Washington DC starting today. It’s called Olli, and it’s an autonomous electric minibus designed by Local Motors, which you might remember as the company that’s planning to sell 3D-printed cars this year. While the automaker itself designed the 12-seater’s self-driving system, it teamed up with IBM to use Watson’s capabilities to power the EV’s other features. Thanks to Watson, you can tell Olli where you’re heading in natural language ("I’d like to go to [workplace.]") and ask it questions about how the technology works. Best of all, it won’t kick you out even if you keep asking "Are we there yet?"

Olli will be exclusive to DC these next few months, but Miami and Las Vegas will get their own in late 2016. Local Motors is also in talks to test the bus in cities outside the US, including Berlin, Copenhagen and Canberra. It’s unclear if anyone can get the chance to ride one, since these are merely trial runs, but you can ask local authorities if the EV makes its way to your city.

If and when the time comes that driverless public vehicles can legally shuttle passengers, you’ll be able to summon an Olli through an app, just like Uber. And if Local Motors’ plans pan out, a lot of people around the globe will be using that app: Company co-founder John Rogers envisions building hundreds of micro-factories all over the world that can 3D print an Olli within 10 hours and assemble it in one.

Source: IBM, PhysOrg, Local Motors

Keep your sheet music organized with Gvido E Ink reader

The content below is taken from the original (Keep your sheet music organized with Gvido E Ink reader), to continue reading please visit the site. Remember to respect the Author & Copyright.

Sheet music can be difficult to corral if you don’t have some sort of system. Take it from a high school saxophone player with a chronic case of disorganization. That’s where the gorgeous E Ink device known as the Gvido comes in.

Created by Japanese firm Terrada Music, the elegant and slim (650 grams, about half of what your MacBook Air weighs) Gvido comprises two 13.3-inch E Ink displays with 8GB of internal memory and a microSD card slot.

You don’t have to make a big production out of turning pages like you do with sheet music, instead utilizing a touch panel on the device to do so with ease. It’s also compatible with Wacom pens in case you want to do any special notation on the music you’ve got in front of you.

The Gvido’s price doesn’t seem to have been announced yet, but it’s an interesting innovation for musicians and those looking for an alternative to paper. You can see it in action in the promotional video below.

Via: The Verge

Thursdays with Corey Sanders – the new ALIAS feature of AzureCLI

For a guide on adopting Azure for IT leaders, click: http://bit.ly/1YsUzIs. Corey Sanders, Director of Program Management on the Microsoft Azure Compute team, talks about the latest release (version 10.0) of the AzureCLI tool.

Google Fonts’ Updated Website Makes It Easy to Find a Good-Looking Font

The content below is taken from the original (Google Fonts’ Updated Website Makes It Easy to Find a Good-Looking Font), to continue reading please visit the site. Remember to respect the Author & Copyright.

Google has had a collection of free fonts for a while. Recently, the site for these fonts got an update. Now it’s easier than ever to browse through the fonts and preview the ones you need before you download them.

The site shows you preview sentences rendered in the various fonts included in Google’s 800+ font families. You can enter your own text so you can see what your designs look like specifically. This is similar to how font site Dafont works. You can also tweak settings like font size and which font out of the family you want to render the text in. Best of all, the entire font collection is free to use, so the next time you need a font for your project, check it out.
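For example, the stylesheet Google serves for a chosen family can be fetched programmatically. Here is a minimal Python sketch, assuming the long-standing fonts.googleapis.com/css endpoint and the Roboto family as an example.

```python
# Fetch the stylesheet Google Fonts serves for a chosen family; the returned
# CSS contains the @font-face rules a web page would link to.
import requests

resp = requests.get(
    "https://fonts.googleapis.com/css",
    params={"family": "Roboto:400,700"},  # family name and weights (example)
    timeout=10,
)
resp.raise_for_status()
print(resp.text[:300])  # first few @font-face declarations
```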

Google Fonts

Introducing Project Bletchley and elements of blockchain born in the Microsoft Cloud

The content below is taken from the original (Introducing Project Bletchley and elements of blockchain born in the Microsoft Cloud), to continue reading please visit the site. Remember to respect the Author & Copyright.

Since launching Microsoft Azure Blockchain as a Service (BaaS) last November, we’ve been working side-by-side with businesses and partners to understand core industry scenarios, and to develop the technologies and ecosystem that will bring blockchain to enterprises, governments and people successfully.

We’ve learned a lot about essential platform principles, features and capabilities that will enable enterprises to adopt blockchain. To address this, we’re introducing Project Bletchley, which outlines Microsoft’s vision for an open, modular blockchain fabric powered by Azure, and highlights new elements we believe are key in enterprise blockchain architecture.

Project Bletchley addresses common themes we’re hearing from early adopters of blockchain across industries, including:

  • Platform openness is a requirement.
  • Features like identity, key management, privacy, security, operations management and interoperability need to be integrated.
  • Performance, scale, support and stability are crucial.
  • Consortium blockchains, which are members-only, permissioned networks for consortium members to execute contracts, are ideal.

In Project Bletchley, Azure provides the fabric for blockchain, serving as the cloud platform where distributed applications are built and delivered. Microsoft Azure’s availability in 24 regions across the globe, hybrid cloud capabilities, extensive compliance certification portfolio, and enterprise-grade security enable blockchain adoption, especially in highly regulated industries like financial services, healthcare and government.

Azure will be open to a variety of blockchain protocols, supporting simple, Unspent Transaction Output-based protocols (UTXO) like Hyperledger, more sophisticated, Smart Contract-based protocols like Ethereum, and others as developed.

Introduced in Project Bletchley are two new concepts: blockchain middleware and cryptlets.

Blockchain middleware will provide core services functioning in the cloud, like identity and operations management, in addition to data and intelligence services like analytics and machine learning. These technologies will ensure the secure, immutable operation that blockchain provides while, at the same time, delivering the business intelligence and reporting capabilities business leaders and regulators demand. Newly developed middleware will work in tandem with existing Azure services, like Active Directory and Key Vault, and other blockchain ecosystem technologies, to deliver a holistic platform and set of solutions.

Cryptlets, a new building block of blockchain technology, will enable secure interoperation and communication between Microsoft Azure, ecosystem middleware and customer technologies. Cryptlets function when additional information is needed to execute a transaction or contract, such as date and time. They will become a critical component of sophisticated blockchain systems, enabling all technology to work together in a secure, scalable way.
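Conceptually, a cryptlet resembles a trusted oracle that supplies a signed external fact (such as the current date and time) for a contract to verify. The sketch below illustrates only that general pattern; it is not Project Bletchley code, and the shared key stands in for real attestation and key management.

```python
# Illustrative oracle-style pattern only, not Project Bletchley code: an
# external service returns a fact plus a signature the contract can verify.
import hashlib, hmac, json, time

SHARED_KEY = b"demo-key"   # stand-in for real attestation and key management

def cryptlet_utc_now() -> dict:
    """Return the current time together with an HMAC over the payload."""
    payload = {"utc": int(time.time())}
    blob = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(SHARED_KEY, blob, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def contract_accepts(message: dict) -> bool:
    """A contract would recompute the HMAC before trusting the external fact."""
    blob = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["signature"])

msg = cryptlet_utc_now()
assert contract_accepts(msg)
```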

Project Bletchley is a vision for Microsoft to deliver Blockchain as a Service (BaaS) that is open and flexible for all platforms, partners and customers. We’re thrilled to be on this journey with the blockchain community, and are looking forward to helping transform the way we think about and do business today.

Check out the details of Project Bletchley in this whitepaper.

The Car Hacker’s Handbook digs into automotive data security

The content below is taken from the original (The Car Hacker’s Handbook digs into automotive data security), to continue reading please visit the site. Remember to respect the Author & Copyright.

In the coming age of autonomous cars, connected cars, and cars that can communicate with each other, the city’s infrastructure, our phones, and the entire internet of things, data security is going to be paramount. That’s why Craig Smith, who has spent 20 years working in banking and healthcare digital security, wrote The Car Hacker’s Handbook: A Guide for the Penetration Tester. Which is just as intimidating as it sounds.

Smith first published a version of the book in 2014 as a companion to a one-day class on car hacking. He offered it for free online, and it was downloaded 300,000 times in the first four days. There was more interest in this subject than he had realised from teaching one-day classes on it at Virginia Tech and the US Cyber Challenge. And his ISP shut down his website.

When he started the OpenGarages.org online community, Smith figured it would be a bunch of security professionals who showed up. That was not the case. “It was a bunch of mechanics and performance tuners,” he said. “I was the only security person. It was a nice expansion, but it shows there’s a much bigger issue here.” When owners and mechanics are locked out of the data, they’re locked out of how their own cars work in a way people weren’t before vehicles became computers on wheels. And with data being so important to our driving experience, Smith asks, “Who owns the vehicle? After I pay $30,000 or more for a car, do I own it, or does the manufacturer?”

Not that every car owner needs to know how to hack or secure their own vehicle. “The expectation is that the manufacturer has done proper security tests,” Smith said. “But you need some method for third party review.” He brought up the fact that Volkswagen was betting that no one could check the data in its diesel-powered vehicles during emissions tests. “When you have more independent review, whether it’s a mechanic or the owner, things come to light quicker,” Smith believes.

At nearly 300 pages, The Car Hacker’s Handbook covers a lot of potential security risks, and as autonomous systems become more ubiquitous and sophisticated, there could be even more risks. So is Smith worried about the potential for bad guys to take over our cars? “The car has multiple sensors, and they don’t trust each other always,” Smith said. “The design architecture of sensors is hard to hack; it’s hard to fool senses and sensors. Unless I can get to the core, decision-making piece, I would have to fake out every sensor. You’d think they would be easier to hack, but self-driving cars don’t have a trusted space for data the same way that a corporation that keeps its data behind a firewall would.”

The worst-case scenario for Smith isn’t the remote-driving takeover hack demonstrated last summer. “Unless they’re a sociopath, a hacker doesn’t want to drive the car,” he said. “It’s not that useful. The real value is in stealing data. Information is more valuable than physical damage.”

Does this leave us with the choice of never driving again or reverting to a vintage Model T to keep our data safe? “Being a security guy, I’m pessimistic and extra paranoid,” he said. “There’s been a lot of change in the past five years, but [the automotive industry is] an old industry. We’re ahead of malicious activity, but I don’t know how easy it will be to fix legacy systems.” The pessimistic, paranoid security expert leaves us with this ray of hope: “I don’t think we’re in a bad spot.”

Featured Image: Kristen Hall-Geisler

BeagleBone board gets long-overdue Wi-Fi and Bluetooth capabilities

The content below is taken from the original (BeagleBone board gets long-overdue Wi-Fi and Bluetooth capabilities), to continue reading please visit the site. Remember to respect the Author & Copyright.

Before Raspberry Pi rocked the world of makers, boards from BeagleBoard.org were the computers of choice among developers who were looking to create cool gadgets.

One of its boards, BeagleBone, isn’t as popular as it used to be, but it still has a loyal following. Seeed Studios has taken a version of the open-source board and given it a much-needed wireless upgrade, adding Wi-Fi and Bluetooth support.

The BeagleBone Green Wireless is a significant improvement over predecessors: among other things it now allows makers to add wireless capabilities to smart home devices, wearables, health monitors and other gadgets. The upgrade also brings BeagleBone into the Internet of Things era, in which wirelessly interconnected devices are constantly exchanging data.

To read this article in full or to leave a comment, please click here

Technical Committee Highlights June 13, 2016

The content below is taken from the original (Technical Committee Highlights June 13, 2016), to continue reading please visit the site. Remember to respect the Author & Copyright.

It has been a while since our last highlight post so this one is full of updates.

New release tag: cycle-trailing

This is a new addition to the set of tags describing the release models. This tag allows specific projects to do releases after the OpenStack release has been cut. It is useful for projects that need to wait for the “final” OpenStack release to be out. Some examples of these projects are Kolla, TripleO, Ansible, etc.

Reorganizing cross-project work coordination

The cross project team is the reference team when it comes to reviewing cross project specs. This resolution grants the cross project team approval rights on cross-project specs and therefore the ability to merge such specs without the Technical Committee’s intervention. This is a great step forward on the TC’s mission of enabling the community to be as autonomous as possible. This resolution recognizes reviewers of openstack-specs as a team.

Project additions and removals

– Addition of OpenStack Salt: This project brings in SaltStack formulas for installing and operating OpenStack cloud deployments. The main focus of the project is to set up development, testing, and production OpenStack deployments in an easy, scalable and predictable way.

– Addition of OpenStack Vitrage: This project aims to organize, analyze and visualize OpenStack alarms & events, yield insights regarding the root cause of problems and deduce their existence before they are directly detected.

– Addition of OpenStack Watcher: Watcher’s goal is to provide a flexible and scalable resource optimization service for multi-tenant OpenStack-based clouds.

– Removal of OpenStack Cue: Cue’s project team activity has dropped below what is expected of an official OpenStack project team. It was therefore removed from the list of official projects.

Recommendation on location of tests for DefCore verification

A new resolution has been merged in which it is recommended that the DefCore team use Tempest’s repository as the central repository for verification tests. During the summit, two different options were discussed as possible recommendations:

  • Use the tests within the Tempest git repository by themselves.
  • Add to those Tempest tests by allowing projects to host tests in their tree using Tempest’s plugin feature.

By recommending use of the Tempest repository, the community will favour centralization of these tests, which will improve collaboration on DefCore matters and also improve consistency across the tests used for API verification.

Mission Statements Updates

On one hand, Magnum has narrowed its mission statement after discussing it at the Austin summit. The team has decided Magnum should focus on managing container orchestration engines (COEs) rather than also managing the container lifecycle. On the other hand, Kuryr has expanded its mission statement to also include management of storage abstractions for containers.

Expanding technology choices in OpenStack projects

On the face of it, the request sounds simple. “Can we use golang in OpenStack?” was asked of the TC in this governance patch review.

It’s a yes or no question. It sets us up for black and white definitions, even though the cascading ramifications are many for either answer.

Yes means less expertise sharing between projects as well as some isolation. Our hope is that certain technology decisions are made in the best interest of our community and The OpenStack way. We would trust projects to have a plan for all the operators and users who are affected by a technology choice. A Yes means trusting all our projects (over fifty-five currently) not to lose time by chasing the latest or doing useless rewrites, and believing that freedom of technology choice is more important than sharing common knowledge and expertise. For some, it means we are evolving and innovating as technologists.

A No vote here means that if you want to develop with another language, you should form your new language community outside of the OpenStack one. Even with a No vote, projects can still use our development infrastructure such as mailing lists, Gerrit, Zuul, and so on. A No vote on a language choice means that the team’s deliverable is simply outside of the Technical Committee’s governance oversight, and not handled by our cross-project teams such as release, docs and quality. For the good of your user base, you should consider all the technology ramifications that a Yes vote would entail, but your team doesn’t need to work under TC oversight.

What about getting from No to Yes? It could mean that we would like you to remain in the OpenStack community, but to keep the parts that were not built with the entire community in mind as plug-ins.

We’ve discussed additional grey area answers. Here is the spectrum:

  • Yes, without limits.
  • Yes, but within limits outlined in our governance.
  • No, remember that it’s perfectly fine to have external dependencies written in other languages.
  • No, projects that don’t work within our technical standards don’t leverage the shared resources OpenStack offers so they can work outside of OpenStack.

We have dismissed the outer edge descriptions for Yes and No. We continued to discuss the inner Yes and inner No descriptions this week, with none of the options being really satisfactory. After lots of discussion, we came around to a No answer, abandoning the patch, while seeking input for getting to yes within limits.

Basically, our answer is about focusing on what we have in common, what defines us. It is in line with the big-tent approach of defining an “OpenStack project” as one developed by a coherent community using the OpenStack Way. It’s about sharing more things. We tolerate and even embrace difference where it is needed, but that doesn’t mean that the project has to live within the tent. It can be a friendly neighbour rather than being inside and breaking the tent into smaller sub-tents.

This All-In-One System Rescue Toolkit Has Just the Right Tools to Troubleshoot Your PC

The content below is taken from the original (This All-In-One System Rescue Toolkit Has Just the Right Tools to Troubleshoot Your PC), to continue reading please visit the site. Remember to respect the Author & Copyright.

There’s no shortage of system rescue and repair discs you can download and keep handy for when your PC gives you problems, but this one, from reader Paul, is streamlined, simple, and has only a few effective tools on it (and no bloat!)

Paul, who’s a field technician (I remember those days!) sent in his rescue disc to us and explained that he’d just made it available to the public on his web site. Over at his site, he explains why he bothered in a world where there are so many discs to choose from:

There are already so many utility discs out there, I know. Many of the other discs I have used in the past tried to do way more than I wanted, with sometimes 10-20 different applications and utilities that all do the same thing. This overwhelming level of choice does not easily support the faster pace required of field service work. I also wanted to have both my bootable repair environment and Windows utilities in the same package to reduce the number of discs I had to maintain and keep on hand.

This disc started as a bunch of batch files that allowed me to work on multiple computers throughout my day and replicate the same level of quality results on each computer without having to maintain checklists on paper. Even with checklists, I would sometimes skip or miss steps that meant a variety of results when fixing PCs. Thus, an automated utility was born! I have since been using this disc in my own line of work for 99% of the problems I encounter in the field.

Just because the disc is streamlined doesn’t mean there’s a shortage of tools on it, though. You’ll have to head over to his site for the full list (and to support the project!) and for download links to burn your own or make your own bootable USB drive with all of the utilities on it. There are a few standouts though—the disc is a live CD, so you can boot to it and run things like Clonezilla, GParted, NT Password Reset, PhotoRec, Terminal, and some other utilities (even a game of solitaire you can play while waiting for other stuff to finish!)

The Windows Autorun portion of the disc contains a ton of Windows diagnostics for testing, troubleshooting, and repairing bad Windows installs or partition issues, tools to extract or re-add product keys, network testing tools and speed tests, and even some security and malware removal tools. All in all, if you have a Windows PC—especially one you built yourself—or you’re in charge of maintaining others, the disc is worth a look.

All in One – System Rescue Toolkit | Paul Bryand Vreeland

“IoT Security” is an Empty Buzzword

The content below is taken from the original (“IoT Security” is an Empty Buzzword), to continue reading please visit the site. Remember to respect the Author & Copyright.

As buzzwords go, the “Internet of Things” is pretty clever, and at the same time pretty loathsome, and both for the same reason. “IoT” can mean basically anything, so it’s a big-tent, inclusive trend. Every company, from Mattel to Fiat Chrysler, needs an IoT business strategy these days. But at the same time, “IoT” is vacuous — a name that applies to everything fails to clarify anything.

That’s a problem because “IoT Security” is everywhere in the news these days. Above and beyond the buzz, there are some truly good-hearted security professionals who are making valiant attempts to prevent what they see as a repeat of 1990s PC security fiascos. And I applaud them.

But I’m going to claim that a one-size-fits-all “IoT Security” policy is doomed to failure. OK, that’s a straw-man argument; any one-size-fits-all security policy is bound for the scrap heap. More seriously, I think that the term “IoT” is doing more harm than good by lumping entirely different devices and different connection modes together, and creating an implicit suggestion that they can all be treated similarly. “Internet of Things Security” is a thing, but the problem is that it’s everything, and that means that it’s useful for nothing.

What’s wrong with the phrase “Internet of Things” from a security perspective? Only two words: “Internet” and “Things”.

The Things

Which Things constitute the “Internet of Things” is an easy starting point. If you ask Mattel what Things they mean, they’ll tell you Hello Barbie. For Samsung, it’s your fridge. If you ask Ford, they’ll tell you it’s a car. I was at an embedded electronics trade fair a couple years ago, and there was a company that designs factory-floor robotics telling me about their IoT strategy. It gets weirder: yoga mats, toasters, tampons, sniper rifles, and aircraft.

One of these things is not like the other…

If you can think up a thing that hasn’t yet been Internetted, test yourself by posting in the comments. Or better yet, seek VC funding first and then work on a prototype second. (And then start your security design after it’s in the customers’ hands.)

The point is that it’s very hard to have a decent discussion of security and the IoT without getting specific about the Things. You do not need or want to take the same precautions with a talking child’s toy that you do with a Jeep or a Tesla. A malicious firmware upgrade for the former threatens your child’s privacy (no laughing matter), but a malicious upgrade for the latter threatens your life.

If there’s a cost-benefit analysis being done when connecting a Thing to the Internet, it should be made entirely differently depending on the Thing. Some categorization of the Things is necessary. Off the top of my head, I’ve seen “Industrial IoT” used as a term — in comparison to consumer IoT. That’s progress I suppose.

For security purposes, however, I think it’s reasonable to think about the Things by their capabilities and what potential hazards they bring. Devices that “merely” record data can have privacy implications, while devices that act on the physical world can hurt people physically. The autonomy of the device is important too. Something that’s always on, like an Internetted refrigerator, has more potential for abuse than something that’s used infrequently, like a quadcopter hooked up to the Internet: plant a Trojan on my fridge and you can snoop on my passwords all day long, while the quad’s batteries die after just 15 minutes online.

This is just a start. A serious, security-relevant taxonomy of Things is not a task for a Hackaday writer. My point is, however, that calling both toy and real cars “Things” says nothing. Pacemaker-Things aren’t comparable to toothbrush-Things.
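
This isn’t a serious taxonomy, just a sketch of the shape such a classification might take: even a crude, capability-based split goes a long way toward separating the pacemaker from the toothbrush. A minimal, purely illustrative Python sketch (the categories and example Things are invented for the sake of the sketch):

# Purely illustrative: classify Things by capability and exposure,
# not by marketing category. Categories and examples are invented.
from dataclasses import dataclass

@dataclass
class Thing:
    name: str
    records_data: bool      # privacy hazard
    acts_on_world: bool     # physical-safety hazard
    always_online: bool     # larger window for abuse

def risk_class(t: Thing) -> str:
    if t.acts_on_world:
        return "safety-critical"          # cars, pacemakers, factory robots
    if t.records_data and t.always_online:
        return "privacy-critical"         # fridges, voice-enabled toys
    return "low-exposure"                 # quadcopter online 15 minutes a day

for thing in [Thing("connected fridge", True, False, True),
              Thing("talking doll",     True, False, True),
              Thing("car with WiFi",    True, True,  True),
              Thing("quadcopter",       True, False, False)]:
    print(thing.name, "->", risk_class(thing))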

The Internet

When you say you’ve got a lightbulb “on the Internet”, what do you really mean? Is it firewalled? If so, what ports are open? Which servers does it connect to? Are the communications encrypted? And if so, do you control the passwords, or are they built-in? Are they the same for every Thing? Just saying “we’ll put it on the Internet” is meaningless. The particulars of the connection are extremely important.
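
As a trivial illustration of the kind of question you should at least be able to answer about a Thing on your own network, here is a minimal sketch that checks which of a few common service ports a device accepts connections on (the address is hypothetical; point it only at devices you own):

# Minimal sketch: see which common ports a device on your LAN accepts
# connections on. The address is hypothetical; test only your own gear.
import socket

DEVICE = "192.168.1.50"   # hypothetical lightbulb address
PORTS = {22: "ssh", 23: "telnet", 80: "http", 443: "https",
         1883: "mqtt", 8883: "mqtt-tls"}

for port, name in PORTS.items():
    try:
        with socket.create_connection((DEVICE, port), timeout=1):
            print(f"{port}/{name}: open")
    except OSError:
        print(f"{port}/{name}: closed or filtered")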

This is where the security community has spent most of its efforts so far, and there’s great work being done. The Open Web Application Security Project (OWASP) has an IoT sub-project, and their checklist for testing the security of an IoT device is great, if perhaps not exhaustive.

When you try to secure your PC, or run a server on the Internet, you have a great advantage. You probably know which ports you need to open up in your firewall, which services you need to run, and/or what destinations you’ll be talking to. Even the cheapest home routers do a fairly decent job of protecting the computers behind them, because people’s needs are pretty predictable. I don’t think my father-in-law has ever used any port other than 80. This is not the case with IoT devices.

[Dan Miessler] gave a talk at DEFCON (YouTube) last summer introducing the OWASP IoT Project and detailing IoT devices’ attack surfaces. If you’re at all interested, it’s worth a watch. Anyone who thinks a Thing on the Internet is a single device talking to a single server is in for a surprise.

The most important point from [Dan]’s talk, for the armchair security types like me at least, is that an IoT device is an ecosystem, and that means that the bad folks have many more surfaces to attack than you might think, or wish for.
Your device communicates with the server, sure, but that’s just the start. The Thing probably also has a web-based configuration interface. Whatever service it uses (in “the cloud”) has its application interface, and probably also configuration pages. Most devices also use third-party APIs for convenience, meaning your data is going to a few more destinations than you might think, often over non-standard ports. The Thing’s firmware is going to need to be updated, so that’s another very powerful point of attack. Your Thing probably also talks to an app on your cell phone. (There’s more, but you get the picture.)

If some of these sources are trusted by the Thing, you’d better hope that they are all individually secure and properly authenticated. If any part of the ecosystem is under-secure, that’s what the exploiters are going to exploit. The more Things are interwoven with other Things, or services, or apps, the more avenues there are to break all the Things.
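
One way to make the ecosystem point concrete is simply to write the inventory down and ask, for each channel, whether it is encrypted and authenticated. The entries below are generic examples, not the architecture of any real product:

# Illustrative attack-surface inventory for a single consumer Thing.
# Every entry is a generic example, not a real product's design.
ATTACK_SURFACE = [
    {"channel": "device -> vendor cloud API",  "encrypted": True,  "authenticated": True},
    {"channel": "local web config interface",  "encrypted": False, "authenticated": True},
    {"channel": "firmware update download",    "encrypted": False, "authenticated": False},
    {"channel": "third-party analytics API",   "encrypted": True,  "authenticated": True},
    {"channel": "phone app -> device (LAN)",   "encrypted": False, "authenticated": False},
]

weak = [s["channel"] for s in ATTACK_SURFACE
        if not (s["encrypted"] and s["authenticated"])]
print("Channels an attacker will try first:", weak)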

None of this is impossible to secure — there are best practices for each step of the way. Indeed, that’s what good-minded folks like OWASP and “I Am The Cavalry” and others are trying to do. In fact, one of their greatest contributions is pointing out that the attack surface is much larger than it would be for a bank’s server, for instance. But by defining the problem so generally, they risk turning the task of securing your fitness watch into the task of securing “the Internet”. Of course, it may also be that bad in reality.

The Internet of Things: The Whatchamacallit of Thingamajiggies

(See what I mean? It’s even hard to parody “Things” because it’s already so imprecise.)

“Internet of Things” doesn’t describe much that’s useful from a security standpoint. On one hand, it includes widely varying classes of devices with correspondingly varying needs for security. On the other hand, it fails to describe or delimit the extent of the network that needs securing. Saying “Internet of Things security” adds nothing to just saying “security” except to warn the listener that they might need to be worrying about a very large class of problems, and end-users who don’t think they’re using a computer.

Maybe the term is useful elsewhere (it certainly is useful for marketing or getting money out of investors). But when I hear it in a security context, especially coming from the press or from the government, my eyes roll and my stomach turns just a little bit — maybe I should be stoked that they’re paying attention at all, but I pretty much know that they’re not going to be saying anything concrete. Figuring out what descriptive and useful terms replace “IoT” is left as an exercise to the reader, but it’s one that could have profound and focusing effects on the field.

Death to “the Internet of Things”! Long live “network-connected critical health-monitoring devices” and “cars with WiFi connections”.


7 ways to make your IoT-connected Raspberry Pi smarter

The content below is taken from the original (7 ways to make your IoT-connected Raspberry Pi smarter), to continue reading please visit the site. Remember to respect the Author & Copyright.

Raspberry Pi becomes more powerful

With the explosion of interest in building Internet of Things (IoT) devices based on boards like the Raspberry Pi comes an explosion of tools that make creating RPi-based IoT systems not only easier, but also more powerful. I’ve hand-picked some of the latest, greatest and coolest tools that will make your Raspberry Pi IoT project killer. (And if you’re contemplating your operating systems choices, make sure you check out my Ultimate Guide to Raspberry Pi Operating Systems, Part 1, Part 2, and Part 3 — 58 choices in total!)


How to turn the cloud into a competitive advantage with a scorecard approach to migration

The content below is taken from the original (How to turn the cloud into a competitive advantage with a scorecard approach to migration), to continue reading please visit the site. Remember to respect the Author & Copyright.

We have seen enterprise cloud evolve a lot in recent years, going from specific workloads running in the cloud to businesses looking at a cloud-first approach for many applications and processes. This rise was also reflected in the Verizon State of the Market: Enterprise Cloud 2016 report, which found that 84% of enterprises have seen their use of cloud increase in the past year, with 87% of these now using cloud for at least one mission-critical workload. Furthermore, 69% of businesses say that cloud has enabled them to significantly reengineer one or more business processes, giving a clear sign of the fundamental impact that cloud is having on the way we do business.

These findings give a clear sign that whilst companies will continue to leverage the cloud for niche applications, enterprises are now looking to put more business-centric applications in the cloud. This approach requires designing cloud-based applications that specifically fit each workload — taking into account geography, security, networking, service management expectations and the ability to quickly deploy the solution to meet rapidly changing business requirements. As a result, a core focus for 2016 will be the creation of individual cloud spaces that correspond to the individual needs of a given workload.

The key to cloud is collaboration

This focused alignment has led to the role of enterprise IT evolving to that of a cloud broker that must collaborate with lines of business to ensure overall success of the organisation. By using an actionable, scorecard approach for aligning cloud solutions with the needs of each workload, enterprises can make more informed assessments on how best to support applications in the cloud.

Three practical steps are as follows:

  1. Consult the Business and Assess User Requirements: IT professionals should build a relationship with their organisation’s lines of business to accurately identify critical application requirements and create the right cloud solution. Some questions to ask include:
  • What are the barriers to a successful application migration?
  • How important is the application’s availability, and what is the cost of downtime?
  • What regulations do the application and its data need to comply with?
  • How often will IT need to upgrade the application to maintain competitive advantage?
  2. Score Applications and Build a Risk Profile: The careful assessment of an application’s technical requirements can mean the difference between a successful cloud migration and a failed one. A checklist helps guide IT departments away from major pitfalls (a minimal scoring sketch follows these steps):
  • Determine the load on the network
  • Factor in the time needed to prepare the application
  • Carefully consider the costs of moving

In addition to assessing the technical requirements, IT professionals must evaluate the application’s risk profile. Using data discovery tools to look at the data flow is instrumental in detecting breaches and mitigating any impact.

  3. Match Requirements to the Right Cloud Service Model: Choosing the best cloud model for enterprise IT requires a thorough comprehension of technical specifications and workload requirements. The following are key considerations to help IT directors partner with their business unit colleagues to define enterprise needs and determine the right cloud model:
  • Does the application’s risk profile allow it to run on shared infrastructure?
  • What proportion of the application and its data is currently based on your premises, and how much is based with a provider?
  • How much of the management of the cloud can you take on?
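
As a rough illustration of the scoring idea from step 2, here is a minimal sketch. The criteria, weights and example applications are invented for illustration; a real scorecard would reflect your own workloads and risk appetite:

# Minimal sketch of the scorecard idea: score each application against a
# few weighted criteria and flag the ones that need more work before
# migrating. Criteria, weights and scores are invented for illustration.
CRITERIA = {                   # weight reflects how heavily each factor counts
    "downtime_cost":      0.3,
    "regulatory_burden":  0.3,
    "network_load":       0.2,
    "refactoring_effort": 0.2,
}

APPLICATIONS = {               # 1 (low risk) to 5 (high risk) per criterion
    "internal wiki":    {"downtime_cost": 1, "regulatory_burden": 1,
                         "network_load": 2, "refactoring_effort": 1},
    "order processing": {"downtime_cost": 5, "regulatory_burden": 4,
                         "network_load": 3, "refactoring_effort": 4},
}

def migration_risk(scores):
    return sum(CRITERIA[c] * scores[c] for c in CRITERIA)

for app, scores in APPLICATIONS.items():
    risk = migration_risk(scores)
    verdict = "migrate early" if risk < 2.5 else "needs a detailed risk profile first"
    print(f"{app}: risk {risk:.1f} -> {verdict}")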

Cloud is empowering IT professionals to play a greater role in shaping business results. Working in the right cloud environment allows for operational efficiency, increased performance, stringent security measures and robust network connectivity.

What’s on the horizon for cloud?

In the coming months and years, we will see an increased focus on the fundamental technology elements that enable the Internet of Things: networking, cloud computing, security and infrastructure. Networking and cloud computing are at the heart of IoT, comprising half of the key ingredients that make IoT possible (security and infrastructure are the other two). This is not surprising considering IoT needs reliable, flexible network connections (both wireless and wireline) to move all the collected data and information from devices back to a central processing hub, without the need for human intervention. Similarly, cloud computing provides the flexibility, scale and security to host applications and store data.

Going forward, success will not be measured by merely moving to the cloud. Success will be measured by combining favourable financials and user impact with enhanced collaboration and information sharing across a business’ entire ecosystem. Those IT departments that embrace the cloud through the creation and implementation of a comprehensive strategy — that includes strong and measurable metrics and a strong focus on managing business outcomes — will be the ones we talk about as pioneers in the years to come.

Written by Gavan Egan, Managing Director of Cloud Services at Verizon Enterprise Solutions

The Raspberry Pi Infinity+ Is A Fully Functional Huge Raspberry Pi

The content below is taken from the original (The Raspberry Pi Infinity+ Is A Fully Functional Huge Raspberry Pi), to continue reading please visit the site. Remember to respect the Author & Copyright.

It wasn’t an easy weekend for the rest of the world’s hackers and makers, being the weekend of the Bay Area Maker Faire. Open your social media accounts and it seemed most of your acquaintances were there having a great time, while the rest were doing the same at the Dayton Hamvention. Dreary televised sports just didn’t make up for it.

MCM Electronics had the Maker Faire booth next to that of the Raspberry Pi Foundation, and since they needed both a project to show off and a statement item to draw in the crowds, they came up with the idea of a 10x scale reproduction of a Raspberry Pi above the booth. And since it was Maker Faire this was no mere model; instead it was a fully functional Raspberry Pi with working LEDs and GPIO pins.

The project started with a nearly faithful (We see no Wi-Fi antenna!) reproduction of a Raspberry Pi 3 in Adobe Illustrator. The circuit board was a piece of MDF topped with a layer of foam board, with paths milled out for the wiring and for the real Pi that powers the model, hidden under the fake processor. The LEDs were wired into place, then the Illustrator graphics were printed onto vinyl, which was wrapped onto the board, leaving a very two-dimensional Pi.

The integrated circuits and connectors, except for the GPIO pins, were made using clever joinery with more foam board, then wrapped in more printed vinyl and attached to the PCB. A Pi camera was concealed above the Broadcom logo on the processor model to take timelapse pictures of the event. This left one more component to complete: the GPIO pins, which had to be functional and connected to the pins of the real Pi concealed in the model. These were made from aluminium rods, which were connected to a bundle of wires with some soldering trickery before being wired to the Pi via the screw terminals on a Pi EZ-Connect HAT from Alchemy Power.
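
Since the oversized header is wired straight through to the GPIO pins of the real Pi hidden inside, ordinary GPIO code should drive it. A minimal blink sketch using the standard RPi.GPIO library (the pin choice is arbitrary):

# Minimal blink sketch: the giant pins are wired to the real Pi's GPIO
# header, so ordinary GPIO code drives them. Pin choice is arbitrary.
import time
import RPi.GPIO as GPIO

LED_PIN = 17                      # BCM numbering; pick any free GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)
try:
    while True:
        GPIO.output(LED_PIN, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()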

Is the challenge now on for a range of compatible super-HATs to mate with this new GPIO connector standard?

We previously covered the 2012 Maker Faire exhibit that inspired this huge Pi. The Arduino Grande was, as you might well guess, a huge (6x scale) fully functional Arduino. In fact, the world seems rather short of working huge-scale models of single-board computers, though we have featured one or two working small-scale computer models.

Thanks [Michael K Castor] for sharing his post with us.


The enterprise cloud’s missing piece: Autosizing

The content below is taken from the original (The enterprise cloud’s missing piece: Autosizing), to continue reading please visit the site. Remember to respect the Author & Copyright.

Have you moved into a public cloud lately? The first step is to choose the size of your machine instance from a list of standard configurations, picking one with enough vCPUs and memory. Of course, cloud providers also offer custom machine instances, so you can pick the exact right amount of vCPUs and memory.

But whether it’s a standard or a custom machine instance, enterprises simply guess at the correct size, using on-premises systems as a guide. It’s a logical approach, but it’s not realistic: you rarely run the same workloads on the same server types in the cloud. Moreover, most applications will undergo some refactoring before they end up in the cloud. It’s apples and oranges.

As a result, many enterprises overestimate the resources they need, so they waste money. Some underestimate the resources they need and, thus, suffer performance and stability problems.

Cloud providers will tell you that their standard machine instances let cloud users select the best configurations for their workloads. Clearly, that’s not true. What the public cloud providers should do is build mechanisms that automatically configure the machine for the exact right amount of resources for the workload at hand: autosizing. If a platform is running a workload, it should be able to automatically profile that workload and configure that machine for the workload’s exact needs.

Yes, cloud providers already offer autoscaling and autoprovisioning, and that’s great. But they don’t address machine sizing.

The cloud providers should be able to offer autosizing of machine instances, with a little work. We already have infrastructure as code, where the applications themselves dynamically configure the resources they need. The same concept should be applied to machine instances, so users don’t have to guess. After all, they’re not the cloud infrastructure experts — the providers are.
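
To make the autosizing idea concrete, here is a minimal sketch of what such a step might look like: profile the workload’s observed peaks, then pick the smallest standard instance that covers them with some headroom. The instance catalogue and the numbers are entirely hypothetical:

# Hypothetical sketch of autosizing: pick the smallest machine type whose
# vCPU and memory cover the workload's observed peaks plus headroom.
# The catalogue below is invented; real providers publish their own.
CATALOG = [                      # (name, vCPUs, memory in GB)
    ("small",   2,   8),
    ("medium",  4,  16),
    ("large",   8,  32),
    ("xlarge", 16,  64),
]

def autosize(peak_vcpus: float, peak_mem_gb: float, headroom: float = 1.2):
    need_cpu = peak_vcpus * headroom
    need_mem = peak_mem_gb * headroom
    for name, vcpus, mem in CATALOG:        # catalogue ordered small to large
        if vcpus >= need_cpu and mem >= need_mem:
            return name
    return "no standard size fits; use a custom machine instance"

# Example: a workload profiled at 3.1 vCPUs and 11 GB at peak
print(autosize(3.1, 11))   # -> "medium"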

If customers ask, maybe it will happen.