List of CMD or Command Prompt keyboard shortcuts in Windows 10

If you use the Command Line frequently, then here is a list of CMD or Command Prompt keyboard shortcuts in Windows 10, that will help you work quicker.

Command Prompt keyboard shortcuts

Alt + selection key: Begin selection in block mode

Arrow keys: Move the cursor in the direction specified

Page Up: Move the cursor by one page up

Page Down: Move the cursor by one page down

Ctrl + Home (Mark mode): Move the cursor to the beginning of the buffer

Ctrl + End (Mark mode): Move the cursor to the end of the buffer

Ctrl + Up arrow: Move up one line in the output history

Ctrl + Down arrow: Move down one line in the output history

Ctrl + Home (History navigation): If the command line is empty, move the viewport to the top of the buffer. Otherwise, delete all the characters to the left of the cursor in the command line.

Ctrl + End (History navigation): If the command line is empty, move the viewport to the command line. Otherwise, delete all the characters to the right of the cursor in the command line.

If you are looking for more tips to work better with CMD, these Command Prompt tips & tricks will help you get started.



Azure continues to be the best place for Software as a Service

A complete platform for SaaS developers

More and more software developers are building SaaS applications—cloud applications that serve a growing base of end customers. To efficiently and cost-effectively deliver this experience, developers need a customizable application platform, data isolation without the overhead, global distribution of data and content to end users, integrated identity and access, and the option to easily embed business intelligence. Azure offers a unique set of fully managed Platform as a Service (PaaS) offerings that deliver these foundational elements, including Azure App Service, Azure Service Fabric, Azure SQL Database, Azure CDN, Azure Active Directory, and Power BI Embedded. It is the only application development platform that delivers a comprehensive and integrated suite of fully managed services, and it has been recognized as a Leader in Gartner’s Magic Quadrant for Enterprise Application Platform as a Service, Worldwide for the third consecutive year.

Today, we are excited to announce two more investments that further enrich this experience for SaaS developers: general availability of SQL Database elastic pools and a partnership with Akamai for Azure CDN. These investments add to the momentum shared at //Build 2016 and together these services give our SaaS customers even more reasons to transform their solutions by leveraging Azure as their development platform.

Azure SQL Database elastic pools

Prior to SQL Database elastic pools, developers were forced to make tradeoffs between database isolation and DevOps efficiency. Now, with the general availability of SQL Database elastic pools, in addition to the intelligent capabilities built into the service, developers can manage anywhere from a few databases to thousands as one while still maintaining data isolation. Elastic pools are an ideal solution for multitenant environments: each tenant is assigned a database, and each database in the elastic pool gets computing resources only as needed – eliminating the complexity of developing custom application code or over-provisioning and managing individual databases to isolate data. Elastic pools include auto-scaling database resources, intelligent management of the database environment with insights and recommendations, and a broad performance and price spectrum to meet various needs.

Elastic Pools
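
To make the tenant-per-database pattern concrete, here is a minimal sketch (not GEP's code; the server, driver and tenant names are hypothetical) of how a multitenant application might route each request to its tenant's isolated database, while the elastic pool itself handles resource sharing across those databases:

    import pyodbc

    # Hypothetical mapping of tenant IDs to databases that all live in the
    # same elastic pool on one Azure SQL logical server.
    TENANT_DATABASES = {
        "contoso": "tenant-contoso-db",
        "fabrikam": "tenant-fabrikam-db",
    }

    SERVER = "myserver.database.windows.net"  # assumed logical server name

    def connection_for_tenant(tenant_id, user, password):
        # Each tenant gets its own database, so data stays isolated even though
        # compute resources are shared across the pool.
        database = TENANT_DATABASES[tenant_id]
        conn_str = (
            "DRIVER={ODBC Driver 17 for SQL Server};"
            f"SERVER={SERVER};DATABASE={database};UID={user};PWD={password}"
        )
        return pyodbc.connect(conn_str)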

Since its preview last year, many SaaS developers have adopted pools in their applications and are benefiting from the transition to elastic pools. One customer already benefiting from SQL Database elastic pools is GEP, the technology provider behind SMART by GEP, a cloud-based procurement and supply chain solution.

“The PaaS technologies in Azure help us focus on our core product development without worrying about infrastructure. We have redesigned all our applications to run on PaaS services running in Azure. Currently we are able to quickly bring our code to market and deliver to customers across the globe.” – Dhananjay Nagalkar, vice president of technology, GEP. “We’ve migrated more than 800 of our databases to elastic pools. They are grouped into a mixture of standard and premium elastic pools, which allows us to offer tiered performance and pricing options to our customers. Since the migration, we’ve closed two datacenters in San Jose, CA and Newark, NJ, and we’re proud to say GEP is now a datacenter-free company. SQL Database brings us huge cost savings in the long run; in the 2016 financial year alone, the adoption of elastic pools will save us a quarter of a million dollars.”

Azure Content Delivery Network from Akamai

With the general availability of Azure CDN from Akamai, Azure CDN is now a multi-CDN offering with services from Akamai and Verizon, enabling customers to choose the right CDN for their needs while streamlining support and service with Azure. CDNs can improve the speed, performance and reliability of solutions – a foundational requirement of SaaS applications. The stakes for today’s businesses and content delivery are high. All users—whether they’re online for business, consumer, or entertainment purposes—expect uniformly fast performance and richer media content on any device. In fact, 79 percent of users who have trouble with website performance say they won’t return to the site to buy again (KISSmetrics). With this release, Microsoft is offering customers flexibility and global coverage with the availability of Azure CDN from Akamai, a leader in CDN services for media, software and cloud security solutions, enabling customers to manage large media workloads in an efficient and secure way.

Since our preview in the fall, we’ve been working with a number of leading companies in the broadcasting, production and media space, including LG, TVN, MEKmedia, TVB and several others. MEKmedia, a leading technology partner for smart TV apps, relies on quick, stable, secure and reliable delivery. Matthias Moritz, CEO of MEKmedia, remarked:

“Azure Media Services and Azure CDN from Akamai offer a scalable, secure, and cost-effective solution for our media workflows, enabling us to deliver great experiences to our customers.”

Transform SaaS apps with Azure Services

In addition to the data isolation and data and content distribution provided by SQL Database and CDN, additional benefits can be unlocked when leveraging additional Azure PaaS services for SaaS applications.

Azure App Service and Service Fabric

Combined with SQL Database elastic pools, App Service delivers a fully managed, end-to-end app experience, something SaaS developers need to maintain sensitive margins. App Service is a one-of-a-kind solution that brings together the tools customers need for building enterprise-ready apps around a common development, management and billing model. Customers can choose from a rich selection of app templates, API services and a unified set of enterprise capabilities including web and mobile backend services, turnkey connectivity to SaaS and enterprise systems, and workflow-based creation of business processes. Azure App Service frees developers to focus on delivering great business value instead of worrying about repetitive tasks such as stitching disparate data sources together and dealing with infrastructure management and operational overhead. This unified approach lets our customers take full advantage of the service while meeting their concerns about security, reliability, and scalability.

For customers looking to build new highly scalable multi-tenant applications, Service Fabric is a mature, feature-rich microservices application platform with built-in support for lifecycle management, stateless and stateful services, performance at scale, 24×7 availability, and cost efficiency.

Service Fabric has been in production use at Microsoft for more than five years, powering a range of Microsoft’s PaaS and SaaS offerings including SQL Database, DocumentDB, Intune, Cortana and Skype for Business. In the largest of these, Service Fabric manages hundreds of thousands of stateful and stateless microservices across hundreds of servers. Now we’ve taken this same technology and released Service Fabric as a service on Azure.

Azure Active Directory

For SaaS applications that require seamless federated identity and access, Azure Active Directory provides identity and access management capabilities by combining directory services, advanced identity governance, a rich standards-based platform for developers, and application access management. With Azure Active Directory, developers can enable single sign-on to any SaaS app developed on Azure. Azure Active Directory hosts almost 9.5 million directories from organizations all over the world and 600 million user accounts that generate 1.3 billion authentications every day.
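
As a rough illustration of the developer experience (using the current MSAL library for Python, which this post does not itself name, and placeholder tenant and app registration values), acquiring an Azure AD token for a service-to-service call looks roughly like this:

    import msal

    # Placeholder values from an Azure AD app registration.
    AUTHORITY = "https://login.microsoftonline.com/<tenant-id>"
    CLIENT_ID = "<application-client-id>"
    CLIENT_SECRET = "<client-secret>"

    app = msal.ConfidentialClientApplication(
        CLIENT_ID, authority=AUTHORITY, client_credential=CLIENT_SECRET
    )

    # App-only token; the Microsoft Graph scope is just an example resource.
    result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
    if "access_token" in result:
        token = result["access_token"]  # send as a Bearer token on API calls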

Power BI Embedded

Finally, for developers looking to transform the experience of their SaaS application, Microsoft recently introduced Power BI Embedded. Power BI Embedded allows application developers to embed stunning, fully interactive reports into customer-facing apps without the time and expense of having to build controls from the ground up. This service helps the end user of an application seamlessly get contextual analytics within the app. Application developers can choose from a broad range of modern data visualizations out of the box, or easily build and use custom visualizations to meet the application’s unique functional and branding needs. Power BI Embedded offers consistent data visualization experiences on any device – desktop or mobile.

Highspot, a SaaS vendor offering a sales enablement platform, is one early adopter of Power BI Embedded.

“Using Microsoft Power BI Embedded, we were able to enhance our existing analytics abilities significantly. We easily added interactive Power BI reports into the existing Highspot sales enablement platform. Power BI Embedded reports gave us rich out-of-the-box visuals, sitting side-by-side with Highspot’s built-in reports, providing sales and marketing teams with a unique 360-degree perspective on the effectiveness of their sales enablement initiatives.” – Robert Wahbe, CEO, Highspot

In summary

We’re excited about the general availability of SQL Database elastic pools and CDN from Akamai as they add even more value and choice to Microsoft Azure’s portfolio of services that help software developers transform their application development. By leveraging any of the Azure PaaS offerings, SaaS developers are free to focus on unlocking business value without the overhead associated with traditional approaches. Together these services give SaaS customers even more reasons to transform their business with Azure.

Learn more about these unique SaaS-optimized services.

How to get your ASP.NET app up on Google Cloud the easy way

Don’t let anyone tell you that Google Cloud Platform doesn’t support a wide range of platforms and programming languages. We kicked things off with Python and Java on Google App Engine, then PHP and Go. Now, we support the .NET Framework on Google Compute Engine.

Google recently published a .NET client library for services like Google Cloud Datastore and Windows virtual machines running on Compute Engine. With those pieces in place, it’s now possible to run an ASP.NET application directly on Cloud Platform.

To get you up and running fast, we published two new tutorials that show you how to build and deploy ASP.NET applications to Cloud Platform.

The Hello World tutorial shows you how to deploy an ASP.NET application to Compute Engine.

The Bookshelf tutorial shows you how to build an ASP.NET MVC application that uses a variety of Cloud Platform services to make your application reliable, scalable and easy to maintain. First, it shows you how to store structured data with .NET. Do you love SQL? Use Entity Framework to store structured data in Cloud SQL. Tired of connection strings and running ALTER TABLE statements? Use Cloud Datastore to store structured data. The tutorial also shows you how to store binary data and run background worker tasks.

Give the tutorials a try, and please share your feedback! And don’t think we’re done yet – this is just the beginning. Among many efforts, we’re hand-coding open source libraries so that calling Google APIs feels familiar to .NET programmers. Stay tuned for more on running ASP.NET applications on Google Cloud Platform.

Free tool aims to make it easier to find vulns in open source code

DevOps outfit SourceClear has released a free tool for finding vulnerabilities in open-source code.

SourceClear Open is touted as a means for developers to identify known and emerging security threats beyond those in public and government databases.

“Developers are being held more accountable for security and demanding tools that help them with that responsibility,” according to SourceClear. “But traditional security products are insufficient, and the recent closure of the Open Source Vulnerability Database (OSVDB) and the well-documented struggles of the CVE and its naming process have underscored the limitations of public and government-backed software vulnerability databases.”

SourceClear Open is based on SourceClear’s commercial products and delivered as a cloud-based service. The technology is said to track thousands of threat sources and analyse millions of open-source library releases.

The new tool is designed to allow developers to identify what open-source libraries they are using, what vulnerabilities exist, which vulnerabilities actually matter, and what needs to be done to fix them. SourceClear Open integrates with GitHub and Jenkins and supports languages such as Java, Ruby, Python and JavaScript that development teams often rely on.

SourceClear’s chief exec (and OWASP founder) Mark Curphey explains the technology and the thinking behind it in a blog post entitled Free Security for Open-Source Code – SourceClear Open is Now Live, here. ®

Machine Learning, Recommendation Systems, and Data Analysis at Cloud Academy

In today’s guest post, Alex Casalboni and Giacomo Marinangeli of Cloud Academy discuss the design and development of their new Inspire system.


Jeff;


Our Challenge
Mixing technology and content has been our mission at Cloud Academy since the very early days. We are builders and we love technology, but we also know content is king. Serving our members with the best content and creating smart technology to automate it is what kept us up at night for a long time.

Companies are always fighting for people’s time and attention, and at Cloud Academy we face those same challenges. Our goal is to empower people and help them learn new cloud skills every month, but we kept asking ourselves: “How much content is enough? How can we understand our customers’ goals and help them select the best learning paths?”

With this vision in mind, about six months ago we created a project called Inspire, which focuses on machine learning, recommendation systems and data analysis. Inspire solves our problem on two fronts. First, we see an incredible opportunity in improving the way we serve our content to our customers. It will allow us to provide better suggestions and create dedicated learning paths based on an individual’s skills, objectives and industry. Second, Inspire represented an incredible opportunity to improve our operations. We manage content that requires constant updates across multiple platforms with a continuously growing library of new technologies.

For instance, getting a notification to train on a new EC2 scenario that you’re using in your project can really make a difference in the way you learn new skills. By collecting data across our entire product, such as when you watch a video or when you’re completing an AWS quiz, we can gather that information to feed Inspire. Day by day, it keeps personalising your experience through different channels inside our product. The end result is a unique learning experience that will follow you throughout your entire journey and enable a customized continuous training approach based on your skills, job and goals.

Inspire: Powered by AWS
Inspire is heavily based on machine learning and AI technologies, enabled by our internal team of data scientists and engineers. Technically, this involves several machine learning models, which are trained on the huge amount of collected data. Once the Inspire models are fully trained, they need to be deployed in order to serve new predictions, at scale.

Here the challenge has been designing, deploying and managing a multi-model architecture, capable of storing our datasets, automatically training, updating and A/B testing our machine learning models, and ultimately offering a user-friendly and uniform interface to our website and mobile apps (available for iPhone and Android).

From the very beginning, we decided to focus on high availability and scalability. With this in mind, we designed an (almost) serverless architecture based on AWS Lambda. Every machine learning model we build is trained offline and then deployed as an independent Lambda function.

Given the current maximum execution time of 5 minutes, we still run the training phase on a separate EC2 Spot instance, which reads the dataset from our data warehouse (hosted on Amazon RDS), but we are looking forward to migrating this step to a Lambda function as well.

We are using Amazon API Gateway to manage RESTful resources and API credentials, by mapping each resource to a specific Lambda function.

The overall architecture is logically represented in the diagram below:

Both our website and mobile app can invoke Inspire with simple HTTPS calls through API Gateway. Each Lambda function logically represents a single model and aims at solving a specific problem. In more detail, each Lambda function loads its configuration by downloading the corresponding machine learning model (i.e., a serialized representation of it) from Amazon S3.
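
A stripped-down sketch of such a prediction function is shown below; the bucket and key names are invented, and it assumes a pickled, scikit-learn-style model, which may differ from what the Cloud Academy team actually deploys:

    import pickle
    import boto3

    s3 = boto3.client("s3")

    # Runs once per container (cold start): fetch the serialized model from S3.
    MODEL_BUCKET = "inspire-models"          # hypothetical bucket
    MODEL_KEY = "recommender/latest.pkl"     # hypothetical object key
    LOCAL_PATH = "/tmp/model.pkl"

    s3.download_file(MODEL_BUCKET, MODEL_KEY, LOCAL_PATH)
    with open(LOCAL_PATH, "rb") as f:
        model = pickle.load(f)

    def lambda_handler(event, context):
        # API Gateway maps the request body onto the event object.
        features = event.get("features", [])
        prediction = model.predict([features])[0]
        return {"prediction": float(prediction)}  # assumes a numeric prediction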

Behind the scenes, and without any impact on scalability or availability, an EC2 instance takes care of periodically updating these S3 objects, as outcome of the offline training phase.

Moreover, we want to A/B test and optimize our machine learning models: this is transparently handled in the Lambda function itself by means of SixPack, an open-source A/B testing framework which uses Redis.

Data Collection Pipeline
As far as data collection is concerned, we use Segment.com as our data hub: with a single API call, it allows us to log events into multiple external integrations, such as Google Analytics, Mixpanel, etc. We also developed our own custom integration (via webhook) in order to persistently store the same data in our AWS-powered data warehouse, based on Amazon RDS.

Every event we send to Segment.com is forwarded to a Lambda function – passing through API Gateway – which takes care of storing real-time data into an SQS queue. We use this queue as a temporary buffer in order to avoid scalability and persistency problems, even during downtime or scheduled maintenance. The Lambda function also handles the authenticity of the received data thanks to a signature, uniquely provided by Segment.com.
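
The function behind that webhook can be very small. The sketch below shows the general shape; the signature header, secret handling and environment variable names are assumptions rather than Segment.com's documented scheme:

    import hashlib
    import hmac
    import json
    import os

    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = os.environ["EVENT_QUEUE_URL"]      # assumed configuration
    SHARED_SECRET = os.environ["WEBHOOK_SECRET"]   # assumed shared secret

    def lambda_handler(event, context):
        # Assumes API Gateway proxy integration, which passes body and headers through.
        body = event["body"]
        signature = event["headers"].get("x-signature", "")

        # Reject events whose signature does not match the shared secret.
        expected = hmac.new(SHARED_SECRET.encode(), body.encode(), hashlib.sha1).hexdigest()
        if not hmac.compare_digest(signature, expected):
            return {"statusCode": 403, "body": "invalid signature"}

        # Buffer the raw event in SQS for the downstream workers.
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=body)
        return {"statusCode": 200, "body": json.dumps({"queued": True})}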

Once raw data has been written onto the SQS queue, an elastic fleet of EC2 instances reads each individual event – hence removing it from the queue without conflicts – and writes it into our RDS data warehouse, after performing the required data transformations.
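
The consumer side of that buffer can be a simple long-polling loop, sketched here with a placeholder queue URL and a caller-supplied function standing in for the transformation and insert logic:

    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/inspire-events"  # placeholder

    def drain_queue(write_to_warehouse):
        # Read events in small batches, load them into the warehouse, then delete them.
        while True:
            resp = sqs.receive_message(
                QueueUrl=QUEUE_URL,
                MaxNumberOfMessages=10,
                WaitTimeSeconds=20,  # long polling keeps the loop cheap when the queue is idle
            )
            for msg in resp.get("Messages", []):
                write_to_warehouse(msg["Body"])  # perform transformations and the RDS insert here
                # Deleting only after a successful write gives at-least-once delivery.
                sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])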

The serverless architecture we have chosen drastically reduces the costs and problems of our internal operations, besides providing high availability and scalability by default.

Our Lambda functions have a pretty constant average response time – even during load peaks – and the SQS temporary buffer makes sure we have a fairly unlimited time and storage tolerance before any data gets lost.

At the same time, our machine learning models won’t need to scale up in a vertical or distributed fashion since Lambda takes care of horizontal scaling. Currently, they have an incredibly low average response time of 1ms (or less).

We consider Inspire an enabler for everything we do from a product and content perspective, both for our customers and our operations. We’ve worked to make this the core of our technology, so that its contributions can quickly be adapted and integrated by everyone internally. In the near future, it will be able to independently make decisions for our content team while focusing on our customers’ needs. At the end of the day, Inspire really answers our team’s doubts about which content we should prioritize, what works better and exactly how much of it we need. Our ultimate goal is to improve our customers’ learning experience by building real intelligence that makes Cloud Academy smarter.

Join our Webinar
If you would like to learn more about Inspire, please join our April 27th webinar – How we Use AWS for Machine Learning and Data Collection.

Alex Casalboni, Senior Software Engineer, Cloud Academy
Giacomo Marinangeli, CTO, Cloud Academy

PS – Cloud Academy is hiring – check out our open positions!

Migrating to the cloud: A practical, how-to guide

For many companies, the cloud’s economies of scale, flexibility and predictable payment structures are becoming too attractive to ignore. Looking to avoid costly capital outlays for new servers and the high overhead of managing on-premises architecture, many companies have started moving to the cloud as a cost effective option to develop, deploy and manage their IT portfolio.

To be sure, the benefits of cloud computing go well beyond economies of scale and efficiency. Consider that the vast number of servers that help to drive down costs are also on tap to provide virtually inexhaustible levels of compute power, which can redefine the possibilities for virtually every aspect of your business.

But regardless of a company’s size, migrating to the cloud is certainly no small task. When done right, the process will cause a company to reconsider its culture, its processes, individual roles and governance—not to mention engineering.

Download the free Enterprise Cloud Strategy e-book for a road map to navigate your way.

Enterprise Cloud Strategy

For the past several years, Barry Briggs and I have been on the front lines in helping companies, including Microsoft, navigate these challenges of migrating to the cloud. We’ve seen firsthand how companies are using the cloud’s potential to transform and reinvent themselves.

For example, the sales team for Minneapolis-based 3M Parking Systems needed better insight into thousands of new technology installations of which the company had recently taken ownership following its acquisition of parking, tolling, and automatic license plate reader businesses.

In just two days’ time, 3M created a tracking solution that connects multiple types of mobile devices, thousands of machines and data sources, and a cloud platform (using Xamarin Studio, Visual Studio and Azure Mobile Services). Now the 3M sales team can immediately see where equipment is installed, allowing them to work more autonomously and productively while out in the field.

Another example is work done with a London-based financial services firm, Aviva, that wanted to create a first-of-its-kind pricing model that would provide personalized prices, reducing insurance premiums for appropriate customers. Historically, this would have required installing black boxes in vehicles to collect and transmit telemetry data back to the company’s data center, which also would have required increased storage and compute capacity.

A solution like this would not have penciled out, but with the help of a handful of Microsoft tools and technologies, Aviva was able to design, develop and release the Aviva Drive app in just over a year’s time. The result was a pricing model that gave customers as much as a 20 percent discount on their premiums, and provided Aviva with a significant competitive advantage.

What 3M and Aviva (and many others) have since discovered is the shifting balance between maintenance and innovation: The automation of many day-to-day responsibilities, made possible by the technologies underpinning their cloud computing platform, has freed up IT to devote more time toward creating and administering applications and services that will move the bar for the business.

Based on these findings, and those of many colleagues, Barry and I have written this e-book, Enterprise Cloud Strategy. What you’ll find is an in-depth guide to help you start your own migration, providing practical suggestions for getting started experimenting, assembling a team to drive the process and how to make the most of game-changing technologies such as advanced analytics and machine learning.

Happy moving!

Read James Staten’s Azure blog post on how the Enterprise Cloud Strategy e-book can help you prioritize for a successful hybrid cloud strategy.

Join me on Wednesday, May 11 at 10:00 am PST for the Roadmap to Build your Enterprise Cloud Strategy webinar.

Azure Web Apps Gallery available only in new Azure portal

The Web App Marketplace allows you to quickly deploy dynamic blogs, CMS platforms, e-commerce sites, and more, with ready-to-use Azure Web Apps and templates, including hundreds of popular open source solutions by Microsoft, partners, and the community.

Azure users can create an open source solution by clicking New -> Web -> From Gallery in the old Azure Management portal, or by using the Web Marketplace in the new Azure portal.

In an effort to improve the web apps user experience, we will no longer support the creation of Gallery applications from the old Azure Management portal, starting in June 2016.

When you click New -> Web -> From Gallery in the old Azure Management portal, you will be prompted to use the new Azure portal instead. You can always access the Azure portal Web Marketplace by clicking here.

Please share your feedback on how we can improve the Web Apps Marketplace, and any new apps you would like to see in the Azure Marketplace, on UserVoice.

Hybrid cloud: How you can take advantage of the best of both worlds

It’s a fact: the hybrid cloud has emerged as a “dominant deployment model.” Indeed, the appeal of hybrid cloud among IT professionals is now “universal.”

Why?

Cloud is here to stay, but CIOs know that the transition to cloud computing won’t happen overnight. Responsible CIOs and IT managers will want to ensure that their applications and data in the cloud are secure, that migrations will generate cost savings, and above all that business operations will continue uninterrupted during the migration. Your application users should not know or care whether their application is hosted locally or in the cloud.

To achieve this, you should look at the cloud as a model – not a place – where, with thoughtful planning, you have the power to ensure the right balance of agility and control for each application. This means you’ll need an action plan for application deployment across a mixed, hybrid environment – a plan that ensures application usage is seamless for users and that you, as an IT administrator, retain control (as covered in my last post).

A well-designed hybrid cloud provides your users with seamless, more secure access to applications – no matter where those applications are.

Getting started with your hybrid cloud

Enterprise Applications

How do you accomplish that? To start with, you need a more secure, high-speed interconnect between your data center and the applications you host in the cloud. Microsoft provides several options, including virtual private network (VPN) solutions as well as a dedicated-line solution (Microsoft ExpressRoute). With ExpressRoute, your data center is linked to Azure via a private, low-latency connection that bypasses the public internet.

Both of these technologies enable IT to set up their DNS addressing so that applications in the cloud continue to appear as part of your local IT data center.

What about identity? You’ll want your users to access applications without having to re-enter credentials – of course. Single sign-on (SSO), a capability provided by Azure Active Directory, is the final piece in your virtual data center. AAD allows you to synchronize identities with your on-premises Active Directory, so your users log on to the (virtual) network once and are transparently provided access to corporate applications regardless of where they are hosted.

Even before you begin migrating applications, you can take advantage of the hybrid cloud. A cloud-based marketing application can easily and securely send leads back to an on-premises database – while fully taking advantage of the scale, mobile access, and global reach of the cloud.

Think as well about using the agility that public cloud offers for hybrid management with Azure’s integrated Operations Management Suite, including inexpensive data center-to-cloud backup and cloud-based disaster recovery. You can also take advantage of cloud services to quickly connect your applications to external commerce systems via the industry-standard X.12 Electronic Data Interchange protocol and others.

Extending your hybrid cloud to the future

Emerging technologies provide new and exciting capabilities to the hybrid cloud. Azure Stack, currently in Technical Preview, brings many of the capabilities of the public cloud to your data center. Enterprise IT will then be able to adopt a “write-once, deploy-anywhere” approach to their applications, selecting the public, private, or hosted cloud that makes sense for each application or service based on business rules and technical requirements.

In addition, new application packaging technologies – called containers – make it possible to easily burst from the on-premises data center by adding new instances in the public cloud when additional capacity is necessary.

The best of both worlds

Microsoft deeply believes in the importance of hybrid cloud, and in fact, as a hybrid cloud user itself, utilizes all of the technologies and approaches we’ve mentioned above in our own IT environment. With a sound hybrid cloud strategy, you can take advantage of all the exciting cloud technologies now available while preserving your investment in your data centers – and you can migrate applications and data at your own pace.

Interested in hybrid cloud? Check out Enterprise Cloud Strategy, by former Microsoft IT CTO and technology thought leader Barry Briggs and Eduardo Kassner, the executive in charge of Microsoft’s field Cloud Solutions Architects. It’s free!

Four stops on your journey to the cloud: Experiment, migrate, transform, repeat

There’s a period of anticipation at the start of any journey when you’re researching your intended destination to find the right spots to stay, places to eat and things to do. For organizations migrating to the cloud, that phase is known in the industry as experimentation.

The experimentation phase is important because cloud computing is an unknown for many businesses. As I explained in my last post, developing an enterprise cloud strategy isn’t simply about finding a more affordable way to manage IT. It’s about finding ways to get more out of many facets of your business. Experimentation is useful in helping choose where to get started on the road to the cloud, and envisioning your ultimate destination.

When Microsoft was moving to the cloud, we did a little “tire-kicking” of our own. We picked a project that was non-essential to the company — a cloud-based version of an app that we built every year for our month-long auction to raise money for charities. It was a great opportunity to see the scalability of the application over time as the end of the auction drew near and usage of the app increased.

Concurrent with engineers experimenting with how to build, test and deploy apps in the cloud, business and IT departments need to envision how they can help their company leap ahead through services or applications that are broader in scope, that create agility, and that take advantage of cloud services such as machine learning, big data and streaming analytics. This exercise helps to crystallize where moving to the cloud could take the company. It can also help mentally prepare for the next phase: migration.

Migration is when the bulk of the IT portfolio is moved to the cloud. It’s also during migration that technical staff, operations, the executive team, business sponsors, security professionals, regulatory compliance staff, legal and HR must all cooperate and collaborate.

Almost simultaneous with the migration, the transformation process will begin. Some of your apps will move to the cloud as virtual machines, leaving them more or less intact. In other cases, you might opt for a PaaS deployment model, in which you build a new application from the ground up to take better advantage of capabilities such as data replication, scalability and cloud services like Microsoft Azure Active Directory, which provides robust identity management.

Enterprise Cloud Strategy

It won’t be long before you’re back for more.

One thing that we learned very early in Microsoft’s own migration to the cloud is the value of a culture of continuous experimentation. In the cloud era, success requires that businesses experiment, fail fast and learn fast. The lessons learned from your failures and successes will help your company gain greater value and create disruptive innovation.

Read more on moving to the cloud and transforming your business in Enterprise Cloud Strategy, an e-book that I co-published with Barry Briggs. You’ll find a wealth of knowledge from Microsoft and some of its customers, all intended to help you succeed in your migration to the cloud.

Consider it the travel guide for your journey to the cloud, without the fancy pictures.

Join me on Wednesday, May 11 at 10:00 am PST for the Roadmap to Build your Enterprise Cloud Strategy webinar.

3 options for applications that don’t fit the cloud

Not every application and data set is right for private or public cloud computing. Indeed, as much as 50 percent of today’s workloads are not a good fit for the cloud.

However, all is not lost. You have alternatives that can drive as much value as cloud computing from legacy workloads. As we continue to move the low-hanging fruit to the cloud, we need to find new and more efficient homes for the misfits. Here are three options.

1. Find a SaaS alternative

More than 2,000 SaaS companies provide everything from HR automation to automotive garage management applications. If your legacy application is not right for the cloud, consider a SaaS replacement that not only fits but exceeds expectations.

2. Use a managed service provider

If the application and data can’t move to the cloud, managed services providers (MSPs) may provide a better home, hosted on someone else’s hardware in someone else’s data center. Many of these MSPs are tightly integrated with public clouds, such as Amazon Web Services, and can provide a happy, cost-effective home for your workload, plus have those workloads work and play well with workloads in the public clouds.

3. Consider refactoring

Although refactoring means recoding, which means money and time, it sometimes makes sense to redo an application so that it’s cloud-compatible.

Do You Live in The Countryside? You Might Have to Ask the Government to Give You Broadband

And you could actually be waiting years for it ever to come to your doorstep.

How to Reuse Waste Heat from Data Centers Intelligently

Data centers worldwide are energy transformation devices. They draw in raw electric power on one side, spin a few electrons around, spit out a bit of useful work, and then shed more than 98 percent of the electricity as not-so-useful low-grade heat energy. They are almost the opposite of hydroelectric dams and wind turbines, which transform kinetic energy of moving fluids into clean, cheap, highly transportable electricity to be consumed tens or hundreds of miles away.

Image: Energetic Consulting

But maybe data centers don’t have to be the complete opposite of generation facilities. Energy transformation is not inherently a bad thing. Cradle-to-Cradle author and thought leader William McDonough teaches companies how to think differently, so that process waste isn’t just reduced, but actively reused. This same thinking can be applied to data center design so that heat-creating operations like data centers might be paired with heat-consuming operations like district energy systems, creating a closed-loop system that has no waste.

It’s not a new idea for data centers. There are dozens of examples around the globe of data centers cooperating with businesses in the area to turn waste heat into great heat. Lots of people know about IBM in Switzerland reusing data center heat to warm a local swimming pool. In Finland, data centers by Yandex and Academica share heat with local residents, replacing the heat energy used by 500-1000 homes with data center energy that would have been vented to the atmosphere. There are heat-reuse data centers in Canada, England, even the US. Cloud computing giant Amazon has gotten great visibility from reuse of a nearby data center’s heat at the biosphere project in downtown Seattle.

Rendering of an Amazon campus currently under construction in Seattle’s Denny Triangle neighborhood

Crank Up the Temperature

There are two big issues with data center waste heat reuse: the relatively low temperatures involved and the difficulty of transporting heat. Many of the reuse applications to date have used the low-grade server exhaust heat in an application physically adjacent to the data center, such as a greenhouse or swimming pool in the building next door. This is reasonable given the relatively low temperatures of data center return air, usually between 28°C and 35°C (80-95°F), and the difficulty in moving heat around. Moving heat energy frequently requires insulated ducting or plumbing instead of cheap, convenient electrical cables. Trenching and installation to run a hot water pipe from a data center to a heat user may cost as much as $600 per linear foot. Just the piping to share heat with a facility one-quarter mile away might add $750,000 or more to a data center construction project. There’s currently not much that can be done to reduce this cost.

To address the low-temperature issue, some data center operators have started using heat pumps to increase the temperature of waste heat, making the thermal energy much more valuable, and marketable. Waste heat coming out of heat pumps at temperatures in the range of 55°C to 70°C (130-160°F) can be transferred to a liquid medium for easier transport and can be used in district heating, commercial laundry, industrial process heat, and many more applications. There are even High Temperature (HT) and Very High Temperature (VHT) heat pumps capable of moving low-grade data center heat up to 140°C.

Image: Energetic Consulting

The heat pumps appropriate for this type of work are highly efficient, with a Coefficient of Performance (COP) of 3.0 to 6.0, and the energy used by the heat pumps gets added to the stream of energy moving to the heat user, as shown in the diagram above. If a data center is using heat pumps with a COP of 5.0, running on electricity that costs $0.10 per kWh, the energy can be moved up to higher temperatures for as little as $0.0083 per kWh.

Waste heat could be a source of income for the data center. New York’s Con Edison produces steam heat at $0.07 per kWh (€0.06 per kWh), and there have been examples of heat-and-power systems selling waste heat to district heating systems for €0.1-€0.3 per kWh. For a 1.2MW data center that sells all of its waste heat, that could translate into more than $350,000 (€300,000) per year. That may be as much as 14% of the annual gross rental income from a data center that size, with very high profit margins.

Closing the Loop

There’s also the possibility of combining data centers with power plants for increased efficiency and reuse of waste heat – not just in the CHP-data center sense described by Christian Mueller in this publication in February, or in a purpose-built complex like the one The Data Centers LLC proposed in Delaware. Building data centers in close proximity to existing power plants could be beneficial in several ways. In the US, transmission losses of 8-10% are typical across the grid. Co-locating data centers right next to power plants would eliminate this loss and the capital expense of transporting large amounts of power.

Second, power plants make “dumb” electrons, general-purpose packets of energy that need to be processed by data centers to turn into “smart” electrons that are part of someone’s Facebook update screen, a weather model graphic output, or digital music streaming across the internet. Why transport the dumb electrons all the way to the data center to be converted?

Third, a co-located data center could transfer heat pump-boosted thermal energy back to the power plant for use in the feed water heater or low-pressure turbine stages, creating a neat closed-loop system.

There are important carbon footprint benefits in addition to the financial perks. Using the US national average of 1.23 lb CO2 per kWh, a 1.2MW data center could save nearly 6,000 metric tons of CO2 per year by recycling the waste heat.
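
That figure is easy to sanity-check with a back-of-the-envelope calculation, assuming the facility draws its full 1.2MW around the clock and all of the waste heat is recycled:

    # Rough check of the CO2 savings quoted above.
    power_kw = 1200                 # 1.2MW load, assumed to run continuously
    hours_per_year = 8760
    co2_lb_per_kwh = 1.23           # US national average cited in the article
    lb_per_metric_ton = 2204.62

    annual_kwh = power_kw * hours_per_year                      # ~10.5 million kWh
    co2_tons = annual_kwh * co2_lb_per_kwh / lb_per_metric_ton
    print(round(co2_tons))          # ~5,865 metric tons, i.e. "nearly 6,000"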

These applications are starting to appear in small and large projects around the world. The key is to find an application that needs waste heat year round, use efficient, high-temperature heat pumps, and find a way to actively convert this wasted resource into revenue and carbon savings.

About the author: Mark Monroe is president at Energetic Consulting. His past endeavors include executive director of The Green Grid and CTO of DLB Associates. His 30 years’ experience in the IT industry includes data center design and operations, software development, professional services, sales, program management, and outsourcing management. He works on sustainability advisory boards with the University of Colorado and local Colorado governments, and is a Six Sigma Master Black Belt.

RightScale Cuts Own Cloud Costs by Switching to Docker

Less than two months ago, the engineering team behind the cloud management platform RightScale kicked off a project to rethink the entire infrastructure its services run on. They decided to package as much of the platform’s backend as possible in Docker containers, the method of deploying software whose popularity has spiked over the last couple of years, becoming one of the most talked-about technology shifts in IT.

It took the team seven weeks to complete most of the project, and Tom Miller, RightScale’s VP of engineering, declared it a success in a blog post Tuesday, saying they achieved both of the goals they had set: reduced cost and accelerated development.

There are two Dockers. There is the Docker container, which is a standard, open source way to package a piece of software in a filesystem with everything that piece of software needs to run: code, runtime, system tools, system libraries, etc. There is also Docker Inc., the company that created the open source technology and that has built a substantial set of tools for developers and IT teams to build, test, and deploy applications using Docker containers.

In the sense that a container can contain an application that can be moved from one host to another, Docker containers are similar to VMs. Docker argues that they are a more efficient, lighter-weight way to package software than VMs, since each VM has its own OS instance, while Docker runs on top of a single OS, and countless individual containers can be spun up in that single environment.

Another big advantage of containers is portability. Because containers are standardized and contain everything the application needs to run, they can reportedly be easily moved from server to server, VM to VM (they can and do run in VMs), cloud to cloud, server to laptop, etc.
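
For readers who haven't used containers, the sketch below shows how little ceremony is involved in starting one from Python with the Docker SDK; the image name and port mapping are arbitrary examples, not anything RightScale runs:

    import docker

    client = docker.from_env()  # talks to the local Docker engine

    # Run a containerized web server; the same image runs unchanged on a laptop,
    # a VM or a cloud instance, which is the portability argument made above.
    container = client.containers.run(
        "nginx:alpine",
        detach=True,
        ports={"80/tcp": 8080},   # host port 8080 -> container port 80
        name="demo-web",
    )

    print(container.short_id)
    container.stop()
    container.remove()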

Google uses a technology similar to Docker containers to power its services, and many of the world’s largest enterprises have been evaluating and adopting containers since Docker came on the scene about two years ago.

Read more: Docker CEO: Docker’s Impact on Data Center Industry Will Be Huge

RightScale offers a Software-as-a-Service application that helps users manage their cloud resources. It supports all major cloud providers, including Amazon, Microsoft, Google, Rackspace, and IBM SoftLayer, and key private cloud platforms, such as VMware vSphere, OpenStack, and Apache CloudStack.

Its entire platform consists of 52 services that used to run on 1,028 cloud instances. Over the past seven weeks, the engineering team containerized 48 of those services in an initiative they dubbed “Project Sherpa.”

They only migrated 670 cloud instances to Docker containers. That’s how many instances ran dynamic apps. Static apps – things like SQL databases, Cassandra rings, MongoDB clusters, Redis, Memcached, etc. – wouldn’t benefit much from switching to containers, Miller wrote.

The instances running static apps now support containers running dynamic apps in a hybrid environment. “We believe that this will be a common model for many companies that are using Docker because some components (such as storage systems) may not always benefit from containerization and may even incur a performance or maintenance penalty if containerized,” he wrote.

As a result, the number of cloud instances running dynamic apps was reduced by 55 percent, and the cloud infrastructure costs of running those apps came down by 53 percent on average.

RightScale has also already noticed an improvement in development speed. The standardization and portability that containers offer help developers with debugging, with working on applications they have no experience with, and with accessing integration systems more flexibly. Product managers can check out features that are being developed without getting developers involved.

“There are certainly more improvements that we will make in our use of Docker, but we would definitely consider Project Sherpa a success based on the early results,” Miller wrote.

Cisco suggests network administration is fun and games

Cisco’s trying to get into the games business.

A recent trademark application for “Cisco Geek Factor” suggests the Borg wants to brand its own “Computer game programs; computer game software for use on mobile and cellular phones.”

United States Patent and Trademark Office filings are very brief: the only other information we have to go on is that the Trademark will cover the following:

“Entertainment services, namely, providing online computer games; providing an on-line computer game in the field of information technology and computer networking.”

That hardly sounds like something kids are going to avoid homework to play.

Cisco is, however, nearly always looking for a way to get more folks learning how to wrangle its routers and sling its switches. Young people, your forty-something correspondent understands, quite like games. Might The Borg be cooking up some kind of edu-tainment that millennials can experience on their smartphones? Or is Cisco gamifying the experience of network management, turning PING PING PING into PEW PEW PEW? ®

Portal router aims to deliver us from congested WiFi

What happens when former Qualcomm engineers decide to build a router of their own? You get something like Portal, an innocuous-looking device that aims to speed up WiFi networks using technology never before seen in consumer routers. It supports 802.11ac WiFi, but it works on all six channels of the 5GHz spectrum, whereas today’s routers only work on two channels. That’s a big deal — it means Portal is well-suited to delivering fast WiFi in places like dense apartment buildings.

It used to be that simply hopping onto a 5GHz network was enough to avoid the overcrowding in the 2.4GHz spectrum. But with more people upgrading their routers, even the speedy 5GHz spectrum is getting filled up today.

"The fundamental problem is that as WiFi becomes more popular and applications becomes more demanding, your problem is not going to be ‘how fast does my router go?’," said Terry Ngo, CEO and co-founder of Ignition Design Labs, the company behind Portal. Instead, the real issue will become, "How does it survive in an increasingly connected environment?"

Portal uses a combination of features to deal with that dilemma. For one, it packs nine antennas inside its sleek, curved case, as well as 10 "advanced" radios. Ngo points out that there’s no need for the giant bug-like antennas we’re seeing on consumer routers today, like Netgear’s massive Nighthawk line. (Most of those long antennas are usually just empty plastic.) The Portal is also smart enough to hop between different 5GHz channels (check out a diagram of channels above) if it detects things are getting crowded. Most routers today pick a channel when they boot up and never move off of it.

In a brief demonstration, Ngo and his crew showed off just how capable the Portal is. While standing around 50 feet away from the router, with a few walls between us, the Portal clocked in at 25 Mbps download speeds and 5 Mbps uploads, with a latency of around 3ms. In comparison, a Netgear Nighthawk router saw download speeds of 2 Mbps and upload speeds of 5 Mbps from the same location, with 30ms of latency.

You’d still be able to stream 4K video from the Portal in that spot, whereas the Netgear might even give you trouble with an HD stream, depending on how congested the reception is. Portal was also able to stream three separate 4K videos at once, and, surprisingly, they didn’t even skip when the router changed wireless channels.

One of Portal’s features is particularly surprising: radar detection. That’s necessary to let it use a part of the 5GHz spectrum typically reserved for weather systems in the US. Most devices just avoid that spectrum entirely to avoid the ire of the FCC. By implementing continuous radar detection, Portal is able to turn off access to that spectrum for the few people who live near weather radars (usually, it’s only near airports and certain coastal areas). But even if you’re locked out from that bit of spectrum, Portal still gives you three more 5GHz channels than other consumer routers.

Just like Google’s OnHub router, which is also trying to solve our WiFi woes, Portal also relies on the cloud to optimize your network. For example, Portal will be able to know in advance if certain locations won’t have access to the 5GHz spectrum reserved for radar. It’ll also be able to keep track of how crowded WiFi channels get in your neighborhood, and it could optimize which channels are being used at different times of the day. There’s a bit of a privacy concern there, for sure, but using the cloud also lets Ignition Design Labs bring new wireless features to Portal without the need for expensive hardware.

Portal also includes five gigabit Ethernet ports, as well as two USB ports for streaming your content. That’s a notable difference from OnHub, which limited Ethernet ports in favor of a simpler design. Ignition Design Labs has also developed a mobile app for setting up and managing Portal, but you can also log onto its setup page just like a typical router.

While new routers like the Eero and Luma are great WiFi solutions for large homes, where reception range is a bigger issue, the Portal makes more sense for people living in apartments and other dense areas. But Portal also has an extended range solution, if you need it: You can just connect two units together in a mesh network (the company claims it also does this more efficiently than Eero and Luma).

Portal is launching on Kickstarter today, with the hopes of raising $160,000 over the next 60 days. You can snag one for yourself starting at $139, but, as usual, expect the final retail price to be higher. While I’m not very confident about gadget Kickstarters these days, the fact that the Ignition Design Labs folks have many years of experience dealing with wireless hardware gives me hope for the Portal. We’ll be getting a unit to test out soon, so stay tuned for updates.

Designing a DMZ for Azure Virtual Machines

This article will show you three designs, each building on the other, for a demilitarized zone (DMZ) or perimeter network for Internet-facing n-tier applications based on Azure virtual machines and networking.

The DMZ

The concept of a DMZ or perimeter network is not new; it’s a classic design that uses a layered network security approach to minimize the attack footprint of an application.

In a simple design:

  1. Web servers are placed in one VLAN, with just TCP 80 and TCP 443 accessible from the Internet.
  2. Application servers are in another VLAN. Web servers can communicate with the application servers using just the application protocol. The Internet has no access to this VLAN.
  3. Database servers are in a third VLAN. Application servers can communicate with the database servers using only the database communications protocol. Web servers and the Internet have no access to this VLAN.

You can modify this design in many ways, including:

  • Adding additional application layer security.
  • Including reverse proxies.
  • Using logical implementations of multiple VLANs by using other methods of network isolation, such as network security groups (NSGs) in Azure.
The concept of a DMZ with n-tier applications (Image Credit: Aidan Finn)

So how do you recreate this concept in Azure for virtual machines? I’ll present you with three designs from Microsoft, each of which builds on the concepts of the previous ones.

Network Security Groups

The first and simplest way to build a DMZ in Azure is to use network security groups (NSGs). An NSG is a set of five-tuple rules that allow or block TCP or UDP traffic between designated addresses on a virtual network.

You can deploy an n-tier solution into a single virtual network that is split into two or more subnets; each subnet plays the role of a VLAN, as shown above. NSG rules are then created to restrict network traffic. In the diagram below, NSGs will:

  • Allow web traffic into the FrontEnd subnet.
  • Allow application traffic to flow from the FrontEnd subnet to the BackEnd subnet.
  • Block all other traffic.
A DMZ using Azure network security groups (Image Credit: Microsoft)

The benefit of this design is that it is very simple. The drawback is that it assumes your potential attackers are stuck in the 1990s; a modern attack tries to compromise the application layer. A port scan of the above design from an external point will reveal that TCP 80/443 are open, so an attacker will target those ports. A simple five-tuple rule will not block that traffic, so the attacker can either flood the target with a DDoS attack or exploit application vulnerabilities.
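
To make the first design concrete, here is a minimal sketch of the FrontEnd rules using the Azure SDK for Python. The subscription, resource group, and resource names are illustrative assumptions, the exact operation name depends on your SDK version (older releases use create_or_update instead of begin_create_or_update), and associating the NSG with the FrontEnd subnet is a separate step omitted here.

# A rough sketch, not production code: FrontEnd NSG rules via the Azure SDK for Python.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import NetworkSecurityGroup, SecurityRule

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Allow HTTP/HTTPS from the Internet into the web tier.
allow_web = SecurityRule(
    name="Allow-Web-In", direction="Inbound", access="Allow", priority=100,
    protocol="Tcp", source_address_prefix="Internet", source_port_range="*",
    destination_address_prefix="*", destination_port_ranges=["80", "443"],
)

# Explicitly drop everything else inbound.
deny_all = SecurityRule(
    name="Deny-All-In", direction="Inbound", access="Deny", priority=4000,
    protocol="*", source_address_prefix="*", source_port_range="*",
    destination_address_prefix="*", destination_port_range="*",
)

nsg = NetworkSecurityGroup(location="westeurope", security_rules=[allow_web, deny_all])

# Long-running operation; returns a poller.
poller = client.network_security_groups.begin_create_or_update("dmz-rg", "FrontEnd-nsg", nsg)
print(poller.result().provisioning_state)

A second NSG on the BackEnd subnet would follow the same pattern, allowing only the application protocol and only from the FrontEnd subnet’s address range.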

NSGs and a Firewall

Modern edge network devices can protect and enhance hosted applications with application-layer scanning and/or reverse proxy services. The Azure Marketplace allows you to deploy these kinds of devices from multiple vendors into your Azure virtual networks.

The following design uses a virtual network appliance to protect an application from threats; this offers more than simple protocol filtering, because the appliance understands the allowed traffic and can identify encapsulated risks.

Using a firewall virtual appliance with NSGs to create a DMZ (Image Credit: Microsoft)

NSGs are deployed to ensure that all communications from the Internet flow through the virtual appliance. NSGs also control the protocols and ports that are allowed for internal communications between the subnets.

Ideally, we’d like all communications inside the virtual network to flow through the virtual appliance, but the default routing rules of the network prevent this from happening.

User Defined Routes, NSGs, and a Firewall

We can override the default routes of a virtual network using user-defined routes (UDRs). The following design uses one subnet in a single virtual network for each layer of the n-tier application. An additional subnet is created just for the virtual firewall appliance, which will secure the application.

UDRs are created to override the default routes between the subnets, forcing all traffic between subnets to route via the virtual firewall appliance. NSGs are created to enforce this routing and block traffic via the default routes.

An Azure DMZ made from user-defined routes, a virtual appliance firewall and NSGs (Image Credit: Microsoft)

The result is a DMZ where the virtual appliance controls all traffic to/from the Internet and between the subnets.
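
As a rough, hedged sketch of the routing piece (again with the Azure SDK for Python, with the resource names and the appliance’s private IP address as assumptions), a user-defined route that hands all traffic leaving a subnet to the firewall appliance might look like this. Associating the route table with each subnet is a separate step, omitted here.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import Route, RouteTable

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Send everything leaving the subnet to the firewall appliance's private IP.
udr = Route(
    name="force-through-firewall",
    address_prefix="0.0.0.0/0",         # all destinations
    next_hop_type="VirtualAppliance",   # hand the packet to the network virtual appliance
    next_hop_ip_address="10.0.0.4",     # assumed firewall private IP
)

table = RouteTable(location="westeurope", routes=[udr])

poller = client.route_tables.begin_create_or_update("dmz-rg", "frontend-routes", table)
print(poller.result().id)

# The route table must still be associated with the FrontEnd subnet
# (via subnets.begin_create_or_update on the virtual network), which is omitted here.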

Tip: Try to use a next-generation firewall and complement this defense with additional security products that work with Azure Security Center, so that you have a single view of all trends and risks.

The post Designing a DMZ for Azure Virtual Machines appeared first on Petri.

DARPA is building acoustic GPS for submarines and UUVs

The content below is taken from the original (DARPA is building acoustic GPS for submarines and UUVs), to continue reading please visit the site. Remember to respect the Author & Copyright.

For all the benefits that the Global Positioning System provides to landlubbers and surface ships, GPS signals can’t penetrate seawater and therefore can’t be used by oceangoing vehicles like submarines or UUVs. That’s why DARPA is creating an acoustic navigation system, dubbed POSYDON (Positioning System for Deep Ocean Navigation), and has awarded the development contract to Draper.

The space-based GPS system relies on a constellation of satellites whose orbits, and therefore positions, are known precisely at any moment. The GPS receiver in your phone or car’s navigation system uses the timing of the signals it receives from those satellites to calculate your position. The POSYDON system will perform the same basic function, just with sound instead. The plan is to set up a small number of long-range acoustic sources that a submarine or UUV could use to similarly fix its position without having to surface.
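
The positioning math is the same in both cases: given the known positions of the beacons and the measured signal travel times, the receiver solves for the one point whose ranges match. As a toy illustration only (the beacon layout, the measurements and the nominal 1,500 m/s speed of sound in seawater are all made-up assumptions, not POSYDON’s actual design), a least-squares position fix might look like this:

import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 1500.0  # m/s, a rough figure for seawater

# Known beacon positions (x, y, depth) in metres -- purely illustrative.
beacons = np.array([
    [0.0,     0.0,     -10.0],
    [10000.0, 0.0,     -60.0],
    [0.0,     10000.0, -120.0],
    [10000.0, 10000.0, -30.0],
])

# Simulate measured travel times from a "true" position.
true_position = np.array([4200.0, 6100.0, -800.0])
travel_times = np.linalg.norm(beacons - true_position, axis=1) / SPEED_OF_SOUND

def residuals(pos):
    # Difference between the ranges implied by a candidate position
    # and the ranges implied by the measured travel times.
    return np.linalg.norm(beacons - pos, axis=1) - travel_times * SPEED_OF_SOUND

# Start from a rough guess somewhere below the beacons.
estimate = least_squares(residuals, x0=np.array([5000.0, 5000.0, -500.0])).x
print(np.round(estimate, 1))  # approximately [4200. 6100. -800.]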

The system should be ready for sea trials by 2018. It will initially be utilized exclusively for military and government operations but, like conventional GPS before it, will eventually be opened up to civilians as well.

Cujo is a firewall for the connected smart home network

The content below is taken from the original (Cujo is a firewall for the connected smart home network), to continue reading please visit the site. Remember to respect the Author & Copyright.

“Cujo protects everything on your network,” the company’s CEO, Einaras Gravrock says, describing his product in the simplest terms possible ahead of its Disrupt NY launch this week. “Think of it as an immunity system for your network.”

The Cujo is surprisingly unassuming, a small plastic stump with light-up eyes that stands in adorable contrast to its mad-dog name and home security mission statement.

The product is designed to bring enterprise-level security to the home network, helping to protect against attacks on the increasingly vulnerable world of networked devices, from laptops to smart light bulbs.

“Cujo is, for all intents and purposes, a smart firewall,” explains Gravrock. “It’s very seamless. It’s made for an average user to understand easily. You see every single thing on your network through your app. If you go to bad places or bad things come to you, we will block bad behavior and we will send you a friendly notification that someone tried to access your camera.”

The company demoed the product at Disrupt today by hacking a baby camera. On a page displaying all of the devices connected to the network, a warning popped up: “We blocked an unauthorized attempt to access device ‘IP camera’ from [IP number].” From there access to the feed can be cut off – or not, if there is no actual threat.

The $99 device (plus an $8.99 monthly service fee for unlimited devices) serves as a peer to a home router, monitoring all network-connected devices for malicious activity and sending notifications when something happens, like suspicious file transfers or communications with faraway IP addresses. It’s a bit like the Nest app, only for network security rather than fire alarms.

Gravrock stresses that exploits are less about individual devices than they are about opening up the entire network through a small and seemingly harmless smart gadget. “You may think, ‘so what, my light bulb is going to get hacked,’ ” the executive explains. “The challenge is what happens next. Once they’re in the network, they can get to the other devices. They can get to your camera, they can get to your PC and extract files, they can film you. The FBI director is on record as taping over his webcam when he goes home. That tells you that we’re very exposed.”

Part of the company’s current mission is highlighting those exploits for consumers who are likely versed in the threat of PC malware but may be unaware of the growing threat posed by the vulnerability of the Internet of Things.

Gravrock adds that during the beta testing the company has been conducting since August, consumer interest and concern have increased notably.

“We’ve sold about 5,000 units directly already,” he explains. “The biggest surprise for me has been that it’s your average user who no longer feels private at home, may put the duct tape over his webcam and just wants something that works — doesn’t want to spend days and months changing things.”

HPE Cloud Optimizer Overview

Wouldn’t you like to optimize capacity and troubleshoot performance issues in any virtual and cloud environment? HPE Cloud Optimizer enables comprehensive capacity planning and management with quick and efficient performance monitoring, troubleshooting and optimization of physical, virtual and cloud environments. Start a free trial of HPE Cloud Optimizer: http://bit.ly/1ZzPix3

What is Hybrid Infrastructure? Glad you asked…

The content below is taken from the original (What is Hybrid Infrastructure? Glad you asked…), to continue reading please visit the site. Remember to respect the Author & Copyright.

As part of its recent split, Hewlett Packard Enterprise announced “four areas of transformation”, among them the buzzword-heavy “hybrid infrastructure”.

But what exactly is hybrid infrastructure? Each company seems to have a different idea of what it could mean. What does HPE mean when they say “hybrid infrastructure”? How does this differ from other definitions?

The official blurb from HPE’s website says: “Today’s broad variety of apps and data require different delivery models for each business outcome. A hybrid infrastructure combines traditional IT, private, managed and public clouds, so you can enable your right mix to power 100% of the workloads that drive your enterprise.”

Let us decode. “A hybrid infrastructure combines traditional IT, private, managed and public clouds” is straightforward. HPE is clearly separating traditional IT from private clouds. It also separates service provider (managed) clouds from public clouds. This is the split that I personally use and advocate, so I think we’re off to a grand start.

A little bit of digging shows that HPE agrees with my nomenclature – mostly. To HPE, a cloud is emphatically not “just virtualization”. A public cloud is emphatically not “just someone else’s computer”. The trite sayings of the disaffected sysadmin of yesteryear are neatly rejected. This leaves us with an important definition for cloud.

A cloud – public, private or managed – is a pool of IT resources – virtualized, containerized and/or physical – that can be provisioned by resource consumers using a self-service portal and/or an API. As I have advocated for some time, you don’t get to say you have a cloud just because you managed to install ESXi on a couple of hosts and run the vSphere virtual appliance.

HPE also says on its ‘hybrid infrastructure’ website: “Today’s broad variety of apps and data require different delivery models for each business outcome.” This is the slightly more dog-whistle part of the marketing, and what it means is that HPE wants to sell you services to help you pick the right infrastructure to run your applications on, get your applications onto that infrastructure and then teach you how to keep it all going.

Hybrid infrastructure in practice

For HPE watchers, none of this should be particularly surprising. HPE has come out with an excellent Azure-in-a-can offering for the midmarket and is also working on its new Synergy strategy, aimed at larger organizations. It’s not hard to see why: cloud expenditure has reached 110 billion dollars a year and doesn’t look set to slow down any time soon.

Both technology pushes can credibly be called separate and distinct hybrid infrastructure plays. HPE has also made some solid bets on its Helion private cloud technology and services, including one with Deutsche Bank where HPE loses money if the cloud provided doesn’t meet expectations.

This is a pretty big change for HPE, which for decades was predominantly a tin-shifter that dabbled in software and services. HPE’s experiment in running its own public cloud was not a success. It never became a software, services or Google-esque data-hoovering powerhouse. And tin-shifting margins everywhere are evaporating.

Everyone knows how to do traditional IT, so there’s no real money to be made in helping people do it. Not so with things cloudy: few organisations have the skills to build, manage and maintain a cloud.

Fewer still have made the cultural and business-practice changes necessary to take advantage of cloud computing’s self-service, automation and orchestration, so that they can see the very real benefits it offers over traditional IT. Only a handful of organisations can then make a private cloud work with managed clouds and public clouds, finally reaching that promised utopia of true utility computing.

Sounds like a business opportunity to me.

Of course, it’s easy to mock and jeer. So many vendors tout “hybrid cloud” this or “hybrid cloud” that and ultimately deliver nothing of the sort.

Of all companies, will it really be HPE that manages to deliver private clouds that connect up to public and managed clouds, move workloads back and forth and do so without breaking the bank? As astonishing as it sounds, yes.

HPE has been doing this for some time now. I am unaware if they can claim a huge number of wins, but I’ve taken the time to talk to several Helion private cloud customers and they have done nothing but gush contentment about it. HPE has been enabling Helion private cloud customers to move workloads off of the private cloud in several instances and the results also seem to be quite promising.

In short, HPE has already done hybrid infrastructure in practice. It has done it successfully and is now ready to train its workforce more broadly in the tools, techniques and lessons learned. As I see it, HPE has settled on the right strategy. It has built a portfolio of technologies and experience that supports this strategy and has a nice list of customer wins to go to market with.

Seemingly, it’s the only major vendor – so far – muttering “hybrid” that actually knows what it is doing. Let’s see how that translates into execution. ®

Arduino Srl adds wireless-ready “Arduino Uno WiFi”

The content below is taken from the original (Arduino Srl adds wireless-ready “Arduino Uno WiFi”), to continue reading please visit the site. Remember to respect the Author & Copyright.

Arduino Srl unveiled a version of the Arduino Uno that adds onboard WiFi via an ESP8266 module, but otherwise appears to be identical. Not much has been heard from Arduino Srl, located at Arduino.org, since it forked off from the main Arduino group over a year ago. In April, the rival Arduino LLC released an […]

OpenStack Developer Mailing List Digest April 23 – May 6

The content below is taken from the original (OpenStack Developer Mailing List Digest April 23 – May 6), to continue reading please visit the site. Remember to respect the Author & Copyright.

Success Bot Says

  • Sdague: nova-network is deprecated [1]
  • Ajaeger: OpenStack content on Transifex has been removed. Zanata on translate.openstack.org has proven to be a stable platform for all translators, and thus Transifex is not needed anymore.
  • All

Backwards Compatibility Follow-up

  • Agreements from recent backwards compatibility for clients and libraries session:
    • Clients need to talk to all versions of OpenStack clouds.
    • Oslo libraries already do need to do backwards compatibility.
    • Some fraction of our deployments, somewhere between 1% and 50%, try to do in-place upgrades where, for example, Nova is upgraded first and Neutron later. Neutron then has to work with the libraries that were upgraded along with Nova.
  • Should we support in-place upgrades? If we do, we need at least one version of compatibility, where Mitaka Nova can run with Newton Oslo and client libraries.
    • If we don’t support in-place upgrades, deployment methods must be architected so that we never encounter a case where a client or one of N services is upgraded alone in a single Python environment. All clients and services in a single Python environment must be upgraded together, or not at all.
  • If we decide to support in-place upgrades, we need to figure out how to test that effectively; it’s linear growth with the number of stable releases we choose to support.
  • If we decide not to, we have no further requirement to have any cross-over compatibility between OpenStack releases.
  • We still have to be backwards compatible on individual changes.
  • Full thread

Installation Guide Plans for Newton

  • Continuing from a previous Dev Digest [2], the big tent is growing, and our documentation team would like projects to maintain their own installation documentation. This should be done while still providing the valid, working installation information and consistency that the team strives for.
  • The installation guide team held a session at the summit that was packed and walked away with some solid goals to achieve for Newton.
  • Two issues being discussed:
    • What to do with the existing install guide.
    • Create a way for projects to write installation documentation in their own repository.
  • All guides will be rendered from individual repositories and appear in docs.openstack.org.
  • The Documentation team has recommendations for projects writing their install guides:
    • Build on existing install guide architecture, so there is no reinventing the wheel.
    • Follow documentation conventions [3].
    • Use the same theme called openstackdocstheme.
    • Use the same distributions as the install guide does. Installation from source is an alternative.
    • Guides should be versioned.
    • RST is the preferred documentation format. RST is also easy for translations.
    • Common naming scheme: “X Service Install Guide” – where X is your service name.
  • The chosen URL format is http://bit.ly/1ZywxtK.
  • Plenty of work items to follow [4] and volunteers are welcome!
  • Full thread

Proposed Revision To Magnum’s Mission

  • From a summit discussion, there was a proposed revision to Magnum’s mission statement [5].
  • The idea is to narrow the scope of Magnum to allow the team to focus on making popular container orchestration engine (COE) software work great with OpenStack, allowing users to set up fleets of cloud capacity managed by COEs such as Swarm, Kubernetes, Mesos, etc.
  • Deprecate /containers resource from Magnum’s API. Any new project may take on the goal of creating an API service that abstracts one or more COE’s.
  • Full thread

Supporting the Go Programming Language

  • The Swift community has a git branch feature/hummingbird that contains some parts of Swift reimplemented in Go. [6]
  • The goal is to have a reasonably ready-to-merge feature branch by the Barcelona summit. Shortly after the summit, the plan is to merge the Go code into master.
  • An amended Technical Committee resolution will follow to suggest Go as a supported language in OpenStack projects [7].
  • Some Technical Committee members have expressed wanting to see technical benefits that outweigh the community fragmentation and increase in infrastructure tasks that result from adding that language.
  • Some open questions:
    • How do we run unit tests?
    • How do we provide code coverage?
    • How do we manage dependencies?
    • How do we build source packages?
    • Should we build binary packages in some format?
    • How to manage in tree documentation?
    • How do we handle log and message string translations?
    • How will DevStack install the project as part of a gate job?
  • Designate is also looking into moving a single component into Go.
    • It would be good to have two cases to help avoid baking any project specific assumptions into testing and building interfaces.
  • Full thread

Release Countdown for Week R-21, May 9-13

  • Focus
    • Teams should be focusing on wrapping up incomplete work left over from the end of the Mitaka cycle.
    • Announce plans from the summit.
    • Completing specs and blueprints.
  • General Notes
    • Project teams that want to change their release model tag should do so before the Newton-1 milestone. This can be done by submitting a patch to governance repository in the projects.yaml file.
    • Release announcement emails are being proposed to have their tag switched from “release” to “newrel” [8].
  • Release Actions
    • Release liaisons should add their name to and contact information to this list [9].
    • Release liaisons should have their IRC clients join #openstack-release.
  • Important Dates
    • Newton 1 Milestone: R-18 June 2nd
    • Newton release schedule [10]
  • Full thread

Discussion of Image Building in Trove

  • A common question the Trove team receives from new users is how and where to get guest images to experiment with Trove.
    • Documentation exists in multiple places for this today [11][12], but things can still be improved.
  • Trove has a spec proposal [13] for using libguestfs approach to building images instead of using the current diskimage-builder (DIB).
    • All alternatives should be equivalent and interchangeable.
    • Trove already has elements for all supported databases using DIB, but these elements are not packaged for customer use. Doing so would be a small effort: providing an element that installs the guest agent software from a fixed location. (A minimal DIB invocation is sketched after this digest item.)
    • We should understand the deficiencies, if any, in DIB before switching tool chains. This can be based on Trove’s and Sahara’s experiences.
  • The OpenStack Infrastructure team has been using DIB successfully for a while as it is a flexible tool.
    • By default Nova disables file injection [14]
    • DevStack doesn’t allow you to enable Nova file injection, and hard sets it off [15].
    • Allows bootstrapping with yum or debootstrap.
    • Lets you pick the filesystem for an existing image.
  • Lets fix the problems with DIB that Trove is having and avoid reinventing the wheel.
  • What are the problems with DIB, and how do they prevent Trove/Sahara users from building images today?
    • Libguestfs manipulates images in a clean helper VM created by libguestfs in a predictable way.
      • Isolation is something DIB gives up in order to provide speed/lower resource usage.
    • In-place image manipulation can occur (package installs, configuration declarations) without uncompressing or recompressing an entire image.
      • It’s trivial to make a DIB element that modifies an existing image in place.
    • DIB scripts’ configuration settings, passed as freeform environment variables, can be difficult to understand and document for new users. Libguestfs demands more formal parameter passing.
    • Ease of “just give me an image. I don’t care about twiddling knobs”.
      • OpenStack Infra team already has a wrapper for this [16].
  • Sahara has support for several image generation-related cases:
    • Packing an image pre-cluster spawn in Nova.
    • Building clusters from a “clean” operating system image post-Nova spawn.
    • Validating images after Nova spawn.
  • In a Sahara summit session, the plan discussed was to use libguestfs rather than DIB, with an intent to define a linear, idempotent set of steps to package images for any plugin.
  • Having two sets of image building code to maintain would be a huge downside.
  • What’s stopping us, a few releases down the line, from deciding that libguestfs doesn’t perform well and choosing yet another tool? Since DIB is an OpenStack project, Trove should consider supporting a standard way of building images.
  • The Trove summit discussion resulted in agreement to advance the image builder by making it easier to build guest images leveraging DIB.
    • Project repository proposals have been made [17][18]
  • Full thread
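
For readers unfamiliar with DIB, the sketch below shows roughly what “building a guest image from elements” looks like when driven from Python. The element list is hypothetical (in particular, "my-guest-agent" is a made-up placeholder, not Trove’s real element set), and diskimage-builder’s disk-image-create command must already be installed and on the PATH.

import subprocess

# Hypothetical element list: a base OS, VM tuning, and a made-up guest-agent element.
ELEMENTS = ["ubuntu", "vm", "my-guest-agent"]

def build_guest_image(output_name="guest-image"):
    # Invokes diskimage-builder's CLI; produces a qcow2 image named after output_name.
    cmd = ["disk-image-create", "-a", "amd64", "-o", output_name] + ELEMENTS
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    build_guest_image()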

 

OpenStack VDI: The What, the Why, and the How

The content below is taken from the original (OpenStack VDI: The What, the Why, and the How), to continue reading please visit the site. Remember to respect the Author & Copyright.

Karen Gondoly is CEO of Leostream.

Moving desktops out from under users’ desks and into the data center is no longer a groundbreaking concept. Virtual Desktop Infrastructure (VDI) and its cousin, Desktops-as-a-Service (DaaS), have been around for quite some time and are employed to enable mobility, centralize resources, and secure data.

For as long as VDI has been around, so have industry old-timers VMware and Citrix — the two big players in the virtual desktop space. But, as Bob Dylan would say, the times, they are a-changing.

OpenStack has been climbing up through the ranks, and this newcomer is poised for a slice of the VDI pie. If you’re looking for an alternative to running desktops on dedicated hardware in the data center, open source software may be the name of the game.

What is OpenStack?

OpenStack, an open source cloud operating system and community founded by Rackspace and NASA, has graduated from a platform used solely by DevOps to an important solution for managing entire enterprise-grade data centers. By moving your virtual desktop infrastructure (VDI) workloads into your OpenStack cloud, you can eliminate expensive, legacy VDI stacks and provide cloud-based, on-demand desktops to users across your organization. Consisting of over ten different projects, OpenStack hits on several of the major must-haves to deliver VDI and/or Desktops-as-a-Service (DaaS), including networking, storage, compute, multi-tenancy, and cost control.

Why VDI and Why OpenStack?

Generally speaking, the benefits of moving users’ desktops into the data center as part of a virtual desktop infrastructure are well documented: your IT staff can patch and manage desktops more efficiently; your data is secure in the data center, instead of on the users’ clients; and your users can access their desktop from anywhere and from any device, supporting a bring-your-own-device initiative.

Many organizations considered moving their workforce to VDI, only to find that the hurdles of doing so outweighed the benefits. The existing, legacy VDI stacks are expensive and complicated, placing VDI out of reach for all but the largest, most tech-savvy companies.

By leveraging an OpenStack cloud for VDI, an organization reaps the benefits of VDI at a much lower cost. And, by wrapping VDI into the organization’s complete cloud strategy, IT manages a single OpenStack environment across the entire data center, instead of maintaining separate stacks and working with multiple vendors.

How to Leverage OpenStack Clouds for Virtual Desktops

Now, “simple” is not a word that describes building OpenStack VDI and DaaS. If you’re not an OpenStack expert, then you may want to partner with someone who is. Companies like SUSE, Mirantis, Canonical, and Cisco Metapod can help ease your migration to the cloud. Keep in mind that your hosted desktop environment will need to be resistant to failure and flexible enough to meet individual user needs.

So, if you’re really serious about VDI/DaaS, then you’ll need to leverage a hypervisor, display protocol, and a connection broker. A recent blueprint dives into the details of the solution components and several important usability factors.

Here’s the Reader’s Digest version:

  • Hypervisor: A hypervisor allows you to host several different virtual machines on a single piece of hardware. KVM is noted in the OpenStack documentation as the most highly tested and supported hypervisor for OpenStack. For managing VDI or DaaS, the feature sets provided by any of the supported hypervisors are adequate.
  • Display Protocol: A display protocol provides end users with a graphical interface to view a desktop that resides in the data center or cloud. Some of the popular options include Teradici PCoIP, HP RGS, or Microsoft RDP.
  • Connection Broker: A connection broker focuses on desktop provisioning and connection management. It also provides the interface that your end users will use to log in. The key in choosing a connection broker is to ensure that it integrates with the OpenStack API. That API lets you inventory instances in OpenStack (these instances are your desktops), provision new instances from existing images, and assign the correct IP addresses to them; a minimal sketch of these calls follows this list.
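
To illustrate the kind of API integration involved, here is a minimal, hedged sketch using the openstacksdk Python library; the cloud name, image, flavor, and network names are assumptions for the example, not requirements of any particular broker.

import openstack

# Credentials come from a clouds.yaml entry; "vdi-cloud" is an assumed name.
conn = openstack.connect(cloud="vdi-cloud")

# 1. Inventory the instances in the current project -- these are the desktops.
for server in conn.compute.servers():
    print(server.name, server.status)

# 2. Provision a new desktop from a master image (names are illustrative).
image = conn.compute.find_image("win10-master")
flavor = conn.compute.find_flavor("m1.medium")
network = conn.network.find_network("vdi-net")

desktop = conn.compute.create_server(
    name="desktop-user01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
desktop = conn.compute.wait_for_server(desktop)
print("Desktop ready:", desktop.addresses)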

How do you bring everything together? The process can be summarized into four basic steps.

  1. First, you’ll want to determine the architecture for your OpenStack Cloud. As mentioned, there are a number of solid experts that can help you with this step, if you’re not an expert yourself.
  2. Then as you onboard new groups of users, make sure to place each in their own OpenStack project, which means defining the project and the network.
  3. Next, you’ll want to build a master desktop and image, which can be used to streamline the provisioning of desktops to users. At this stage, you’ll want to explore display protocols and select a solution(s) that delivers the performance that your end-users need.
  4. The final step is to configure your connection broker to manage the day-to-day activities.

Conclusion and Takeaways

When it comes to leveraging OpenStack clouds to host desktops, there’s a lot to think about and several moving parts. For those looking outside the box of traditional virtualization platforms, OpenStack may be your golden ticket. Key to delivering desktops is choosing an adequate display protocol and connection broker.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

Kaseya Announces Traverse 9.3 Enabling MSPs to Manage Complex Public and Hybrid Cloud-Based Apps Running on Amazon Cloud

The content below is taken from the original (Kaseya Announces Traverse 9.3 Enabling MSPs to Manage Complex Public and Hybrid Cloud-Based Apps Running on Amazon Cloud), to continue reading please visit the site. Remember to respect the Author & Copyright.

Kaseya, the leading provider of complete IT management solutions for Managed Service Providers (MSPs) and small to midsized businesses, today… Read more at VMblog.com.