Simplifying your environment setup while meeting compliance needs with built-in Azure Blueprints

The content below is taken from the original ( Simplifying your environment setup while meeting compliance needs with built-in Azure Blueprints), to continue reading please visit the site. Remember to respect the Author & Copyright.

I’m excited to announce the release of our first Azure Blueprint built specifically for a compliance standard: the ISO 27001 Shared Services blueprint sample, which maps a set of foundational Azure infrastructure, such as virtual networks and policies, to specific ISO controls.

Microsoft Azure leads the industry with over 90 compliance offerings. Azure meets a broad set of international and industry-specific compliance standards, such as General Data Protection Regulation (GDPR), ISO 27001, HIPAA, PCI, SOC 1 and SOC 2, as well as country-specific standards, including FedRAMP and other NIST 800-53 derived standards, Australia IRAP, UK G-Cloud, and Singapore MTCS. Many of our customers have expressed their interest in being able to leverage and build upon our internal compliance practices for their environments with a service that maps compliance settings automatically.

To help our customers simplify the creation of their environments in Azure while successfully interpreting US and international governance requirements, we are announcing a series of built-in Blueprints Architectures that can be leveraged during your cloud-adoption journey. Azure Blueprints is a free service that helps customers deploy and update cloud environments in a repeatable manner using composable artifacts such as policies, deployment templates, and role-based access controls. This service is built to help customers set up governed Azure environments and can scale to support production implementations for large-scale migrations.

The ISO 27001 Shared Services blueprint is already available in your Azure tenant. Simply navigate to the Blueprints page, click “Create blueprint”, and choose the ISO 27001 Shared Services blueprint from the list.

Creating a blueprint in Azure portal by selecting a template

The ISO 27001 blueprint is designed to help you deploy production-ready, secure, end-to-end solutions in one click and includes:

  • Hardened infrastructure resources: Azure Resource Manager templates are used to automatically deploy the components of the architecture into Azure by specifying configuration parameters during setup. The infrastructure components include Azure Firewall, Active Directory, Key Vault, Azure Monitor, Log Analytics, Virtual Networks with subnets, Network Security Groups, and Role Based Access Control definitions. Additionally, these resources can be locked by Blueprints as a security measure to protect the consistency of the defined blueprint and the environment it was designed to create.
  • Policy controls: Set of Azure policies that help provide real-time enforcement, compliance assessment, and remediation.
  • Proven virtual datacenter architectures: The infrastructure resources provided are based on the Microsoft approved virtual datacenter (VDC) architectures which take into consideration scale, performance, security, and governance.
  • Security and compliance controls: You still benefit from all the controls for which Microsoft is responsible as your cloud provider, and now this blueprint helps you configure a number of the remaining controls to meet ISO 27001 requirements.
  • Documentation: Step by step deployment guide outlining the shared services infrastructure and the policy control mapping matrix.
  • Migration runway: Provides a prescriptive set of instructions for deploying an Azure recommended foundation to accelerate migrations via the Azure migration center.

At Microsoft, we are committed to helping our customers leverage Azure in a secure and compliant manner. Over the next few months you will continue to see new built-in blueprints released for HITRUST, PCI DSS, UK National Health Service (NHS) Information Governance (IG) Toolkit, FedRAMP, and Center for Internet Security (CIS) Benchmark. If you would like to participate in any early previews, please sign up, or if you have a suggestion for a compliance blueprint, please share it via the Azure Governance Feedback Forum.

Learn more about the Azure ISO 27001 Blueprints.

How to Schedule Your Day When You Freelance or Work From Home

The content below is taken from the original ( How to Schedule Your Day When You Freelance or Work From Home), to continue reading please visit the site. Remember to respect the Author & Copyright.

I’ve been a full-time freelancer since 2012, and most of my work gets done at home—that is, from my home office. I tried working from coffee shops and co-working spaces, but I tend to get the most work done when I’m in a quiet, comfortable, familiar space where I don’t have to worry about whether I’ll be able to find…

Google Cloud named a leader in the Forrester Wave: Big Data NoSQL

The content below is taken from the original ( Google Cloud named a leader in the Forrester Wave: Big Data NoSQL), to continue reading please visit the site. Remember to respect the Author & Copyright.

We’re pleased to announce that Forrester has named Google Cloud as a leader in The Forrester Wave™: Big Data NoSQL, Q1 2019. We believe the findings reflect Google Cloud’s market momentum, and what we hear from our satisfied enterprise customers using Cloud Bigtable and Cloud Firestore.  

According to Forrester, half of global data and analytics technology decision makers either have implemented or are implementing NoSQL platforms, taking advantage of the benefits of a flexible database that serves a broad range of use cases. The report evaluates the top 15 vendors against 26 rigorous criteria for NoSQL databases to help enterprise IT teams understand their options and make informed choices for their organizations. Google scored 5 out of 5 in Forrester’s report evaluation criteria of data consistency, self-service and automation, performance, scalability, high availability/disaster recovery, and the ability to address a breadth of customer use cases. Google also scored 5 out of 5 in the ability to execute criterion.

How Cloud Firestore and Cloud Bigtable work for users

We’re especially pleased that our recognition as a Leader in the Forrester Wave: Big Data NoSQL mirrors what we hear from our customers: Databases have an essential role to play in a cloud infrastructure. The best ones can make application development easier, make user experience better, and allow for massive scalability. Both Cloud Firestore and Cloud Bigtable include recently added features and updates to continue our mission of providing flexible database options.

Cloud Firestore is our fully managed, serverless document database that recently became generally available. It’s designed and built for accelerating web, mobile and IoT apps, since it allows for live synchronization and offline support. Cloud Firestore also brings a strong consistency guarantee and a global set of locations, plus support for automatic sharding, high availability, ACID transactions and more. We’ve heard from Cloud Firestore users that they’ve been able to serve more users and move apps into production faster using the database as a powerful back end.

Cloud Bigtable is our fast, globally distributed, wide-column NoSQL database service that can scale to handle massive workloads. It scales data storage from gigabytes to petabytes, while maintaining high-performance throughput and low-latency response times. It is the same database that powers many Google services such as Search, Analytics, Maps, and Gmail. Customers running apps with Cloud Bigtable can provide users with data updates in multiple global regions thanks to multi-region replication. We hear from Cloud Bigtable users that it lets them provide real-time analytics with availability and durability guarantees to their users and customers. Use cases often include IoT, user analytics, advertising tech and financial data analysis.

Download the full Forrester report here, and learn more about GCP database services here.

See What the World Looked Like When the World Wide Web Was Born

The content below is taken from the original ( See What the World Looked Like When the World Wide Web Was Born), to continue reading please visit the site. Remember to respect the Author & Copyright.

The World Wide Web turned 30 this week, and everyone celebrated the best way they know how—by coming up with big lists of all the internet-related topics we’ve dealt with (or obsessed over) the past few decades. That includes dial-up modems, AOL, avatar- and comic-based chatrooms, and Homestar Runner, which are just a…

This robot can park your car for you

The content below is taken from the original ( This robot can park your car for you), to continue reading please visit the site. Remember to respect the Author & Copyright.

French startup Stanley Robotics showed off its self-driving parking robot at Lyon-Saint-Exupéry airport today. While I couldn’t be there in person, the service is going live by the end of March 2019. And here’s what it looks like.

The startup has been working on a robot called Stan. These giant robots can literally pick up your car at the entrance of a gigantic parking lot and then park it for you. You might think that parking isn’t that hard, but it makes a lot of sense when you think about airport parking lots.

Those parking lots have become one of the most lucrative businesses for airport companies. But many airports don’t have a ton of space. They keep adding new terminals and it is becoming increasingly complicated to build more parking lots.

That’s why Stanley Robotics can turn existing parking lots into automated parking areas. It’s more efficient as you don’t need space to circulate between all parking spaces. According to the startup, you can create 50 percent more spaces in the same surface area.

If you’re traveling for a few months, Stan robots can put your car in a corner and park a few cars in front of your car. Stan robots will make your car accessible shortly before you land. This way, it’s transparent for the end user.

At Vinci’s Lyon airport, there will be 500 parking spaces dedicated to Stanley Robotics. Four robots will work day in, day out to move cars around the parking lot. But Vinci and Stanley Robotics already plan to expand this system to up to 6,000 spaces in total.

According to the airport website, booking a parking space for a week on the normal P5 parking lot costs €50.40. It costs €52.20 if you want a space on P5+, the parking lot managed by Stanley Robotics.

Self-driving cars are not there yet because the road is so unpredictable. But Stanley Robotics has removed all the unpredictable elements. You can’t walk on the parking lot. You just interact with a garage at the gate of the parking lot. After the door is closed, the startup controls the environment from start to finish.

Now, let’s see if Vinci Airports plans to expand its partnership with Stanley Robotics to other airports around the world.

Celebrate Pi Day With Our Favorite Pie Tips

The content below is taken from the original ( Celebrate Pi Day With Our Favorite Pie Tips), to continue reading please visit the site. Remember to respect the Author & Copyright.

Ah, Pi Day. A day that combines two things I love in theory but have more trouble with than I’d like to admit—baking and math. Oddly, I got into cooking by baking pies to help deal with the stress of taking calculus in college, and clearly one stuck a little more than the other. (Hint: My life no longer requires…

Here’s Why VMUG Members Attend VMware’s vForum Online, and You Should Too!

The content below is taken from the original ( Here’s Why VMUG Members Attend VMware’s vForum Online, and You Should Too!), to continue reading please visit the site. Remember to respect the Author & Copyright.

By Brad Tompkins. Professional Development and continuing education are top of mind for VMUG members. According to the annual VMUG Member Survey, professional development was ranked the #1 most important reason for being involved with VMUG over the past two years. One way to keep your technical skills sharp is by attending VMware’s vForum Online on

The post Here’s Why VMUG Members Attend VMware’s vForum Online, and You Should Too! appeared first on VMTN Blog.

What is Ansible?

The content below is taken from the original ( What is Ansible?), to continue reading please visit the site. Remember to respect the Author & Copyright.

What is Ansible? Ansible is an open-source IT automation engine, which can remove drudgery from your work life, and will also dramatically improve the scalability, consistency, and reliability of your IT environment. We’ll start to explore how to automate repetitive system administration tasks using Ansible, and if you want to learn more, you can go much deeper into how to use Ansible with Cloud Academy’s new Introduction to Ansible learning path.

What is Ansible and what can it automate?

You can use Ansible to automate three types of tasks:

  • Provisioning: Set up the various servers you need in your infrastructure.
  • Configuration management: Change the configuration of an application, OS, or device; start and stop services; install or update applications; implement a security policy; or perform a wide variety of other configuration tasks.
  • Application deployment: Make DevOps easier by automating the deployment of internally developed applications to your production systems.

Ansible can automate IT environments whether they are hosted on traditional bare metal servers, virtualization platforms, or in the cloud. It can also automate the configuration of a wide range of systems and devices such as databases, storage devices, networks, firewalls, and many others.

The best part is that you don’t even need to know the commands used to accomplish a particular task. You just need to specify what state you want the system to be in and Ansible will take care of it. For example, to ensure that your web servers are running the latest version of Apache, you could use a playbook similar to the following and Ansible would handle the details.

---
- hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum: name=httpd state=latest
  - name: write the apache config file
    template: src=/srv/httpd.j2 dest=/etc/httpd.conf
    notify:
    - restart apache
  - name: ensure apache is running (and enable it at boot)
    service: name=httpd state=started enabled=yes
  handlers:
    - name: restart apache
      service: name=httpd state=restarted

The line in the above playbook that actually installs or updates Apache is “yum: name=httpd state=latest”. You just specify the name of the software package (httpd) and the desired state (latest) and Ansible does the rest. The other tasks in the playbook update the Apache config file, restart Apache, and enable Apache to run at boot time. Take a look at one of our previous blog posts on how to build Ansible playbooks.
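If you want to run a playbook like this yourself, here is a minimal sketch. It assumes the playbook above is saved as webservers.yml and that you have an inventory file named inventory listing your hosts under a [webservers] group; both file names are placeholders, not something from the original post.

ansible-playbook -i inventory webservers.yml

The same package task can also be run as a one-off, ad-hoc command without writing a playbook at all. Here -b tells Ansible to become root on the managed hosts, -m selects the yum module, and -a passes its arguments:

ansible webservers -i inventory -b -m yum -a "name=httpd state=latest"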

Why Ansible?

There are many other IT automation tools available, including more mature ones like Puppet and Chef, so why would you choose Ansible? The main reason is simplicity. Michael DeHaan, the creator of Ansible, already had a lot of experience with other configuration management tools when he decided to develop a new one. He said that he wanted “a tool that you could not use for six months, come back to, and still remember.”

DeHaan accomplished this by using YAML, a simple configuration language. Puppet and Chef, on the other hand, use Ruby, which is more difficult to learn. This makes Ansible especially appealing to system administrators.

DeHaan also simplified Ansible deployment by making it agentless. That is, instead of having to install an agent on every system you want to manage (as you have to do with Puppet and Chef), Ansible just requires that systems have Python (on Linux servers) or PowerShell (on Windows servers) and SSH.
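As a small illustration of the agentless model, an inventory is just a text file listing the machines you want Ansible to manage over SSH; nothing has to be installed on them beyond Python. The host names below are placeholders:

[webservers]
web1.example.com
web2.example.com

[dbservers]
db1.example.com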

What is Ansible? A New Learning Path

Although Ansible is easier to learn than many of the other IT automation engines, you still need to learn a lot before you can start using it. To help you with this, Cloud Academy has released its Introduction to Ansible learning path.

This learning path includes three video courses:

  • What is Configuration Management?: A high-level overview of configuration management concepts and software options.
  • Getting Started With Ansible: Covers everything from Ansible components to writing and debugging playbooks in YAML.
  • Introduction to Managing Ansible Infrastructure: An overview of Ansible Tower (Red Hat’s proprietary management add-on to Ansible) and Ansible Galaxy (a place to find and share Ansible content).

Hands-on practice is critical when learning a new technology, so we have included two labs in the learning path.

Finally, you can test your knowledge of Ansible by taking the quizzes.

Conclusion

Whether you need to make your life easier by automating your administration tasks or you’re interested in becoming a DevOps professional, Ansible is a good place to start. Learn how to streamline your IT operations with Introduction to Ansible.

The post What is Ansible? appeared first on Cloud Academy.

blogged: Commit, Push, Deploy – Git in the Microsoft Azure Cloud

The content below is taken from the original ( blogged: Commit, Push, Deploy – Git in the Microsoft Azure Cloud), to continue reading please visit the site. Remember to respect the Author & Copyright.

https://dev.to/azure/commit-push-deploygit-in-the-microsoft-azure-cloud-297l

It covers how to deploy using git deploy. So essentially keep using Git and add Azure CLI to your tool belt.
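For context, a minimal local-Git deployment to an Azure App Service web app looks roughly like the sketch below. The resource names and credentials are placeholders, not values from the linked post, and exact flags can vary between Azure CLI versions:

az group create --name myResourceGroup --location westeurope
az appservice plan create --name myPlan --resource-group myResourceGroup --sku B1
az webapp create --name my-unique-app --resource-group myResourceGroup --plan myPlan --deployment-local-git
az webapp deployment user set --user-name <deploy-user> --password <deploy-password>
git remote add azure <git-clone-url-printed-by-the-webapp-create-command>
git push azure master

After the push, App Service builds and deploys the committed code.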

submitted by /u/dizabl to r/AZURE

IBM’s AI blood test could help with early Alzheimer’s detection

The content below is taken from the original ( IBM’s AI blood test could help with early Alzheimer’s detection), to continue reading please visit the site. Remember to respect the Author & Copyright.

Previous attempts to find a cure for Alzheimer's have ended in failure, but a new study out of IBM Research has the potential to spark a major breakthrough. A group of IBM researchers has harnessed the power of machine learning to figure out a way to…

Choose UEFI or Legacy BIOS when booting into Windows Setup or Windows PE

The content below is taken from the original ( Choose UEFI or Legacy BIOS when booting into Windows Setup or Windows PE), to continue reading please visit the site. Remember to respect the Author & Copyright.

Compared to BIOS, Unified Extensible Firmware Interface (UEFI) makes the computer more secure. If your laptop supports UEFI, you should use it. However, at times, the legacy BIOS is still useful. An example – if you’re booting from a network […]

This post Choose UEFI or Legacy BIOS when booting into Windows Setup or Windows PE is from TheWindowsClub.com.

Microsoft opens first datacenters in Africa with general availability of Microsoft Azure

The content below is taken from the original ( Microsoft opens first datacenters in Africa with general availability of Microsoft Azure), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today, I am pleased to announce the general availability of Microsoft Azure from our new cloud regions in Cape Town and Johannesburg, South Africa. Nedbank, Peace Parks Foundation, and eThekwini water are just a few of the organizations in Africa leveraging Microsoft cloud services today and will benefit from the increased computing resources and connectivity from our new cloud regions.

The launch of these regions marks a major milestone for Microsoft as we open our first enterprise-grade datacenters in Africa, becoming the first global provider to deliver cloud services from datacenters on the continent. The new regions provide the latest example of our ongoing investment to help enable digital transformation and advance technologies such as AI, cloud, and edge computing across Africa.

By delivering the comprehensive Microsoft Cloud — comprising Azure, Office 365, and Dynamics 365 — from datacenters in a given geography, we offer scalable, available, and resilient cloud services to companies and organizations while meeting data residency, security, and compliance needs. We have deep expertise in protecting data and empowering customers around the globe to meet extensive security and privacy requirements, including offering the broadest set of compliance certifications and attestations in the industry.

With 54 regions announced worldwide, more than any other cloud provider, Microsoft’s global cloud infrastructure will connect the new regions in South Africa with greater business opportunity, help accelerate new global investment, and improve access to cloud and Internet services across Africa.

Accelerating digital transformation in Africa

As we execute our expansion strategy, we consider the demand for locally delivered cloud services and the opportunity for digital transformation in the market. According to a study from IDC, spending on public cloud services in South Africa will nearly triple over the next five years, and the adoption of cloud services will generate nearly 112,000 net-new jobs in South Africa by the end of 2022. The increased utilization of public cloud services and the additional investments into private and hybrid cloud solutions will enable organizations in South Africa to focus on innovation and building digital businesses at scale.

Nedbank, a leading African bank that services a diverse client base in South Africa and the rest of Africa, is pursuing a transformation strategy with the Azure cloud platform to enable its digital aspirations. Microsoft has had a long relationship with Nedbank which has culminated in enabling its migration to the cloud to help increase its competitiveness, agility, and customer focus. Azure also provides compliance technologies that assist Nedbank to increase data privacy and security which are primary concerns of its customers, regulators, and investors. Nedbank has adopted a hybrid and multi-vendor cloud strategy in which Microsoft is an integral partner.

The Peace Parks Foundation, in collaboration with Cloudlogic, uses Azure to rapidly deploy infrastructure and solutions in far-flung protected spaces as well as to compute a considerable volume of data around at-risk species and wildlife in multiple conservation areas spanning thousands of kilometers. In efforts to sustain the delicate ecosystem and keystone species, such as the black and white rhinoceros, Peace Parks Foundation processes up to tens of thousands of images captured monthly on wildlife cameras in remote areas to monitor possible poaching activity. In the future, Peace Parks will leverage the new cloud infrastructure for radio over Internet protocol, a high-tech solution to a low-tech problem, to improve radio communication over remote and isolated areas.

eThekwini water is a unit of the eThekwini Municipality in Durban, South Africa responsible for the provision of water and sanitation services critical for sustaining life for 3.5 million residents in a 2,000+ square kilometer service area. In partnership with Cloudlogic, eThekwini water is using Azure for critical application monitoring as well as site failover and disaster recovery initiatives. It’ll benefit from locally delivered cloud services to improve performance of real-time reporting and monitoring of water infrastructure 24 hours a day, seven days a week.

Empowering people and organizations across Africa

Microsoft has long been working to support organizations, local start-ups, and NGOs in Africa that have the potential to solve some of the biggest problems facing humanity, such as the scarcity of water and food as well as economic and environmental sustainability.

In 2013, we launched Microsoft 4Afrika investing in start-ups, partners, small-to-medium enterprises, governments, and youth on the African continent. The program is focused on delivering affordable access to the Internet, developing skilled workforces, and investing in local technology solutions. Africa has the potential to help lead the technology revolution; therefore, Microsoft is empowering organizations and people to drive economic development, inclusive growth, and digital transformation. 4Afrika is Microsoft’s business and market development engine on the continent, which is preparing the market to embrace cloud technology.

We have also extended FarmBeats, an end-to-end approach to help farmers benefit from technology innovation at the edge, to Nairobi, Kenya. FarmBeats strives to enable data-driven farming as we believe that data, coupled with the farmer’s knowledge and intuition about his or her farm, can help increase farm productivity and reduce costs. The new effort in Nairobi will be focused on addressing the specific challenges of farming in Africa with the intent of expanding to other African countries.

Bringing the complete cloud to Africa

The new cloud regions in Africa are connected with Microsoft’s other regions via our global network, one of the largest and most innovative on the planet, which spans more than 100,000 miles (161,000 kilometers) of terrestrial fiber and subsea cable systems to deliver services to customers. We’ve expanded our network footprint to reach Egypt, Kenya, Nigeria, and South Africa and will be expanding to Angola. Microsoft is bringing the global cloud closer to home for African organizations and citizens through our trans-Arabian paths between India and Europe, as well as our trans-Atlantic systems including Marea, the highest-capacity cable to ever cross the Atlantic.

Azure is the first of Microsoft’s intelligent cloud services to be delivered from the new datacenters in South Africa. Office 365, Microsoft’s cloud-based productivity solution, is anticipated to be available by the third quarter of calendar year 2019, and Dynamics 365, the next generation of intelligent business applications, is anticipated for the fourth quarter.

Follow these links to learn more about the new cloud services in South Africa and the availability of Azure regions and services across the globe.

Try It Before You Buy It: myZerto Labs Program

The content below is taken from the original ( Try It Before You Buy It: myZerto Labs Program), to continue reading please visit the site. Remember to respect the Author & Copyright.

Zerto is launching a program that will allow potential customers to try out its IT Resilience Platform with no strings attached. The myZerto Labs… Read more at VMblog.com.

How to make people sit up and use 2-factor auth: Show ’em a vid reusing a toothbrush to clean the toilet, then compare it to password reuse

The content below is taken from the original ( How to make people sit up and use 2-factor auth: Show ’em a vid reusing a toothbrush to clean the toilet, then compare it to password reuse), to continue reading please visit the site. Remember to respect the Author & Copyright.

Education, education, education is key to security

RSA Despite multi-factor authentication being on hand to protect online accounts and other logins from hijackings by miscreants for more than a decade now, people still aren’t using it. Today, a pair of academics revealed potential reasons why there is limited uptake.…

Service Fabric Processor in public preview

The content below is taken from the original ( Service Fabric Processor in public preview), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft clients for Azure Event Hubs have always had two levels of abstraction. There is the low-level client, which includes event sender and receiver classes which allow for maximum control by the application, but also force the application to understand the configuration of the Event Hub and maintain an event receiver connected to each partition. Built on top of that low-level client is a higher-level library, Event Processor Host, which hides most of those details for the receiving side. Event Processor Host automatically distributes ownership of Event Hub partitions across multiple host instances and delivers events to a processing method provided by the application.

Service Fabric is another Microsoft-provided library, which is a generalized framework for dividing an application into shards and distributing those shards across multiple compute nodes. Many customers are using Service Fabric for their applications, and some of those applications need to receive events from an Event Hub. It is possible to use Event Processor Host within a Service Fabric application, but it is also inelegant and redundant. The combination means that there are two separate layers attempting to distribute load across nodes, and neither one is aware of the other. It also introduces a dependency on Azure Storage, which is the method that Event Processor Host instances use to coordinate partition ownership, and the associated costs.

Service Fabric Processor is a new library for consuming events from an Event Hub that is directly integrated with Service Fabric: it uses Service Fabric’s facilities for managing partitions and reliable storage, and for more sophisticated load balancing. At the same time, it provides a simple programming interface that will be familiar to anyone who has worked with Event Processor Host. The only specific requirement that Service Fabric Processor imposes is that the Service Fabric application in which it runs must have the same number of partitions as the Event Hub from which it consumes. This allows a simple one-to-one mapping of Event Hub partitions to application partitions, and lets Service Fabric distribute the load most effectively.

Service Fabric Processor is currently in preview and available on NuGet at the “Microsoft.Azure.EventHubs.ServiceFabricProcessor” web page. The source code is on GitHub in our .NET Event Hubs client repository. You can also find a sample application available on GitHub.
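If you want to try the preview from a .NET project, one way to pull the library in is with the dotnet CLI, using the package name given above. Because the package is in preview, you may need to add the --version flag with a specific prerelease version; check NuGet for the current one:

dotnet add package Microsoft.Azure.EventHubs.ServiceFabricProcessor

The same package can also be added through the NuGet package manager in Visual Studio.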

From the developer’s point of view, there are two major pieces to creating an application using Service Fabric Processor. The first piece is creating a class that implements the IEventProcessor interface. IEventProcessor specifies methods that are called when processing is starting up for a partition (OpenAsync), when processing is shutting down (CloseAsync), for handling notifications when an error has occurred (ProcessErrorAsync), and for processing events as they come in (ProcessEventsAsync). The last one is where the application’s business logic goes and is the key part of most applications.

The second piece is integrating with Service Fabric by adding code to the application’s RunAsync method, which is called by Service Fabric to run the application’s functionality. The basic steps are:

  • Create an instance of EventProcessorOptions and set any options desired.

  • Create an instance of the IEventProcessor implementation. This is the instance that will be used to process events for this partition.

  • Create an instance of ServiceFabricProcessor, passing the options and processor objects to the constructor.

  • Call RunAsync on the ServiceFabricProcessor instance, which starts the processing of events.

Next steps

For more details follow our programming guide which is available on GitHub. Did you enjoy this blog post? Don’t forget to leave your thoughts and feedback in the comment section below. You can also learn more about Event Hubs by visiting our product page.

Western Digital launches SSDs for different enterprise use cases

The content below is taken from the original ( Western Digital launches SSDs for different enterprise use cases), to continue reading please visit the site. Remember to respect the Author & Copyright.

Last week I highlighted a pair of ARM processors with very different use cases, and now the pattern repeats as Western Digital, a company synonymous with hard-disk technology, introduces a pair of SSDs for markedly different uses.

The Western Digital Ultrastar DC SN630 NVMe SSD and the Western Digital CL SN720 NVMe SSD both sport internally developed controller and firmware architectures, 64-layer 3D NAND technology, and an NVMe interface, but that’s about where the similarities end.

The Open Compute Project is quickly gaining ground

The content below is taken from the original ( The Open Compute Project is quickly gaining ground), to continue reading please visit the site. Remember to respect the Author & Copyright.

Eight years ago, Facebook launched the Open Compute Project (OCP), an open-source hardware initiative to design the most energy-efficient server gear for massive, hyperscale data centers. The promise was flexibility of hardware and software and designs for greater power efficiency.

Very quickly, Intel, Rackspace, Goldman Sachs and Sun Microsystems’ co-founder Andy Bechtolsheim joined with Facebook to launch the OCP project, with Microsoft joining in 2014.

The project has hummed along quietly with no sales figures until now, thanks to supply chain market research specialists IHS Markit. It surveyed Facebook, Microsoft, and Rackspace as founding partners, and looked at sales to customers beyond those three.

Eliminate Distractions on Websites With This Chrome Extension

The content below is taken from the original ( Eliminate Distractions on Websites With This Chrome Extension), to continue reading please visit the site. Remember to respect the Author & Copyright.

Whenever I sit down to complete a large task or a good amount of reading, I end up being exceptionally susceptible to distractions.

Exploring container security: How DroneDeploy achieved ISO-27001 certification on GKE

The content below is taken from the original ( Exploring container security: How DroneDeploy achieved ISO-27001 certification on GKE), to continue reading please visit the site. Remember to respect the Author & Copyright.

Editor’s note: Aerial data mapping company DroneDeploy wanted to migrate its on-premises Kubernetes environment to Google Kubernetes Engine—but only if it would pass muster with auditors. Read on to learn how the firm leveraged GKE’s native security capabilities to smooth the path to ISO-27001 certification.

At DroneDeploy, we put a lot of effort into securing our customers’ data. We’ve always been proud of our internal security efforts, and receiving compliance certifications validates these efforts, helping us formalize our information security program, and keeping us accountable to a high standard. Recently, we achieved ISO-27001 certification— all from taking advantage of the existing security practices in Google Cloud and Google Kubernetes Engine (GKE). Here’s how we did it.

As a fast-paced, quickly growing B2B SaaS startup in San Francisco, our mission is to make aerial data accessible and productive for everyone. We do so by providing our users with image processing, automated mapping, 3D modeling, data sharing, and flight controls through iOS and Android applications. Our Enterprise Platform provides an admin console for role-based access and monitoring of flights, mapped routes, image capture, and sharing. We serve more than 4,000 customers across 180 countries in the construction, energy, insurance, and mining industries, and ingest more than 50 terabytes of image data from over 30,000 individual flights every month.

Many of our customers and prospects are large enterprises that have strict security expectations of their third-party service providers. In an era of increased regulation (such as Europe’s GDPR law) and data security concerns, the scrutiny on information security management has never been higher. Compliance initiatives are one piece of the overall security strategy that help us communicate our commitment to securing customer data. At DroneDeploy, we chose to start our compliance story with ISO-27001, an international information security standard that is recognized across a variety of industries.

DroneDeploy’s Architecture: Google Kubernetes Engine (GKE)

DroneDeploy was an early adopter of Kubernetes, and we have long since migrated all our workloads from virtual machines to containers orchestrated by Kubernetes. We currently run more than 150,000 Kubernetes jobs each month with run times ranging from a few minutes to a few days. Our tooling for managing clusters evolved over time, starting with hand-crafted bash and Ansible scripts, to the now ubiquitous (and fantastic) kops. About 18 months ago, we decided to re-evaluate our hosting strategy given the decreased costs of compute in the cloud. We knew that managing our own Kubernetes clusters was not a competitive advantage for our business and that we would rather spend our energy elsewhere if we could.

We investigated the managed Kubernetes offerings of the top cloud providers and did some technical due diligence before making our selection—comparing not only what was available at the time but also future roadmaps. We found that GKE had several key features that were missing in other providers such as robust Kubernetes-native autoscaling, a mature control plane, multi-availability zone masters, and extensive documentation. GKE’s ability to run on pre-emptible node pools for ephemeral workloads was also a huge plus.

Proving our commitment to security hardening

But if we were going to make the move, we needed to document our information security management policies and process and prove that we were following best practices for security hardening.

Specifically, when it comes to ISO-27001 certification, we needed to follow the general process:

  1. Document the processes you perform to achieve compliance
  2. Prove that the processes convincingly address the compliance objectives
  3. Provide evidence that you are following the process
  4. Document any deviations or exceptions

While Google Cloud offers hardening guidance for GKE and several GCP blogs to guide our approach, we still needed to prove that we had security best practices in place for our critical systems. With newer technologies, though, it can be difficult to provide clear evidence to an auditor that those best practices are in place; they often live in the form of blog posts by core contributors and community leaders versus official, documented best practices. Fortunately, standards have begun to emerge for Kubernetes. The Center for Internet Security (CIS) recently published an updated compliance benchmark for Kubernetes1.11 that is quite comprehensive. You can even run automated checks against the CIS benchmark using the excellent open source project kube-bench. Ultimately though, it was the fact that Google manages the underlying GKE infrastructure that really helped speed up the certification process.  

Compliance with less pain thanks to GKE

As mentioned, one of the main reasons we switched from running Kubernetes in-house to GKE was to reduce our investment in manually maintaining and upgrading our Kubernetes clusters— including our compliance initiatives. GKE reduces the overall footprint that our team has to manage since Google itself manages and documents much of the underlying infrastructure. We’re now able to focus on improving and documenting the parts of our security procedures that are unique to our company and industry, rather than having to meticulously document the foundational technologies of our infrastructure.

For Kubernetes, here’s a snippet of how we documented our infrastructure using the four steps described above:

  1. We implemented security best practices within our Kubernetes clusters by ensuring all of them are benchmarked using the Kubernetes CIS guide. We use kube-bench for this process, which we run on our clusters once every quarter (a sample invocation follows this list).
  2. A well respected third-party authority publishes this benchmark, which confirms that our process addresses best practices for using Kubernetes securely.
  3. We provided documentation that we assessed our Kubernetes clusters against the benchmark, including the tickets to track the tasks.
  4. We provided the results of our assessment and documented any policy exceptions and proof that we evaluated those exceptions against our risk management methodology.
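To make step 1 above concrete, the quarterly check boils down to running kube-bench against the worker nodes. The exact subcommands and flags vary between kube-bench releases, so treat this as a sketch rather than the precise invocation DroneDeploy uses:

# run the CIS checks that apply to worker nodes, directly on a node
kube-bench node

# or run it inside the cluster as a Kubernetes Job, using the job manifest from the kube-bench repository
kubectl apply -f job.yaml
kubectl logs job/kube-bench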

Similarly to the physical security sections of the ISO-27001 standard, the CIS benchmark has large sections dedicated to security settings for Kubernetes masters and nodes. Because we run on GKE, Google handled 95 of the 104 line items in the benchmark applicable to our infrastructure. For those items that could not be assessed against the benchmark (because GKE does not expose the masters), we provided links to Google’s security documentation on those features (see Cluster Trust and Control Plane Security). Some examples include:

Beyond GKE, we were also able to take advantage of many other Google Cloud services that made it easier for us to secure our cloud footprint (although the shared responsibility model for security means we can’t rely on Google Cloud alone):

  • For OS-level security best practices, we were able to document strong practices for our OS security because we use Google’s Container-Optimized OS (COS), which provides many security best practices by default, such as a read-only file system. All that was left for us to do was to follow best practices to help secure our workloads.
  • We use node auto-upgrade on our GKE nodes to handle patch management at the OS layer for our nodes. For the level of effort, we found that node auto-upgrade provides a good middle ground between patching and stability (see the example command after this list). To date, we have not had any issues with our software as a result of node auto-upgrade.
  • We use Container Analysis (which is built into Google Container Registry) to scan for known vulnerabilities in our Docker images.
  • ISO-27001 requires that you demonstrate the physical security of your network infrastructure. Because we run our entire infrastructure in the cloud, we were able to directly rely on Google Cloud’s physical and network security for portions of the certification (Google Cloud is ISO-27001 certified amongst other certifications).
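As an illustration of the node auto-upgrade point above, enabling it on an existing GKE node pool is a single gcloud command; the cluster, node pool, and zone names below are placeholders:

gcloud container node-pools update default-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --enable-autoupgrade

New node pools can also be created with auto-upgrade on from the start by passing the same flag to gcloud container node-pools create.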

DroneDeploy is dedicated to giving our customers access to aerial imaging and mapping technologies quickly and easily. We handle vast amounts of sensitive information on behalf of our customers, and we want them to know that we are following best security practices even when the underlying technology gets complicated, as in the case of Kubernetes. For DroneDeploy, switching to GKE and Google Cloud has helped us reduce our operational overhead and increase the velocity with which we achieve key compliance certifications. To learn more about DroneDeploy, and our experience using Google Cloud and GKE, feel free to reach out to us.

A startpage to help you find data to hoard

The content below is taken from the original ( A startpage to help you find data to hoard), to continue reading please visit the site. Remember to respect the Author & Copyright.

submitted by /u/slashx25 to r/DataHoarder

I wrote an open source tool that can download your data from all your online accounts

The content below is taken from the original ( I wrote an open source tool that can download your data from all your online accounts), to continue reading please visit the site. Remember to respect the Author & Copyright.

I needed something that downloads my data from my online accounts and aggregates it all into a single timeline. So, say hello to Timeliner: https://github.com/mholt/timeliner

A few years ago, I realized that all my photos go straight from my phone into Google Photos, leaving me with no local copy of them, so if anything ever happened to my Internet connection (which happens a lot — I’m from Iowa, where snow took out our satellite connection all the time) or my Google account, all those memories would be gone, or at least inaccessible to me. That is unacceptable to me in the long term.

Timeliner is designed to be able to work with pretty much any kind of data source / online service, and it supports two modes of getting data:

  • An API
  • Importing from a file (like from Google Takeout, or archives exported from a service)

There’s no way to visualize the timeline yet, but if you’re handy with SQLite, you can use table viewers and/or run SQL queries to go through the data. (The most important thing is that you at least have the data!)
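For example, if you are comfortable with the sqlite3 command-line shell, you can poke around the database Timeliner produces. The database file name and table name below are just placeholders; check the project's wiki for the actual schema:

sqlite3 timeliner.db ".tables"
sqlite3 timeliner.db "SELECT * FROM items LIMIT 10;"

Any graphical SQLite browser will work just as well for casual exploration.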

There are instructions on the wiki if you want to contribute more data sources or improve on existing ones.

It’s still in its very early stages, but it’s at the point now where I’m trusting my home server to run it every week.

Feel free to give it a spin. I hope you’ll find it useful!

submitted by /u/mwholt to r/DataHoarder

Class schedules on Azure Lab Services

The content below is taken from the original ( Class schedules on Azure Lab Services), to continue reading please visit the site. Remember to respect the Author & Copyright.

Classroom labs in Azure Lab Services make it easy to set up labs by handling the creation and management of virtual machines and enabling the infrastructure to scale. Through our continuous enhancements to Azure Lab Services, we are proud to share that the latest deployment now includes added support for class schedules.

Schedules management is one of the key features requested by our customers. This feature helps teachers easily create, edit, and delete schedules for their classes. A teacher can set up a recurring or a one-time schedule and provide a start date, end date, and time for the class in the time zone of their choice. Schedules can be viewed and managed through a simple, easy-to-use calendar view.

Screenshot of Azure Lab Services scheduling calendar

Students’ virtual machines are turned on and ready to use when a class schedule starts and will be turned off at the end of the schedule. This feature helps limit the usage of virtual machines to class times only, thereby helping IT admins and teachers manage costs efficiently.

Schedule hours are not counted against quota allotted to a student. Quota is the time limit outside of schedule hours when a student can use the virtual machine.

With schedules, we are also introducing no quota hours. When no quota hours are set for a lab, students can only use their virtual machines during scheduled hours or if the teacher turns on virtual machines for the students to use.

Screenshot of quota per user selection

Students will be able to clearly see when a lab schedule session is in progress on their virtual machines view.

Screenshot of my virtual machines dashboard in Azure Lab Services

You can learn more about how to use schedules in our documentation, “Create and manage schedules for classroom labs in Azure Lab Services.” Please give this feature a try and provide feedback at the Azure Lab Services UserVoice forum. If you have a question, please post it on Stack Overflow.

Virgin Galactic sends its first passenger to the edge of space

The content below is taken from the original ( Virgin Galactic sends its first passenger to the edge of space), to continue reading please visit the site. Remember to respect the Author & Copyright.

Virgin Galactic sent its first test passenger into sub-space today. The company's chief astronaut instructor Beth Moses accompanied two pilots on a flight 55.85 miles above the Earth, just a few miles below the internationally recognized space bounda…

Guest Mode vs Incognito Mode in Chrome browser

The content below is taken from the original ( Guest Mode vs Incognito Mode in Chrome browser), to continue reading please visit the site. Remember to respect the Author & Copyright.

Google Chrome is a very versatile browser. With the number of features and the compatibility it delivers to a user, it is really helpful and, moreover, an all-in-one package. Features like support for a variety of extensions, themes, […]

This post Guest Mode vs Incognito Mode in Chrome browser is from TheWindowsClub.com.

certifytheweb (4.1.4)

The content below is taken from the original ( certifytheweb (4.1.4)), to continue reading please visit the site. Remember to respect the Author & Copyright.

The GUI to manage free letsencrypt.org https certificates for Windows