Start Me Up: What Has The Windows 95 Desktop Given Us 25 Years Later?

The content below is taken from the original ( Start Me Up: What Has The Windows 95 Desktop Given Us 25 Years Later?), to continue reading please visit the site. Remember to respect the Author & Copyright.

We’ve had something of an anniversary of late, and it’s one that will no doubt elicit a variety of reactions from our community. It’s now 25 years since the launch of Windows 95, the operating system that gave the majority of 1990s PC users their first taste of a desktop-based GUI and a 32-bit operating system.

To the strains of the Rolling Stones’ “Start Me Up”, Microsoft execs including Bill Gates himself jubilantly danced on stage at the launch of what would arguably become the company’s defining product, perhaps oblivious to the line “You make a grown man cry”, which strayed uncomfortably close to the user experience when faced with some of the software’s shortcomings.

Its security may seem laughable by the standards of today and the uneasy marriage of 16-bit DOS underpinning a 32-bit Windows operating system was clunky even in its heyday, but perhaps now is the best time to evaluate it unclouded by technical prejudice. What can we see of Windows 95 in the operating systems we use today, and thus from that can we ask the question: What did Windows 95 get right?

For Most People, This Was Where It All Started

A test of the legacy of Windows 95’s desktop comes in how intuitive it still is for users of a 2020-era GUI OS.

Windows 95 was by no means the first operating system to use a desktop-based GUI. While earlier Windows GUIs had been more akin to graphical launchers, there had been a succession of other GUI-based computers since their Xerox PARC ancestor, so Macintosh and Amiga owners among others could have been forgiven for wondering why it took Redmond so long to catch up. But for all the clamour from the 68k-based fans, the indulgent smiles from X Window users on UNIX workstations in industry and universities, and the as yet unfulfilled desktop fantasies of 1995’s hardy band of GNU/Linux users, the fact remains that for the majority of the world’s desktop computer users back then it would be the Microsoft Sound that heralded their first experience of a modern GUI operating system.

We’re lucky, here in 2020, to have such computing power at our fingertips that we can run in-browser simulations, or even outright emulations running real code, of most of the 1990s desktops. Windows 95 can be directly compared with its predecessor, and then with its contemporaries such as Macintosh System 7 and Amiga Workbench 3.1. Few people would have had the necessary four machines side-by-side to do this back then, but paging between browser tabs today, the differences and relative shortcomings rapidly become obvious. In particular, the menu and windowing systems of the Mac and Amiga desktops, which seemed so advanced when we had them in front of us, start to feel cumbersome and long-winded in a way that the Windows 95 interface, for all its mid-90s Microsoft aesthetic, just doesn’t.

Using Amiga Workbench again after 25 years provides an instant reminder that an essential add-on to the Workbench disk back in the day was a little utility that gave window focus to the mouse position, brought right-click menus up at the mouse pointer, and brought focused windows to the front. Good GUIs don’t need their shortcomings fixed with a utility to stop them being annoying, they — to borrow a phrase from Apple themselves — just work. Right-click context menus at the mouse pointer position, the Start menu bringing access to everything into one place, and the taskbar providing an easy overview of multitasking: none of them was earth-shattering on its own, but together they established the Windows GUI as the one that became a natural environment for users.

Finding the Very Long Shadow of ’95 Today

If you miss ’95, ReactOS is probably the closest you can get here in 2020.

Returning to the present, Windows 10, the spiritual if not codebase descendant of Windows 95, has a Start menu and a taskbar that will be visibly familiar to a user from 25 years before. They were so popular with users that when Windows 8 attempted to remove them there was something of a revolt, and Microsoft returned them in later versions. The same features appear in plenty of desktop environments on other operating systems, including GNU/Linux distributions; indeed they can be found on my laptop running an up-to-date Linux Mint. Arguments will probably proceed at length over whether it or the dock-style interface found on NeXT, macOS, and plenty of other GNU/Linux distros is better, but this legacy of Windows 95 has proved popular enough that it is likely to remain with us for the foreseeable future.

It’s odd, sitting down for this article at a Windows 95 desktop for the first time in over two decades. It’s so familiar that despite not having possessed a Windows desktop for around a decade I could dive straight into it without the missteps I had when revisiting Amiga Workbench. It’s almost a shock then to realise that it’s now a retrocomputing platform, and there’s little in my day-to-day work that I could still do on a Windows 95 machine. Perhaps it’s best to put it down before I’m reminded about Blue Screens Of Death, driver incompatibilities, or Plug and Pray, and instead look at its echoes in my modern desktop. Maybe it did get one or two things right after all.

It's now safe to turn off your computer, the Windows 95 end screen.

Header image: Erkaha / CC BY-SA 4.0

Why Unifying Payments Infrastructure Will Boost Financial Inclusion

The content below is taken from the original ( Why Unifying Payments Infrastructure Will Boost Financial Inclusion), to continue reading please visit the site. Remember to respect the Author & Copyright.

The latest episode of Block Stars is the conclusion of current Ripple CTO David Schwartz’s two-part conversation with former Ripple CTO, Stefan Thomas. Stefan is now the founder and CEO of Coil, a Web Monetization service that streams payments to publishers and creators based on the amount of time Coil members around the world spend enjoying their content.  

Processing high volumes of these micropayments is not cost-effective using existing financial infrastructure, especially when it comes to cross-border transfers. During his time at Ripple, Stefan began working on Interledger Protocol (ILP) as a way to provide faster and cheaper global remittances. He wanted to create a unified payments infrastructure, much like how the internet unified communications.

“Pre-internet you had all these different communications companies that had their own cables and their own wires and their own satellites,” explains Stefan. “The internet is a generic communications infrastructure. You can use it for any kind of communication you want. I think about ILP [as a] general infrastructure for the movement of value.”

ILP allows global businesses, remittance services and payments companies to send payments across different ledgers. The open architecture enables interoperability for any value transfer system. More efficient and affordable payments also make them more accessible, bringing financial inclusion to more people around the world.

“For the majority of people, a centralized system is great,” says Stefan. “But if you’re part of a marginalized population…that central authority is not going to care about your needs very much. A decentralized system is better at serving more obscure use cases because people can take more of a self-help approach.”

Much of Stefan’s interest in financial inclusion was inspired by his time as a freelance web designer. Getting paid by global clients and, in turn, paying the subcontractors he employed was cumbersome and expensive—especially when contrasted with the hyper-efficient way everyone communicated over the internet. After a meeting with potential partners, he realized that companies across all industries faced this same large-scale issue.

Creating greater global access with a unified payments infrastructure is the ultimate long-term benefit of ILP. Though he concedes that some form of universal approach is inevitable, Stefan’s motivation for leading the charge is to make sure the resulting system includes as many people as possible.

“It’s great if I can acquire the skills to start a business,” he concludes. “It’s great if I can communicate with my potential customers. But if I can’t get the financing for my business and I can’t get my customers to pay me, there are still pieces missing. For me, [ILP is] completing that picture…interact[ing] with people in other countries economically. I expect that to be hugely empowering for a lot of people.”

Listen to part two of David and Stefan’s conversation on episode 10 of Block Stars for more on how and why Stefan developed ILP, and to discover the critical piece of advice that Ripple’s outgoing CTO passed on to his successor.

The post Why Unifying Payments Infrastructure Will Boost Financial Inclusion appeared first on Ripple.

How to view Security Questions and Answers for Local Account in Windows 10

The content below is taken from the original ( How to view Security Questions and Answers for Local Account in Windows 10), to continue reading please visit the site. Remember to respect the Author & Copyright.

Windows 10 comes with a built-in feature to add security questions for a local account. It helps to reset your local account password in case […]

This article How to view Security Questions and Answers for Local Account in Windows 10 first appeared on TheWindowsClub.com.

WinEd 3.26 released

The content below is taken from the original ( WinEd 3.26 released), to continue reading please visit the site. Remember to respect the Author & Copyright.

After a decade of gathering dust that ended when Steve Fryatt took a look at the application, releasing a new version last month, WinEd has been updated again. This time,… Read more »

Using Trainable Classifiers to Assign Office 365 Retention Labels

The content below is taken from the original ( Using Trainable Classifiers to Assign Office 365 Retention Labels), to continue reading please visit the site. Remember to respect the Author & Copyright.

Training a classifier

The Challenge of Retention Processing

Retention labels control how long items remain in an Office 365 workload and what happens once the retention period expires. Labels can be assigned manually, but the success of manual labeling depends on users understanding how to make the best choice from the available retention labels. Sometimes the choice is clear, as in a document which obviously contains information that should be kept, and sometimes it’s not.

Auto-label policies try to solve the problem by looking for documents and messages which match patterns. For example, if a document holds four instances of a credit card number, it should be assigned the Financial Data label. On the other hand, if a document holds personal information like a social security number, it should get the PII Data label.

Auto-label policies work well when items hold content that is identifiable by matching against the 100-plus sensitive data types defined by Microsoft or a keyword search for a specific phrase (like “project Contoso”). They are especially valuable when organizations have large numbers of existing documents to be labeled. Computers are better at repetitive tasks than humans, and it makes sense to deploy intelligent technology to find and label documents at scale.

That is, if you can be sure that the documents you want to label can be accurately located. Sensitive data types and keyword searches do work, but there’s always likely to be some form of highly-specific information in an organization that searching by data type or keyword doesn’t quite suit. Using a trainable classifier might help in these situations.

Standard Classifiers and Licensing

A trainable classifier is a digital map of a type of document (Office 365 has supported digital fingerprints extracted from template documents for several years). The classifier is trainable because it learns by observing samples of the documents you want to process plus some examples of non-matching items until the predictions made by the classifier are accurate enough for it to be used.

Microsoft has a set of classifiers for use in compliance features, like the Profanity or Threat classifiers used in communication compliance policies. As the names suggest, these classifiers identify items containing profane or threatening text. Microsoft created the classifiers by training them with large numbers of text examples for the classifiers to learn the essential signs of what might constitute profane or threatening language.

A preview allowing tenants to create and use trainable classifiers in Office 365 is available in the data classification section of the Microsoft 365 compliance center. Like all auto-label functionality, when trainable classifiers are generally available, they’ll need Office 365 or Microsoft 365 E5 compliance licenses.

Creating a Trainable Classifier

To create a trainable classifier, you’ll need at least 250 samples of the type of document you eventually want to use the classifier to locate (more is better). The documents can’t be encrypted, must be in English (for now), and be stored in SharePoint Online folders that only hold items to be used for training.

To test things out, I created a classifier for Customer Invoices using ten years’ worth of the Excel worksheets I use to generate invoices. The steps I took were:

  • Create a folder in a SharePoint Online site and copy customer invoices to it. The training model is built from these documents.
  • Create the new trainable classifier in the Microsoft 365 compliance center by giving the classifier a name and pointing it to the folder holding the seed documents.
  • Wait for the seed documents to be processed to create the training model (this can take 24 to 48 hours). After indexing the folder, the new classifier will examine the seed documents to understand their characteristics. In my case, what makes an invoice? For instance, the classifier will learn that invoices have a customer name, the name of my company, a date, some lines of billing information, and instructions for how to pay. Although the seed documents contain different information, the essential structure of the documents is the same, and this is what helps the classifier learn how to recognize future documents of the same type.
  • Go through a review process (batches of 30 items) to check the predictions made by the classifier. A human review tells the classifier when it is right or wrong (Figure 1). The training model is updated after you complete a batch and then applied to the next batch of reviews.
Figure 1: Training a classifier (image credit: Tony Redmond)

Publish the Classifier

As testing proceeds, the accuracy of the classifier should improve as it processes more seed documents. Eventually the accuracy will get good enough (Figure 2) and you’ll be able to publish the classifier to make it available to auto-label policies. Microsoft says that it has seen successful classifiers at 88% accuracy, and provided that the classifier is stable and predictable at that point, it’s good to go. It’s important that you don’t rush to publish until the classifier is thoroughly trained, because you can’t force the classifier to go through extra training after publication.

Figure 2: Ready to publish a trainable classifier

Two steps remain before you can use the trainable classifier. First, you create a suitable retention label for the classifier. This can be an existing label, but you might want to create a new label for exclusive use with the classifier.

Second, you create an auto-label policy to apply the chosen label when the trainable classifier matches an item. The policy is built from the label, the classifier (Figure 3), and the locations where you want auto-labeling to happen. This can be all SharePoint sites and mailboxes in the tenant or just a selected few. My recommendation is to start with one or two sites and monitor progress until you’re happy to use the classifier everywhere.

Figure 3: Choosing a trainable classifier in an auto-label policy (image credit: Tony Redmond)

Differentiating Between SharePoint Sites and SharePoint Sites

For some reason, auto-label policies differentiate between “regular” SharePoint sites and those connected to Microsoft 365 groups. Make sure that you select the right category: I spent a week or so wondering why a policy wasn’t working only to discover that it was because I had input the URL of the site belonging to a group (under SharePoint sites) instead of the group name (under Microsoft 365 groups). I don’t understand why Microsoft differentiates regular and group-connected sites.

It’s possible that you might want to apply labels only to documents in a site belonging to a group and not to messages in the group mailbox, but there doesn’t seem to be a good way to do this in the current setup.

Checking Classifier Effectiveness

As noted above, once published you can’t retrain a classifier, but you can check what it’s doing by monitoring items labeled by the auto-label policy. Remember that auto-label policies will not process items that have already been assigned a label.

The simplest test is to examine the retention labels on documents which you expect to be auto-labeled. If the expected labels are present, there’s a reasonable chance that the classifier is working as expected. To confirm that this is true, the activity explorer in the data classification section of the Microsoft 365 compliance center (Figure 4) gives an insight into the application of retention labels and sensitivity labels.

Figure 4: Documents show up as being auto-labeled (image credit: Tony Redmond)

You can also check by looking for audit records in the Office 365 audit log. Records for ComplianceSettingChanged operations are generated when retention labels are applied, but only for SharePoint Online and OneDrive for Business documents.
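
If you prefer to check in bulk, a minimal script along the following lines can sift an audit log export for those records. This is only a sketch: it assumes you have already exported the audit search results to CSV and that the export uses the usual Operations and AuditData columns; the file name and field names are illustrative and may differ in your tenant.

    import csv
    import json

    AUDIT_EXPORT = "audit_log_export.csv"  # hypothetical export file name

    with open(AUDIT_EXPORT, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Keep only the retention-label application events mentioned above.
            if row.get("Operations") != "ComplianceSettingChanged":
                continue
            # AuditData carries the event detail as a JSON string.
            detail = json.loads(row.get("AuditData", "{}"))
            # Field names inside AuditData are assumptions; inspect one record first.
            print(detail.get("UserId"), detail.get("ObjectId"))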

Black Box Processing

Checking outputs from a process is a good way of knowing if the process works, but it’s not as satisfactory as it would be if greater visibility existed into aspects of auto-label policies such as:

  • When the auto-label policy is processed against the selected locations.
  • What documents are classified (and documents that match but are not labeled because they already have a label).
  • Any errors which occur.

Ideally, an administrator should be able to view an auto-label policy and see details of recent runs. It would also be good if the administrator could force the policy to run against one or more selected locations, much in the same way that a site owner can force SharePoint Online to reindex a site.

Good Application of Machine Learning

Even though the implementation of trainable classifiers in auto-label policies has some rough edges, I like the general thrust of what Microsoft is trying to do. Being able to build tenant-specific classifiers based on real-life information is goodness. Casting more light into how classifiers work when used in auto-label policies would make these policies so much sweeter.

The post Using Trainable Classifiers to Assign Office 365 Retention Labels appeared first on Petri.

Should You Be Using Cloud-Based Digital CAD Platforms?

The content below is taken from the original ( Should You Be Using Cloud-Based Digital CAD Platforms?), to continue reading please visit the site. Remember to respect the Author & Copyright.

For years, desktop-based, on-site CAD tools have been the standard for designers and engineers working in various industries. These tools are… Read more at VMblog.com.

Global heatmap of cheater density says Brazil is the worst at video games, but there’s no data on China

The content below is taken from the original ( Global heatmap of cheater density says Brazil is the worst at video games, but there’s no data on China), to continue reading please visit the site. Remember to respect the Author & Copyright.

Script kiddies run rampant in Minecraft

Ever torn your keyboard from the desk and flung it across the room, vowing to find the “scrub cheater” who ended your run of video-gaming success? Uh, yeah, us neither, but a study into the crooked practice might help narrow down the hypothetical search.…

Hidden Windows Terminal goodies to check out: Retro mode that emulates blurry CRT display – and more

The content below is taken from the original ( Hidden Windows Terminal goodies to check out: Retro mode that emulates blurry CRT display – and more), to continue reading please visit the site. Remember to respect the Author & Copyright.

Don’t worry, there are some useful features in the update too

Microsoft has bequeathed new capabilities to both the released and the preview versions of Windows Terminal, a feature-laden alternative to the command prompt.…

How to Identify Unsupported Teams Devices using Endpoint Manager

The content below is taken from the original ( How to Identify Unsupported Teams Devices using Endpoint Manager), to continue reading please visit the site. Remember to respect the Author & Copyright.


At the end of June Microsoft announced that they would retire Teams mobile support for Android 4.4 (KitKat) by September this year, which is just around the corner.

This in general is good, because if you use Intune to manage your devices today, then you should be planning to move to Android Enterprise management features that require Android 5.0 or higher – or ideally Android 6.0 and above.

However, moving to a newer version of Android isn’t straightforward because unlike Apple’s iOS, which has a clear-cut set of definitions for which devices will get OS updates, the decision to update an Android OS rests with various device manufacturers, and in many cases wireless carriers.

Devices affected by this change are smartphones and tablets that are typically at least five years old and include older devices such as the Samsung Galaxy S3. Most devices from around 2015 onward received updates to Android 5 (Lollipop) and Android 6 (Marshmallow); dedicated Teams phones are not affected by this change.

Finding older devices enrolled with Intune

If you enroll devices with Intune, then these will be straightforward to identify. The Intune Company Portal app has only supported Android 5.0 and higher since January 2020, so it’s unlikely you will have received any new enrolments from Android 4.4 devices since then – but devices with the app already installed can continue to enrol today.

To find these devices and export a list, visit the Microsoft Endpoint Manager Admin Center and navigate to Devices>Android Devices. You will see a list of all Android devices currently enrolled:

Figure 1: Viewing Android 4.4 devices in the MEM portal (image credit: Steve Goodman)

If you need to filter the list of enrolled Android devices to those running the affected version, enter 4.4 into the Search by.. box in the UI. You can then use Export to obtain a list of all enrolled devices.
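
If the exported list is long, a short script can pull out just the affected devices. The sketch below is illustrative only: it assumes the export is a CSV and that the column headers include OS version, Device name and Primary user UPN, which may differ in your export.

    import csv

    EXPORT_FILE = "IntuneAndroidDevices.csv"  # hypothetical export file name

    with open(EXPORT_FILE, newline="", encoding="utf-8-sig") as f:
        for device in csv.DictReader(f):
            # Column names are assumptions; check the headers in your own export.
            if device.get("OS version", "").startswith("4.4"):
                print(device.get("Device name"), device.get("Primary user UPN"))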

Finding all older devices using Azure AD Sign-In logs

If you allow mobile devices to use Teams without Intune enrollment, or do not use Intune, then you will need to use a different method to discover devices running older versions of Android.

The Azure AD admin center provides the ability to review, filter and export sign-ins for the last month, which should give you a good indication of all older Android devices connected to your environment.

To find these devices, visit the Azure AD admin center and navigate to Sign-ins. Choose the time period you wish to filter by, then filter by Operating system starts with, entering the value “Android 4”, combined with Application starts with, using the value “Microsoft Teams”.

This will show a comprehensive list of all sign-ins to the Microsoft Teams application from Android 4.4 devices:

Figure 2: Using Azure AD Sign-In logs to find Android 4.4 devices using Teams (image credit: Steve Goodman)

Because these are sign-in logs, you will see an element of duplication, which will make it harder to identify individual devices. Therefore, use the Download option to export a CSV report of the filtered sign-ins. You can then open this in Excel and use Remove Duplicates, with Username as the key, to reduce the list to one line per user:

Figure 3: Using Excel to provide a consolidated list of Android 4.4 Teams clients (image credit: Steve Goodman)
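
If you would rather not do this in Excel, the same consolidation can be scripted. The following is a minimal sketch using pandas, assuming the downloaded CSV has a Username column as shown in the filtered sign-ins; the file names are placeholders.

    import pandas as pd

    # Placeholder file name for the sign-in report downloaded from Azure AD.
    signins = pd.read_csv("InteractiveSignIns.csv")

    # Mirror Excel's Remove Duplicates, keyed on Username, to get one row per user.
    unique_users = signins.drop_duplicates(subset=["Username"])
    unique_users.to_csv("Android44TeamsUsers.csv", index=False)
    print(f"{len(unique_users)} users recently used Teams on Android 4.4")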

After identifying the affected users, you will have several options. For BYOD (bring your own device) scenarios, it is unlikely you will need to provide a replacement, but you will need to inform users that Teams is expected to cease working on their device.

For corporate-owned devices running older versions that you replace, it is worth ensuring any replacement devices you issue will continue to receive security updates as well as Android OS updates.

The post How to Identify Unsupported Teams Devices using Endpoint Manager appeared first on Petri.

Setting Up Virtual Lab Environments Using Azure Lab Services

The content below is taken from the original ( in /r/ AZURE), to continue reading please visit the site. Remember to respect the Author & Copyright.

https://ift.tt/32ENDzA

Support to assess physical, AWS, GCP servers now generally available

The content below is taken from the original ( Support to assess physical, AWS, GCP servers now generally available), to continue reading please visit the site. Remember to respect the Author & Copyright.

The assessments offer Azure suitability analysis, migration cost planning and performance-based rightsizing.

Immersive Reader is now generally available

The content below is taken from the original ( Immersive Reader is now generally available), to continue reading please visit the site. Remember to respect the Author & Copyright.

Immersive Reader is an Azure Cognitive Service for developers who want to embed inclusive capabilities into their apps for enhancing text reading and comprehension for users regardless of age or ability.

How Parcel Shuttle makes last-mile delivery more eco-friendly and drivers happier

The content below is taken from the original ( How Parcel Shuttle makes last-mile delivery more eco-friendly and drivers happier), to continue reading please visit the site. Remember to respect the Author & Copyright.

Editor’s note: Today’s post comes from Simon Seeger, founder of Parcel Shuttle, a GLS Group-backed Berlin parcel delivery solution that’s rethinking the sector with a “smart microgrid” system which enables it to reduce the carbon footprint of delivery runs while offering drivers a flexible income.

When we launched Parcel Shuttle last year, I was confident that Google Maps Platform would help us bring eco-friendly parcel delivery to Berlin through precise navigation. Never did I imagine that it would also lend a helping hand in the most precious delivery of all: a baby.

I’ll never forget the day a driver called our dispatch office saying he couldn’t continue his shift because his wife had gone into labor. The team scrambled to remotely adjust his navigation, redirecting his route to his home, then the hospital. Next, they used route optimization and live traffic prediction to get the couple to the delivery room as fast as possible.

The day little Vivien was born safely, weighing 7.16 lbs, brought great joy to all of us at Parcel Shuttle. It also reminded us why we’re in this business in the first place: delivering essentials in the greenest possible way, while bringing happiness to our drivers through fairness and work flexibility.

A green and fair vision for parcel delivery

We’re disrupting the parcel sector through a “smart microgrid” concept that drastically reduces the distance required to deliver parcels, and brings drivers more flexibility by keeping delivery runs within their own neighborhood. All of our drivers decide exactly when they want to work, enabling them, for example, to attend dance classes in the morning and work a two-hour shift for us in the afternoon.

Parcel Shuttle Smart Microgrid

In parcel delivery, fleets of vans normally fan out from warehouses outside the city to drop off parcels in town before returning to the depot. Sixty percent of last-mile delivery kilometers are made up of the additional mileage and man-hours spent making the back-and-forth depot journey, not to mention criss-crossing town for deliveries. We decided to turn the process on its head.

In our model, one large truck leaves the warehouse on a “milk run” to drop parcels into delivery cars parked within each Berlin micro-grid, walking distance from drivers’ homes. Then, the driver just walks out the door, finds the car loaded with parcels, and delivers the goods in an efficient loop around their neighborhood, reducing the burden on drivers, roads, and air quality.

Our innovation may sound simple, but execution is a huge challenge. To make it work, we need cutting-edge Google Maps Platform navigation, route sequencing, and geocoding solutions to quickly guide freelance drivers to delivery points. 

Delivering goods to the right address as efficiently as possible

Imagine having to deliver a parcel with invalid address inputs for just about everything: company name, street number, postal code, and more. In a town like Berlin, it’s like trying to find a needle in a haystack. And we can’t blame the client for giving us the wrong address. We just have to deliver. Period. If not, we lose that business.

In order to crunch reams of location data and fix incorrect address inputs, enabling our drivers to find the right delivery point with minimum fuss, we use the Geocoding API. With the Geocoding API, a completely garbled address can come in and, amazingly, the correct address pops out. It’s a real life-saver.
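
As a rough illustration of the kind of call involved (not Parcel Shuttle’s actual code), the Geocoding API web service can be queried with a messy address and returns a structured, corrected one. The address below is invented for illustration, and you would supply your own API key.

    import requests

    GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"

    def clean_address(raw_address, api_key):
        """Return the formatted address the Geocoding API resolves for a messy input."""
        resp = requests.get(GEOCODE_URL, params={"address": raw_address, "key": api_key})
        resp.raise_for_status()
        results = resp.json().get("results", [])
        return results[0]["formatted_address"] if results else None

    # Invented, garbled input for illustration; replace the key with your own.
    print(clean_address("Parcl Shutle, Torstrase 12, Berln", "YOUR_API_KEY"))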

The right address is critical, but drivers also need to determine how to get there. With the Maps JavaScript API, we are able to lead our novice drivers to each destination as if they’ve been doing this job for decades.

Parcel Shuttle Mobile Navigation

Parcel Shuttle Navigation

We quickly learned that calculating distances is only one part of the route optimization solution. We need to analyze live traffic conditions, such as bottlenecks, roadworks, and red lights, to deliver the best possible route at any given moment and to tell drivers the most efficient order in which to make multiple stops. This is where using the Maps JavaScript API and the API’s Traffic Layer helps us out.

The results have been encouraging. We’ve achieved more than 60 percent reduction in road covered in the city center, thanks to a combination of our own proprietary Android application and Google Maps Platform tools. We’ve also gained 25 percent time savings in last-mile delivery via Google Maps Platform guided solutions.

Serving Berlin amid the COVID-19 storm

The COVID-19 situation tested our business model due to a huge spike in demand. The lockdown not only led to families in Berlin ordering most of their everyday needs online, it also meant that small businesses needed to source supplies and spare parts through parcel deliveries instead of buying them onsite from wholesalers. 

During lockdown, we experienced a 25 percent rise in delivery points. Increasing stops by that amount greatly complicates routing calculations for each run. Basically, we have a quarter more places to visit in the same time window as before. It could have been a nightmare without state-of-the-art route optimization.

We’re relieved that our delivery model has held strong during COVID-19, and proud that we’ve been able to serve the Berlin community. We experienced almost no stress to our eco-friendly model and commitment to timely delivery. This just wouldn’t have been possible without the powerful navigation and geocoding solutions of Google Maps Platform.

Global expansion with Google solutions

We see our growth potential as unlimited due to the combination of a simple yet unique business model, and Google Maps Platform tools to help make our model work. Our next steps will be bringing our platform to more German cities, then to the world.

We’re excited about greening parcel delivery around the world. And how’s baby Vivien doing closer to home? Just as fine as can be. We keep her picture up on our office wall as a happy reminder that our mission is to bring joy, in all its forms, to people’s doorsteps.

For more information on Google Maps Platform, visit our website.

LG’s transparent OLED displays are on subway windows in China

The content below is taken from the original ( LG’s transparent OLED displays are on subway windows in China), to continue reading please visit the site. Remember to respect the Author & Copyright.

LG is bringing transparent OLED displays to subways in Beijing and Shenzhen. The 55-inch, see-through displays show real-time info about subway schedules, locations and transfers on train windows. They also provide info on flights, weather and the ne…

Windows 10 can run apps from your Samsung phone

The content below is taken from the original ( Windows 10 can run apps from your Samsung phone), to continue reading please visit the site. Remember to respect the Author & Copyright.

You’ll soon have access to your phone’s apps on your PC — if you have the right phone, at least. Microsoft is rolling out a Windows 10 Your Phone update with support for running mobile apps on your desktop, as promised when Samsung revealed the Galax…

I figured out how to log into an Azure VM using Azure AD credentials. This is not well documented.

The content below is taken from the original ( in /r/ AZURE), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft uses AI to boost its reuse, recycling of server parts

The content below is taken from the original ( Microsoft uses AI to boost its reuse, recycling of server parts), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft is bringing artificial intelligence to the task of sorting through millions of servers to determine what can be recycled and where.

The new initiative calls for the building of so-called Circular Centers at Microsoft data centers around the world, where AI algorithms will be used to sort through parts from decommissioned servers or other hardware and figure out which parts can be reused on the campus.

Microsoft says it has more than three million servers and related hardware in its data centers, and that a server’s average lifespan is about five years. Plus, Microsoft is expanding globally, so its server numbers should increase.


Cloud management is changing rapidly with the times: Here’s what you need to know

The content below is taken from the original ( Cloud management is changing rapidly with the times: Here’s what you need to know), to continue reading please visit the site. Remember to respect the Author & Copyright.

By Jesse Stockall, Chief Architect of Cloud Management at Snow It’s a fast-changing world. But at the same time, we’re also falling back on the tried Read more at VMblog.com.

IBM details next-gen POWER10 processor

The content below is taken from the original ( IBM details next-gen POWER10 processor), to continue reading please visit the site. Remember to respect the Author & Copyright.

IBM on Monday took the wraps off its latest POWER RISC CPU family, optimized for enterprise hybrid-cloud computing and artificial intelligence (AI) inferencing, along with a number of other improvements.

Power is the last of the Unix processors from the 1990s, when Sun Microsystems, HP, SGI, and IBM all had competing Unixes and RISC processors to go with them. Unix gave way to Linux and RISC gave way to x86, but IBM holds on.

This is IBM’s first 7-nanometer processor, and IBM claims it will deliver an up-to-three-times improvement in capacity and processor energy efficiency within the same power envelope as its POWER9 predecessor. The processor comes in a 15-core design (actually 16 cores, but one is not used) and allows for single or dual chip models, so IBM can put two processors in the same form factor. Each core can have up to eight threads, and each socket supports up to 4TB of memory.


Samsung Pay Card launches in UK, powered by fintech Curve

The content below is taken from the original ( Samsung Pay Card launches in UK, powered by fintech Curve), to continue reading please visit the site. Remember to respect the Author & Copyright.

Samsung Pay Card, a new Mastercard debit card from the mobile handset giant, has launched in the U.K. today.

Powered by London-based fintech Curve, it lets you consolidate all of your other existing bank cards into a single card and digital wallet, making it easy to manage your money and, of course, use Samsung Pay more universally.

Unsurprisingly, Samsung Pay Card users will also get access to other Curve features. They include a single view of your card spending that is entirely agnostic to where your money is stored, as well as instant spend notifications, cheaper FX fees than your bank typically charges, peer-to-peer payments from any linked bank account and the ability to switch payment sources retroactively.

The latter — dubbed “Go Back in Time” — lets you move transactions from one card to another after they’ve been made, meaning that you have more flexibility and control of your spending. For example, perhaps you made a large purchase from one of your linked debit cards but for cash flow purposes decide it would be better charged to your credit card. That’s possible to do using Curve and now Samsung Pay Card.

In addition, as an introductory offer, Samsung Pay Card users get 1% cashback at selected merchants, and exclusive to the Samsung Pay Card, can also earn 5% on all purchases at Samsung.com.

Comments Conor Pierce, Corporate Vice-President of Samsung UK & Ireland: “At Samsung we believe in the power of innovation and, through our partnership with Curve, the Samsung Pay Card brings a series of pioneering features that will change the way that our customers manage their spending, with their Samsung smartphone and smartwatch at the heart of it. This is the future of banking and we look forward to continuing this journey with our customers.”

Pass that Brit guy with the right-hand drive: UK looking into legalising automated lane-keeping systems by 2021

The content below is taken from the original ( Pass that Brit guy with the right-hand drive: UK looking into legalising automated lane-keeping systems by 2021), to continue reading please visit the site. Remember to respect the Author & Copyright.

First step to self-driving vehicles on British roads

RoTM Self-driving vehicles have taken a modest step forward towards legality, with the UK’s Department for Transport (DfT) launching a Call for Evidence that will determine the safety and efficacy of Automated Lane Keeping Systems (ALKS) with an aim to legalise the technology by spring 2021.…

Tech At Home Winners Who Made the Best of their Quarantine

The content below is taken from the original ( Tech At Home Winners Who Made the Best of their Quarantine), to continue reading please visit the site. Remember to respect the Author & Copyright.

Back in April we challenged hackers to make the best of a tough situation by spending their time in isolation building with what they had lying around the shop. The pandemic might have forced us to stay in our homes and brought global shipping to a near standstill, but judging by the nearly 300 projects that were ultimately entered into the Making Tech At Home Contest, it certainly didn’t stifle the creativity of the incredible Hackaday community.

While it’s never easy selecting the winners, we think you’ll agree that the Inverse Thermal Camera is really something special. Combining a surplus thermal printer, STM32F103 Blue Pill, and OV7670 camera module inside an enclosure made from scraps of copper clad PCB, the gadget prints out the captured images on a roll of receipt paper like some kind of post-apocalyptic lo-fi Polaroid.


The HexMatrix Clock also exemplified the theme of working with what you have, as the electronics were nothing more exotic than a string of WS2811 LEDs and either an Arduino or ESP8266 to drive them. With the LEDs mounted into a 3D printed frame and diffuser, this unique display has an almost alien beauty about it. If you like that concept and have a few more RGB LEDs laying around, then you’ll love the Hive Lamp which took a very similar idea and stretched it out into the third dimension to create a standing technicolor light source that wouldn’t be out of place on a starship.

Each of these three top projects will receive a collection of parts and tools courtesy of Digi-Key valued at $500.

Runners Up

Our friends at Digi-Key were also kind enough to provide smaller grab bags of electronic goodies to the creators of the following 30 projects to help them keep hacking in these trying times:

The Making Tech At Home Contest might be over, but unfortunately, it looks like COVID-19 will be hanging around for a bit. Hopefully some of these incredible projects will inspire you to make the most out of your longer than expected downtime.

Advancing the outage experience—automation, communication, and transparency

The content below is taken from the original ( Advancing the outage experience—automation, communication, and transparency), to continue reading please visit the site. Remember to respect the Author & Copyright.

“Service incidents like outages are an unfortunate inevitability of the technology industry. Of course, we are constantly improving the reliability of the Microsoft Azure cloud platform. We meet and exceed our Service Level Agreements (SLAs) for the vast majority of customers and continue to invest in evolving tools and training that make it easy for you to design and operate mission-critical systems with confidence.

In spite of these efforts, we acknowledge the unfortunate reality that—given the scale of our operations and the pace of change—we will never be able to avoid outages entirely. During these times we endeavor to be as open and transparent as possible to ensure that all impacted customers and partners understand what’s happening. As part of our Advancing Reliability blog series, I asked Sami Kubba, Principal Program Manager overseeing our outage communications process, to outline the investments we’re making to continue improving this experience.”—Mark Russinovich, CTO, Azure


In the cloud industry, we have a commitment to bringing our customers the latest technology at scale, keeping customers and our platform secure, and ensuring that our customer experience is always optimal. For this to happen, Azure is subject to a significant amount of change—and in rare circumstances, it is this change that can bring about unintended impact for our customers. As previously mentioned in this series of blog posts, we take change very seriously and ensure that we have a systematic and phased approach to implementing changes as carefully as possible.

We continue to identify the inherent (and sometimes subtle) imperfections in the complex ways that our architectural designs, operational processes, hardware issues, software flaws, and human factors can align to cause service incidents—also known as outages. The reality of our industry is that impact caused by change is an intrinsic problem. When we think about outage communications we tend not to think of our competition as being other cloud providers, but rather the on-premises environment. On-premises change windows are controlled by administrators. They choose the best time to invoke any change, manage and monitor the risks, and roll it back if failures are observed.

Similarly, when an outage occurs in an on-premises environment, customers and users feel that they are more ‘in the know.’ Leadership is promptly made fully aware of the outage, they get access to support for troubleshooting, and expect that their team or partner company would be in a position to provide a full Post Incident Report (PIR)—previously called Root Cause Analysis (RCA)—once the issue is understood. Although our data analysis supports the hypothesis that time to mitigate an incident is faster in the cloud than on-premises, cloud outages can feel more stressful for customers when it comes to understanding the issue and what they can do about it.

Introducing our communications principles

During cloud outages, some customers have historically reported feeling as though they’re not promptly informed, or that they miss necessary updates and therefore lack a full understanding of what happened and what is being done to prevent future issues occurring. Based on these perceptions, we now operate by five pillars that guide our communications strategy—all of which have influenced our Azure Service Health experience in the Azure portal and include:

  1. Speed
  2. Granularity
  3. Discoverability
  4. Parity
  5. Transparency

Speed

We must notify impacted customers as quickly as possible. This is our key objective around outage communications. Our goal is to notify all impacted Azure subscriptions within 15 minutes of an outage. We know that we can’t achieve this with human beings alone. By the time an engineer is engaged to investigate a monitoring alert to confirm impact (let alone engaging the right engineers to mitigate it, in what can be a complicated array of interconnectivities including third-party dependencies) too much time has passed. Any delay in communications leaves customers asking, “Is it me or is it Azure?” Customers can then spend needless time troubleshooting their own environments. Conversely, if we decide to err on the side of caution and communicate every time we suspect any potential customer impact, our customers could receive too many false positives. More importantly, if they are having an issue with their own environment, they could easily attribute these unrelated issues to a false alarm being sent by the platform. It is critical that we make investments that enable our communications to be both fast and accurate.

Last month, we outlined our continued investment in advancing Azure service quality with artificial intelligence: AIOps. This includes working towards improving automatic detection, engagement, and mitigation of cloud outages. Elements of this broader AIOps program are already being used in production to notify customers of outages that may be impacting their resources. These automatic notifications represented more than half of our outage communications in the last quarter. For many Azure services, automatic notifications are being sent in less than 10 minutes to impacted customers via Service Health—to be accessed in the Azure portal, or to trigger Service Health alerts that have been configured, more on this below.

With our investment in this area already improving the customer experience, we will continue to expand the scenarios in which we can notify customers in less than 15 minutes from the impact start time, all without the need for humans to confirm customer impact. We are also in the early stages of expanding our use of AI-based operations to identify related impacted services automatically and, upon mitigation, send resolution communications (for supported scenarios) as quickly as possible.

Granularity

We understand that when an outage causes impact, customers need to understand exactly which of their resources are impacted. One of the key building blocks in getting the health of specific resources is the Resource Health signal. The Resource Health signal will check if a resource, such as a virtual machine (VM), SQL database, or storage account, is in a healthy state. Customers can also create Resource Health alerts, which leverage Azure Monitor, to let the right people know if a particular resource is having issues, regardless of whether it is a platform-wide issue or not. This is important to note: a Resource Health alert can be triggered due to a resource becoming unhealthy (for example, if the VM is rebooted from within the guest), which is not necessarily related to a platform event, like an outage. Customers can see the associated Resource Health checks, arranged by resource type.

We are building on this technology to augment and correlate each customer resource(s) that has moved into an unhealthy state with platform outages, all within Service Health. We are also investigating how we can include the impacted resources in our communication payloads, so that customers won’t necessarily need to sign in to Service Health to understand the impacted resources—of course, everyone should be able to consume this programmatically.

All of this will allow customers with large numbers of resources to know more precisely which of their services are impacted due to an outage, without having to conduct an investigation on their side. More importantly, customers can build alerts and trigger responses to these resource health alerts using native integrations to Logic Apps and Azure Functions.
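
As a rough sketch of what consuming this programmatically can look like today, the current availability state of a single resource can be read from the Resource Health provider through Azure Resource Manager. The resource ID, token handling, and api-version below are placeholders and assumptions to verify against the current documentation.

    import requests

    # Placeholder values; substitute your own resource ID and a valid Azure AD token.
    RESOURCE_ID = ("/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/"
                   "Microsoft.Compute/virtualMachines/<vm-name>")
    API_VERSION = "2020-05-01"  # assumption: check the current Resource Health api-version
    TOKEN = "<bearer-token>"

    url = (f"https://management.azure.com{RESOURCE_ID}"
           "/providers/Microsoft.ResourceHealth/availabilityStatuses/current")
    resp = requests.get(url, params={"api-version": API_VERSION},
                        headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    # availabilityState is typically Available, Degraded, or Unavailable.
    print(resp.json()["properties"]["availabilityState"])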

Discoverability

Although we support both ‘push’ and ‘pull’ approaches for outage communications, we encourage customers to configure relevant alerts, so the right information is automatically pushed out to the right people and systems. Our customers and partners should not have to go searching to see if the resources they care about are impacted by an outage—they should be able to consume the notifications we send (in the medium of their choice) and react to them as appropriate. Despite this, we constantly find that customers visit the Azure Status page to determine the health of services on Azure.

Before the introduction of the authenticated in-portal Service Health experience, the Status page was the only way to discover known platform issues. These days, this public Status page is only used to communicate widespread outages (for example, impacting multiple regions and/or multiple services) so customers looking for potential issues impacting them don’t see the full story here. Since we rollout platform changes as safely as possible, the vast majority of issues like outages only impact a very small ‘blast radius’ of customer subscriptions. For these incidents, which make up more than 95 percent of our incidents, we communicate directly to impacted customers in-portal via Service Health.

We also recently integrated the ‘Emerging Issues’ feature into Service Health. This means that if we have an incident on the public Status page, and we have yet to identify and communicate to impacted customers, users can see this same information in-portal through Service Health, thereby receiving all relevant information without having to visit the Status page. We are encouraging all Azure users to make Service Health their ‘one stop shop’ for information related to service incidents, so they can see issues impacting them, understand which of their subscriptions and resources are impacted, and avoid the risk of making a false correlation, such as when an incident is posted on the Status page, but is not impacting them.

Most importantly, since we’re talking about the discoverability principle, from within Service Health customers can create Service Health alerts, which are push notifications leveraging the integration with Azure Monitor. This way, customers and partners can configure relevant notifications based on who needs to receive them and how they would best be notified—including by email, SMS, LogicApp, and/or through a webhook that can be integrated into service management tools like ServiceNow, PagerDuty, or Ops Genie.

To get started with simple alerts, consider routing all notifications to email a single distribution list. To take it to the next level, consider configuring different service health alerts for different use cases—maybe all production issues notify ServiceNow, maybe dev and test or pre-production issues might just email the relevant developer team, maybe any issue with a certain subscription also sends a text message to key people. All of this is completely customizable, to ensure that the right people are notified in the right way.
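
For readers who prefer to deploy alert rules as code rather than through the portal, a Service Health alert is, under the covers, an activity log alert scoped to the ServiceHealth category. The sketch below shows the rough shape of such a rule as it might appear in an ARM template, expressed here as a Python dictionary; the property names follow the Microsoft.Insights/activityLogAlerts schema but should be checked against the current documentation, and the IDs are placeholders. Deploying several rules, each pointing at a different action group, gives you the per-use-case routing described above.

    # Rough shape of an activity log alert rule scoped to Service Health events.
    # Property names follow the Microsoft.Insights/activityLogAlerts schema;
    # verify the details against current documentation before deploying.
    service_health_alert = {
        "type": "Microsoft.Insights/activityLogAlerts",
        "name": "notify-ops-on-service-health",
        "location": "Global",
        "properties": {
            "scopes": ["/subscriptions/<subscription-id>"],
            "condition": {
                "allOf": [
                    {"field": "category", "equals": "ServiceHealth"},
                    # Optional narrowing, e.g. service issues (outages) only:
                    {"field": "properties.incidentType", "equals": "Incident"},
                ]
            },
            "actions": {
                "actionGroups": [
                    # The action group decides who gets notified and how
                    # (email, SMS, webhook to ServiceNow, Logic App, and so on).
                    {"actionGroupId": "/subscriptions/<subscription-id>/resourceGroups/<rg>"
                                      "/providers/microsoft.insights/actionGroups/<group-name>"}
                ]
            },
        },
    }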

Parity

All Azure users should know that Service Health is the one place to go, for all service impacting events. First, we ensure that this experience is consistent across all our different Azure Services, each using Service Health to communicate any issues. As simple as this sounds, we are still navigating through some unique scenarios that make this complex. For example, most people using Azure DevOps don’t interact with the Azure portal. Since DevOps does not have its own authenticated Service Health experience, we can’t communicate updates directly to impacted customers for small DevOps outages that don’t justify going to the public Status page. To support scenarios like this, we have stood up the Azure DevOps status page where smaller scale DevOps outages can be communicated directly to the DevOps community.

Second, the Service Health experience is designed to communicate all impacting events across Azure—this includes maintenance events as well as service or feature retirements, and includes both widespread outages and isolated hiccups that only impact a single subscription. It is imperative that for any impact (whether it is potential, actual or upcoming) customers can expect the same experience and put in place a predictable action plan across all of their services on Azure.

Lastly, we are working towards expanding our philosophy of this pillar to extend to other Microsoft cloud products. We acknowledge that, at times, navigating through our different cloud products such as Azure, Microsoft 365, and Power Platform can sometimes feel like navigating technologies from three different companies. As we look to the future, we are invested in harmonizing across these products to bring about a more consistent, best-in-class experience.

Transparency

As we have mentioned many times in the Advancing Reliability blog series, we know that trust is earned and needs to be maintained. When it comes to outages, we know that being transparent about what is happening, what we know, and what we don’t know is critically important. The cloud shouldn’t feel like a black box. During service issues, we provide regular communications to all impacted customers and partners. Often, in the early stages of investigating an issue, these updates might not seem detailed until we learn more about what’s happening. Even though we are committed to sharing tangible updates, we generally try to avoid sharing speculation, since we know customers make business decisions based on these updates during outages.

In addition, an outage is not over once customer impact is mitigated. We could still be learning about the complexities of what led to the issue, so sometimes the message sent at or after mitigation is a fairly rudimentary summation of what happened. For major incidents, we follow this up with a PIR generally within three days, once the contributing factors are better understood.

For incidents that may have impacted fewer subscriptions, our customers and partners can request more information from within Service Health by requesting a PIR for the incident. We have heard feedback in the past that PIRs should be even more transparent, so we continue to encourage our incident managers and communications managers to provide as much detail as possible—including information about the issue impact, and our next steps to mitigate future risk. Ideally to ensure that this class of issue is less likely and/or less impactful moving forward.

While our industry will never be completely immune to service outages, we do take every opportunity to look at what happened from a holistic perspective and share our learnings. One of the future areas of investment we are looking at closely is how best to keep customers updated on the progress we are making against the commitments outlined in our PIR next steps. By linking our internal repair items to our external commitments in our next steps, customers and partners will be able to track the progress that our engineering teams are making to ensure that corrective actions are completed.

Our communications across all of these scenarios (outages, maintenance, service retirements, and health advisories) will continue to evolve, as we learn more and continue investing in programs that support these five pillars.

Reliability is a shared responsibility

While Microsoft is responsible for the reliability of the Azure platform itself, our customers and partners are responsible for the reliability of their cloud applications—including applying architectural best practices based on the requirements of each workload. Building a reliable application in the cloud is different from traditional application development. Historically, customers may have purchased redundant higher-end hardware to minimize the chance of an entire application platform failing. In the cloud, we acknowledge up front that failures will happen. As outlined several times above, we will never be able to prevent all outages. So in addition to Microsoft working to prevent failures, your goal when building reliable applications in the cloud should be to minimize the effects of any single failing component.
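
To make that division of responsibility concrete, here is a minimal, Azure-agnostic Python sketch of one common resilience pattern: retrying transient failures with exponential backoff and falling back to a redundant endpoint, so a single failing component does not take down the whole workload. The endpoint URLs are made up for illustration.

```python
import random
import time
import urllib.request

# Hypothetical redundant endpoints serving the same logical service.
ENDPOINTS = ["https://primary.example.com/api", "https://secondary.example.com/api"]

def call_with_retries(url, attempts=4, base_delay=0.5):
    """Retry transient failures with exponential backoff plus jitter."""
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                return response.read()
        except OSError:
            if attempt == attempts - 1:
                raise  # exhausted retries for this endpoint
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

def call_service():
    """Try each redundant endpoint in turn; fail only if all of them fail."""
    last_error = None
    for url in ENDPOINTS:
        try:
            return call_with_retries(url)
        except OSError as err:
            last_error = err
    raise RuntimeError("All endpoints failed") from last_error
```

Patterns like this, alongside queues, circuit breakers, and zone- or region-redundant deployments, are exactly what the guidance below is designed to help you choose between.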

To that end, we recently launched the Microsoft Azure Well-Architected Framework—a set of guiding tenets that can be used to improve the quality of a workload. Reliability is one of the five pillars of architectural excellence alongside Cost Optimization, Operational Excellence, Performance Efficiency, and Security. If you already have a workload running in Azure and would like to assess your alignment to best practices in one or more of these areas, try the Microsoft Azure Well-Architected Review.

Specifically, the Reliability pillar describes six steps for building a reliable Azure application:

  • Define availability and recovery requirements based on decomposed workloads and business needs.

  • Use architectural best practices to identify possible failure points in your proposed or existing architecture and determine how the application will respond to failure.

  • Test with simulations and forced failovers to validate both detection of and recovery from various failures.

  • Deploy the application consistently using reliable and repeatable processes.

  • Monitor application health to detect failures, watch indicators of potential failures, and gauge the health of your applications.

  • Finally, respond to failures and disasters by determining how best to address them based on established strategies.
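
As a worked example of the first step, defining availability requirements often starts with simple composite-availability math: components in series multiply their availabilities, while redundant components in parallel fail only if every instance fails. The sketch below uses illustrative figures, not Azure SLA commitments.

```python
# Illustrative composite-availability math (example figures, not SLAs).

def series(*availabilities):
    """All components must be up: multiply their availabilities."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def parallel(*availabilities):
    """Redundant components: the composite fails only if all of them fail."""
    combined_failure = 1.0
    for a in availabilities:
        combined_failure *= (1.0 - a)
    return 1.0 - combined_failure

web, database = 0.9995, 0.9999
print(f"Web and database in series:   {series(web, database):.4%}")
print(f"Web redundant across 2 zones: {series(parallel(web, web), database):.4%}")
```

Even this back-of-the-envelope arithmetic makes clear why the later steps (testing failover, monitoring health, and responding to failures) matter: redundancy only raises the composite number if failover actually works.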

Returning to our core topic of outage communications, we are working to incorporate relevant Well-Architected guidance into our PIRs in the aftermath of each service incident. Customers running critical workloads will be able to learn about specific steps to improve reliability that would have helped them avoid or lessen the impact of that particular outage. For example, if an outage only impacted resources within a single Availability Zone, we will call this out as part of the PIR and encourage impacted customers to consider zonal redundancy for their critical workloads.

Going forward

We have outlined how Azure approaches communications during and after service incidents like outages. We want to be transparent about our five communication pillars, explaining both our progress to date and the areas in which we’re continuing to invest. Just as our engineering teams endeavor to learn from each incident to improve the reliability of the platform, our communications teams endeavor to learn from each incident to be more transparent, to get customers and partners the right details to make informed decisions, and to support customers and partners as best as possible during each of these difficult situations.

We are confident that we are making the right investments to continue improving in this space, but we are increasingly looking for feedback on whether our communications are hitting the mark. We include an Azure post-incident survey at the end of each PIR we publish, and we review every response to learn from our customers and partners, validate whether we are focusing on the right areas, and keep improving the experience.

We continue to identify the inherent (and sometimes subtle) imperfections in the complex ways that our architectural designs, operational processes, hardware issues, software flaws, and human factors align to cause outages. Since trust is earned and needs to be maintained, we are committed to being as transparent as possible—especially during these infrequent but inevitable service issues.

Apple will give third-party Mac repair shops its stamp of approval

The content below is taken from the original ( Apple will give third-party Mac repair shops its stamp of approval), to continue reading please visit the site. Remember to respect the Author & Copyright.

Getting your Mac fixed could soon be much easier. Apple says it will now verify third-party Mac repair shops, Reuters reports. The program will provide parts and training to qualifying repair stores. Apple began verifying third-party iPhone repair sho…

Google Cloud VMware Engine explained: Integrated networking and connectivity

The content below is taken from the original ( Google Cloud VMware Engine explained: Integrated networking and connectivity), to continue reading please visit the site. Remember to respect the Author & Copyright.

Editor’s note: This is the first installment in a new blog series that dives deep into our Google Cloud VMware Engine managed service. Stay tuned for other entries on migration, integration, running stateful database workloads, and enabling remote workers, to name a few.

We recently announced the general availability of Google Cloud VMware Engine, a managed VMware platform service that enables enterprises to lift and shift their VMware-based applications to Google Cloud without changes to application architectures, tools, or processes. With VMware Engine, you can deploy a private cloud—an isolated VMware stack—consisting of three or more nodes, enabling you to run the VMware Cloud Foundation platform natively. This approach lets you retire or extend your data center to the cloud, use the cloud as a disaster recovery target, or migrate and modernize workloads by integrating with cloud-native services such as BigQuery and Cloud AI.

But before you can do that, you need easy-to-provision, high-performance, highly available networking to connect between:

  • On-premises data centers and the cloud

  • VMware workloads and cloud-native services

  • VMware private clouds in single or multi-region deployments

Google Cloud VMware Engine networking leverages existing connectivity services for on-premises connections and provides seamless connectivity to other Google Cloud services. Furthermore, the service is built on high-performance, reliable, and high-capacity infrastructure, giving you a fast and highly available VMware experience at a low cost.

Let’s take a closer look at some of the networking features you’ll find on VMware Engine. 

High availability and 100G throughput

Google Cloud VMware Engine private clouds are deployed on enterprise-grade infrastructure with redundant and dedicated 100Gbps networking that provides 99.99% availability, low latency and high throughput.

Integrated networking and on-prem connectivity 

Subnets associated with private clouds are allocated in Google Cloud VPCs and delegated to VMware Engine. As a result, Compute Engine instances in the VPC communicate with VMware workloads using RFC 1918 private addresses, with no need for External IP-based addressing. 
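
As a quick reference, RFC 1918 reserves 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 for private addressing. The small sketch below (using made-up addresses) shows how to check whether a given workload address falls inside those ranges, which is the address space this VPC-to-private-cloud communication relies on.

```python
import ipaddress

# RFC 1918 private ranges used for VPC <-> private cloud addressing.
RFC1918_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(address: str) -> bool:
    """Return True if the address falls in any RFC 1918 private range."""
    ip = ipaddress.ip_address(address)
    return any(ip in network for network in RFC1918_NETWORKS)

# Made-up example addresses: a VMware workload and a public address.
print(is_rfc1918("10.10.20.5"))   # True
print(is_rfc1918("203.0.113.7"))  # False
```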

Private clouds can be accessed from on-prem using existing Cloud VPN or Cloud Interconnect-based connections to Google Cloud VPCs without additional VPN or Interconnect attachments to VMware Engine private clouds. You can also stretch your on-prem networks to VMware Engine to facilitate workload migration.

Furthermore, for internet access, you can choose to use VMware Engine’s internet access service or route internet-bound traffic from on-prem to meet your security or regulatory needs.

Access to Google Cloud services from VMware Engine private clouds

VMware Engine workloads can access other Google Cloud services such as Cloud SQL, Cloud Storage, etc., using options such as Private Google Access and Private Service Access. Just like a Compute Engine instance in a VPC, a VMware workload can use private access options to communicate with Google Cloud services while staying within a secure and trusted Google Cloud network boundary. As such, you don’t need to go out to the public internet to access Google Cloud services from VMware Engine, regardless of whether internet access is enabled or disabled. This provides low-latency, secure communication between VMware Engine and other Google Cloud services.
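
One way to sanity-check this from inside a private cloud is to confirm that Google API hostnames resolve to the documented private-access virtual IP ranges rather than public addresses. The sketch below assumes the 199.36.153.8/30 (private.googleapis.com) and 199.36.153.4/30 (restricted.googleapis.com) ranges that Google documents for these configurations; it is an illustrative check, not an official validation tool.

```python
import ipaddress
import socket

# Documented VIP ranges for Google private-access domains (verify against
# current Google Cloud documentation before relying on them).
PRIVATE_ACCESS_RANGES = [
    ipaddress.ip_network("199.36.153.8/30"),  # private.googleapis.com
    ipaddress.ip_network("199.36.153.4/30"),  # restricted.googleapis.com
]

def resolves_to_private_access(hostname: str) -> bool:
    """Return True if every resolved IPv4 address is a private-access VIP."""
    infos = socket.getaddrinfo(hostname, 443, family=socket.AF_INET)
    addresses = {info[4][0] for info in infos}
    return all(
        any(ipaddress.ip_address(addr) in net for net in PRIVATE_ACCESS_RANGES)
        for addr in addresses
    )

# From a VMware Engine workload with private-access DNS configured,
# this would be expected to print True.
print(resolves_to_private_access("storage.googleapis.com"))
```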

Multi-region connectivity between VMware private clouds 

VMware workloads in private clouds in the same region can talk to one another directly—without needing to “trombone” or “hairpin” across the Google Cloud VPCs. In the case where VMware workloads need to communicate with one another across regions, they can do so using VMware Engine’s global routing service. This approach to multi-region connectivity doesn’t require a VPN, or any other latency-inducing connectivity options. 

Access to full NSX-T functionality

VMware Engine supports full NSX-T functionality for VMware workloads. With this, you can use VMware’s NSX-T policy-based UI or API to create network segments, gateway firewall policies, or distributed/east-west firewall policies. In addition, you can leverage NSX-T’s load balancer, NAT, and service insertion functionality.
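
As a rough illustration of driving this declaratively, the sketch below PATCHes a network segment through the NSX-T Policy REST API using Python’s requests library. The manager address, credentials, Tier-1 gateway path, and payload fields are assumptions based on the general shape of the NSX-T Policy API; consult the NSX-T API documentation (or the VMware Engine docs) for the authoritative schema for your version.

```python
import requests

# Assumed values: NSX Manager address, credentials, and Tier-1 gateway path.
NSX_MANAGER = "https://nsx-manager.example.com"
AUTH = ("admin", "<password>")

segment = {
    "display_name": "app-tier",
    "subnets": [{"gateway_address": "10.20.30.1/24"}],
    # Policy path of an existing Tier-1 gateway to attach the segment to.
    "connectivity_path": "/infra/tier-1s/tier1-gw",
}

# Declarative create-or-update of the segment via the Policy API.
response = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/segments/app-tier",
    json=segment,
    auth=AUTH,
    verify=False,  # illustration only: skip TLS verification for a lab cert
)
response.raise_for_status()
```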

Networking is critical to any enterprise’s cloud transformation journey—even more so for VMware-related use cases. The networking capabilities in VMware Engine make it easy to take advantage of the scale, flexibility, and agility that Google Cloud provides without compromising on functionality.

What’s next

In the coming weeks, we’ll share more about VMware Engine and migration, building business resiliency, enabling work from anywhere, and your enterprise database options. To learn more or to get started, visit the VMware Engine website where you’ll find detailed information on key features, use cases, product documentation, and pricing.