How to enable Chrome native notifications on Windows 10

The content below is taken from the original ( How to enable Chrome native notifications on Windows 10), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Windows 10 notification system contains icons for easy access to system functions. Google Chrome has recently rolled out a new notification experience update in which it supports native Windows 10 notifications. The new notification experience prompts all the […]

This post How to enable Chrome native notifications on Windows 10 is from TheWindowsClub.com.

ESXi on Arm? Yes, ESXi on Arm. VMware teases bare-metal hypervisor for 64-bit Arm servers

The content below is taken from the original ( ESXi on Arm? Yes, ESXi on Arm. VMware teases bare-metal hypervisor for 64-bit Arm servers), to continue reading please visit the site. Remember to respect the Author & Copyright.

No, we’re not pulling your leg

Coming as soon as they can make it

VMworld US VMware today showed off a port of its bare-metal ESXi hypervisor for 64-bit Arm servers at its VMworld US shindig in Las Vegas.…

New – Over-the-Air (OTA) Updates for Amazon FreeRTOS

The content below is taken from the original ( New – Over-the-Air (OTA) Updates for Amazon FreeRTOS), to continue reading please visit the site. Remember to respect the Author & Copyright.

Amazon FreeRTOS is an operating system for the microcontrollers that power connected devices such as appliances, fitness trackers, industrial sensors, smart utility meters, security systems, and the like. Designed for use in small, low-powered devices, Amazon FreeRTOS extends the FreeRTOS kernel with libraries for communication with cloud services such as AWS IoT Core and with more powerful edge devices that are running AWS Greengrass (to learn more, read Announcing Amazon FreeRTOS – Enabling Billions of Devices to Securely Benefit from the Cloud).

Unlike more powerful, general-purpose computers that include generous amounts of local memory and storage, and the ability to load and run code on demand, microcontrollers are often driven by firmware that is loaded at the factory and then updated with bug fixes and new features from time to time over the life of the device. While some devices are able to accept updates in the field and while they are running, others must be disconnected, removed from service, and updated manually. This can be disruptive, inconvenient, and expensive, not to mention time-consuming.

As usual, we want to provide a better solution for our customers!

Over-the-Air Updates
Today we are making Amazon FreeRTOS even more useful with the addition of an over-the-air update mechanism that can be used to deliver updates to devices in the field. Here are the most important properties of this new feature:

Security – Updates can be signed by an integrated code signer, streamed to the target device across a TLS-protected connection, and then verified on the target device in order to guard against corrupt, unauthorized, or fraudulent updates.

Fault Tolerance – In order to guard against failed updates that can result in a useless, “bricked” device, the update process is resilient and prevents partial updates from taking effect, leaving the device in an operable state.

Scalability – Device fleets often contain thousands or millions of devices, and can be divided into groups for updating purposes, powered by AWS IoT Device Management.

Frugality – Microcontrollers have limited amounts of RAM (often 128KB or so) and compute power. Amazon FreeRTOS makes the most of these scarce resources by using a single TLS connection for updates and other AWS IoT Core communication, and by using the lightweight MQTT protocol.

Each device must include the OTA Updates Library. This library contains an agent that listens for update jobs and supervises the update process.

OTA in Action
I don’t happen to have a fleet of devices deployed, so I’ll have to limit this post to the highlights and direct you to the OTA Tutorial for more info.

Each update takes the form of an AWS IoT job. A job specifies a list of target devices (things and/or thing groups) and references a job document that describes the operations to be performed on each target. The job document, in turn, points to the code or data to be deployed for the update, and specifies the desired code signing option. Code signing ensures that the deployed content is genuine; you can sign the content yourself ahead of time or request that it be done as part of the job.

Jobs can be run once (a snapshot job), or whenever a change is detected in a target (a continuous job). Continuous jobs can be used to onboard or upgrade new devices as they are added to a thing group.
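
To make the job model concrete, here is a minimal sketch of creating a continuous OTA update with the boto3 IoT client. This is an illustration only: the bucket, key, thing group ARN, and role ARN are hypothetical placeholders, the code signing configuration is omitted, and parameter names may vary slightly between SDK versions, so check the boto3 documentation before relying on it.

    import boto3

    iot = boto3.client("iot")

    # Create a continuous OTA update targeting a thing group; devices added
    # to the group later will also receive the update.
    response = iot.create_ota_update(
        otaUpdateId="fleet-firmware-1-2-3",
        targets=["arn:aws:iot:us-east-1:123456789012:thinggroup/sensor-fleet"],
        targetSelection="CONTINUOUS",       # use "SNAPSHOT" for a one-time job
        files=[{
            "fileName": "firmware-1.2.3.bin",
            "fileLocation": {
                "s3Location": {"bucket": "my-firmware-bucket",
                               "key": "firmware-1.2.3.bin"},
            },
            # The codeSigning block (AWS Signer job or custom signing) is
            # omitted here for brevity.
        }],
        roleArn="arn:aws:iam::123456789012:role/ota-update-role",
    )
    print(response["otaUpdateId"], response["otaUpdateStatus"])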

After the job has been created, AWS IoT will publish an OTA job message via MQTT. The OTA Updates library will download the signed content in streaming fashion, supervise the update, and report status back to AWS IoT.

You can create and manage jobs from the AWS IoT Console, and can also build your own tools using the CLI and the API. I open the Console and click Create a job to get started:

Then I click Create OTA update job:

I select and sign my firmware image:

From there I would select my things or thing groups, initiate the job, and monitor the status:

Again, to learn more, check out the tutorial.

This new feature is available now and you can start using it today.

Jeff;

Power Over Ethernet Splitter Improves Negotiating Skills

The content below is taken from the original ( Power Over Ethernet Splitter Improves Negotiating Skills), to continue reading please visit the site. Remember to respect the Author & Copyright.

Implementing PoE is made interesting by the fact that not every Ethernet device wants power; if you start dumping power onto any device that’s connected, you’re going to break things. The IEEE 802.3af standard states that the device sourcing power should detect the presence of the device receiving power before negotiating the power level. Only once this process is complete can the power sourcing device give its full supply. Of course, this negotiation requires smarts on both ends, which is why there are many cheap devices available that simply send power regardless of what’s plugged in (passive PoE).

[Jason Gin] has taken an old, cheap passive PoE splitter and upgraded it to be 802.3af compatible (an active device). The splitter was designed to be paired with a passive injector and therefore did not work with Jason’s active 802.3at infrastructure.

The brain of the upgrade is a TI TPS2378 Powered Device controller, which does the power negotiation. It sits on one of two new boards, with a rudimentary heatsink provided by some solar cell tab wire. The second board comprises the power interface, and consists of dual Schottky bridges as well as a 58-volt TVS diode to deal with any voltage spikes due to cable inductance. The Ethernet transformer shown in the diagram above was salvaged from a dead MacBook and, after some enamel scraping and fiddly soldering, it was fit for purpose. For a deeper dive on Ethernet transformers and their hacked capabilities, [Jenny List] wrote a piece specifically focusing on Raspberry Pi hardware.

[Jason]’s modifications were able to fit in the original box, and the device successfully integrated with his 802.3at setup. We love [Jason]’s work and have previously written about his eMMC adventures, repairing Windows tablets and explaining the intricacies of SD card interfacing.

lastpass-for-applications (4.1.59)

The content below is taken from the original ( lastpass-for-applications (4.1.59)), to continue reading please visit the site. Remember to respect the Author & Copyright.

The LastPass password manager for native Windows applications (e.g. Skype or PuTTY).

It may be poor man’s Photoshop, but GIMP casts a Long Shadow with latest update

The content below is taken from the original ( It may be poor man’s Photoshop, but GIMP casts a Long Shadow with latest update), to continue reading please visit the site. Remember to respect the Author & Copyright.

Open-source pixel botherer cranks it up to version 2.10.6

There appears to be no rest for Wilber as the GIMP team has updated the venerable image editor to version 2.10.6.…

Azure Marketplace consulting offers: May

The content below is taken from the original ( Azure Marketplace consulting offers: May), to continue reading please visit the site. Remember to respect the Author & Copyright.

We continue to expand the Azure Marketplace ecosystem. In May, 26 consulting offers successfully met the onboarding criteria and went live. See details of the new offers below:

App Migration to Azure 10-Day Workshop (U.S.)

App Migration to Azure 10-Day Workshop: In this workshop, Imaginet’s Azure professionals will accelerate your application modernization and migration efforts while adding in the robustness and scalability of cloud technologies. For U.S. customers.

App Migration to Azure 10-Day Workshop (Canada)

App Migration to Azure 10-Day Workshop: In this workshop, Imaginet’s Azure professionals will accelerate your application modernization and migration efforts while adding in the robustness and scalability of cloud technologies. For customers in Canada.

Application Migration & Sizing 1-Week Assessment - [UnifyCloud]

Application Migration & Sizing: 1-Week Assessment: We use CloudPilot’s static code analysis to scan the application source code and use configuration data to provide a detailed report of code-level changes to modernize your applications for the cloud.

AzStudio PaaS Platform 2-Day Proof of Concept - [Monza Cloud]

AzStudio PaaS Platform: 2-Day Proof of Concept: Receive a two-day proof of concept on our AzStudio platform and learn how it can rapidly accelerate Azure Platform-as-a-Service development.

Azure Architecture Design 2-Week Assessment [Sikich]

Azure Architecture Design 2-Week Assessment: In this assessment, Sikich will identify the client’s business requirements and create an Azure architecture design. This will include a structural design, a monthly estimated cost, and estimated installation services.

Azure Datacenter Migration 2-Week Assessment [Sikich]

Azure Datacenter Migration 2-Week Assessment: After listening to your business requirements and future business goals, we will develop an Azure datacenter migration plan to expand your IT capabilities with hosting on Azure.

Azure Enterprise-Class Networking 1-Day Workshop [Dynamics Edge]

Azure Enterprise-Class Networking: 1-Day Workshop: In this workshop, you will configure a virtual network with subnets in Azure, secure networks with firewall rules and route tables, and set up access to the virtual network with a jump box and a site-to-site VPN connection.

Azure Jumpstart 4-Day Workshop [US Medical IT]

Azure Jumpstart Workshop: This on-site workshop will involve looking at the client’s framework, setting up a VPN, extending on-premises Active Directory to Microsoft Azure, and assisting the client in installing BitTitan’s Azure HealthCheck software.

Azure Migration Assessment 1 Day Assessment [Confluent]

Azure Migration Assessment: 1 Day Assessment: Confluent’s virtual assessment will focus on a server environment and determine the financial impact and technical implications of migrating to Microsoft Azure.

Azure Subscription 2-Wk Assessment [UnifyCloud]

Azure Subscription: 2-Wk Assessment: UnifyCloud will assess your Azure subscription and provide recommendations for how you can save money. Optimize and control your Azure resources for your budget, security standards, and regulatory baselines.

Big Data Analytics Solutions 2-Day Workshop [Dynamics Edge]

Big Data Analytics Solutions: 2-Day Workshop: Data professionals will learn how to design solutions for batch and real-time data processing. Different ways of working with Azure, such as the Azure CLI, Azure PowerShell, and the Azure portal, will be discussed and practiced in lab exercises.

Cloud Governance 3-Wk Assessment [Cloudneeti]

Cloud Governance: 3-Wk Assessment: Within a fixed time and scope, our team of Azure architects will deliver an assessment spanning five key areas of Azure governance: Active Directory, subscription, resource group, resources, and policies.

Cloud HPC Consultation 1-Hr Briefing [UberCloud]

Cloud HPC Consultation 1-Hr Briefing: This one-hour custom online briefing is for technical and business leaders who want to learn how Cloud HPC can benefit their engineering simulations.

Continuous Delivery in VSTS/Azure 1-Day Workshop [Dynamics Edge]

Continuous Delivery in VSTS/Azure: 1-Day Workshop: This instructor-led course is intended for data professionals who want to set up and configure Continuous Delivery within Azure using Azure Resource Manager (ARM) templates and Visual Studio Team Services (VSTS).

Digital Platform for Contracting 4-Hr Assessment [Vana Solutions]

Digital Platform for Contracting: 4-Hr Assessment: Vana’s contracting solution for government streamlines the contracting lifecycle. From contract initiation to records management, it provides a cloud or on-premises platform for agile acquisition.

GDPR of On-Premise Environment 2-Wk Assessment [UnifyCloud]

GDPR of On-Premise Environment: 2-Wk Assessment: Improve your GDPR compliance in two weeks with a data-driven assessment of your IT infrastructure and your cybersecurity and data protection solutions.

IoT and Computer Vision with Azure 4-Day Workshop [Agitare Technologies]

IoT and Computer Vision with Azure: 4-Day Workshop: Gain hands-on experience with Azure IoT services and Raspberry Pi by creating a simple home security solution in this four-day IoT workshop. The workshop is ideal for hardware vendors or technology professionals.

Kubernetes on Azure 2-Day Workshop [Architech]

Kubernetes on Azure 2-Day Workshop: Kubernetes is quickly becoming the container orchestration platform of choice for organizations deploying applications in the cloud. Ramp up your development and operations team members with this deeply technical boot camp.

Lift and Shift/Resource Mgr 1-Day Virtual Workshop [Dynamics Edge]

Lift and Shift/Resource Mgr: 1Day Virtual Workshop: This workshop details how to migrate an on-premises procurement system to Azure, map dependencies onto Azure Infrastructure-as-a-Service virtual machines, and provide end-state design and high-level steps to get there.

Managed Cost & Security 2-Wk Assessment [UnifyCloud]

Managed Cost & Security: 2-Wk Assessment: Our managed service ensures you stay in control of all your Azure resources in terms of cost, security, governance and regulatory compliance (e.g., GDPR, PCI, ISO).

Microsoft Azure App Hosting Design 1-Wk Assessment [Sikich]

Microsoft Azure App Hosting Design 1-Wk Assessment: An Azure application hosting design will be created based on the customer’s requirements. This will include a structural design, the monthly estimated cost, and estimated installation services.

Migrate Local MS SQL Database To Azure 4-Wk [Akvelon]

Migrate Local MS SQL Database To Azure: 4-Wk: Akvelon will migrate your local Microsoft SQL Server workload to Azure SQL Database cloud services or an Azure virtual machine in a quick, safe, and cost-effective manner.

Migrate Your Websites 5-Wk Implementation [Akvelon]

Migrate Your Websites: 5-Wk Implementation: Migrating your websites to Azure Infrastructure-as-a-Service is a big step for your business. Akvelon will migrate and deploy your web apps to Azure, allowing you to take advantage of Azure’s hosting services and scalability.

Modern Cloud Apps 1-Day Virtual Workshop [Dynamics Edge]

Modern Cloud Apps: 1-day Virtual Workshop: This workshop will teach you how to implement an end-to-end solution for e-commerce that is based on Azure App Service, Azure Active Directory, and Visual Studio Online. We suggest students first take the Azure Essentials course.

Permissioned Private Blockchain - 4-Wk PoC [Skcript]

Permissioned Private Blockchain – 4-Wk PoC: Before businesses can use blockchain to solve their problems, they must understand blockchain’s technical capabilities. Skcript helps you evaluate, ideate, and build blockchain solutions for your business needs.

Zero Dollar Down SAP Migration 1-Day Assessment [Wharfedale Technologies]

Zero Dollar Down SAP Migration 1-Day Assessment: Wharfedale Technologies empowers clients in SAP digital transformation by migrating SAP landscapes to Microsoft Azure for no upfront costs. This model helps customers overcome their budget and resource challenges.

Migrate Windows Server 2008 to Azure with Azure Site Recovery

The content below is taken from the original ( Migrate Windows Server 2008 to Azure with Azure Site Recovery), to continue reading please visit the site. Remember to respect the Author & Copyright.

For close to 10 years now, Windows Server 2008/2008 R2 has been a trusted and preferred server platform for our customers. With millions of instances deployed worldwide, our customers run many of their business applications, including their most critical ones, on the Windows Server 2008 platform.

With the end of support for Windows Server 2008 in January 2020 fast approaching, now is a great opportunity for customers running Windows Server 2008 to modernize their applications and infrastructure and take advantage of the power of Azure. But we know that the process of digital transformation doesn’t happen overnight. There are some great new offers that customers running their business applications on Windows Server 2008 can benefit from as they get started on their digital transformation journey in Azure. One option is to migrate servers running Windows Server 2008 to Azure and get extended security updates for three years past the end-of-support date, at no additional cost. In other words, if you choose to run your applications on Windows Server 2008 on Azure virtual machines, you get extended security updates for free. Further, with Azure Hybrid Benefit you can realize great savings on license costs for the Windows Server 2008 machines you migrate to Azure.

How do I migrate my servers?

This is where Azure Site Recovery comes in. Azure Site Recovery lets you easily migrate your Windows Server 2008 machines, including the operating system, data, and applications, to Azure. All you need to do is perform a few basic setup steps, create storage accounts in your Azure subscription, and then get started with Azure Site Recovery by replicating servers to your storage accounts. Azure Site Recovery orchestrates the replication of data and lets you migrate replicating servers to Azure when you are ready. You can use Azure Site Recovery to migrate servers running on VMware or Hyper-V virtual machines, as well as physical servers.
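
As a small illustration of the storage account setup step (this is not part of Azure Site Recovery itself), creating a target account can be scripted with the azure-mgmt-storage Python SDK along these lines. The resource group, account name, and region are placeholders, and method names such as begin_create follow recent SDK versions, so treat this as a sketch rather than a definitive recipe.

    import os
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient

    client = StorageManagementClient(
        credential=DefaultAzureCredential(),
        subscription_id=os.environ["AZURE_SUBSCRIPTION_ID"],
    )

    # Create a storage account that will receive the replicated data.
    poller = client.storage_accounts.begin_create(
        resource_group_name="asr-migration-rg",
        account_name="asrmigrationdata001",   # must be globally unique
        parameters={
            "location": "eastus",
            "kind": "StorageV2",
            "sku": {"name": "Standard_LRS"},
        },
    )
    account = poller.result()
    print(account.name, account.provisioning_state)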

I’m running the 32 bit version of Windows Server 2008. Can I use Azure Site Recovery?

Absolutely. Azure Site Recovery already supports migration of servers running Windows Server 2008 R2 to Azure. To make it easier for customers to take advantage of the new offers available to them, Azure Site Recovery has now added the ability to migrate servers running Windows Server 2008, including both 32-bit and 64-bit versions, to Azure.

Customers eligible for the Azure Hybrid Benefit can configure Azure Site Recovery to apply the benefit to the servers they are migrating and save on licensing costs for Windows Server. Azure Site Recovery is also completely free to use if you complete your migration within 31 days.

Ready to migrate Windows Server 2008 32-bit and 64-bit machines to Azure? Get started.

Azure Block Blob Storage Backup

The content below is taken from the original ( Azure Block Blob Storage Backup), to continue reading please visit the site. Remember to respect the Author & Copyright.

Azure Blob Storage is Microsoft’s massively scalable cloud object store. Blob Storage is ideal for storing any unstructured data such as images, documents and other file types. Read this Introduction to object storage in Azure to learn more about how it can be used in a wide variety of scenarios.

The data in Azure Blob Storage is always replicated to ensure durability and high availability. Azure Storage replication copies your data so that it is protected from planned and unplanned events, including transient hardware failures, network or power outages, and massive natural disasters. You can choose to replicate your data within the same data center, across zonal data centers within the same region, or even across regions. Find more details on storage replication.

Although Blob storage supports replication out-of-box, it’s important to understand that the replication of data does not protect against application errors. Any problems at the application layer are also committed to the replicas that Azure Storage maintains. For this reason, it can be important to maintain backups of blob data in Azure Storage. 

Currently Azure Blob Storage doesn’t offer an out-of-the-box solution for backing up block blobs. In this blog post, I will design a back-up solution that can be used to perform weekly full and daily incremental back-ups of storage accounts containing block blobs for any create, replace, and delete operations. The solution also walks through storage account recovery should it be required.

The solution makes use of the following technologies to achieve this back-up functionality:

  • Azure Storage Queues – In our scenario, we will publish blob events to Azure Storage Queues to support daily incremental back-ups.

  • AzCopy – AzCopy is a command-line utility designed for copying data to/from Microsoft Azure Blob, File, and Table storage, using simple commands designed for optimal performance. You can copy data between a file system and a storage account, or between storage accounts. In our scenario, we use AzCopy for the full back-up, copying content from the source storage account to the destination storage account.
  • Event Grid – Azure Storage events allow applications to react to the creation and deletion of blobs without the need for complicated code or expensive and inefficient polling services. Instead, events are pushed through Azure Event Grid to subscribers such as Azure Functions, Azure Logic Apps, or Azure Storage Queues (a short sketch of consuming these queued events follows this list).
  • Event Grid extension – Used to route the storage events to Azure Queue storage. At the time of writing this blog, this feature is in preview. To use it, you must install the Event Grid extension for Azure CLI. You can install it with az extension add --name eventgrid.
  • Docker Container – Used to host the listener that reads the events from Azure Queue Storage. Please note that the sample code provided with this blog is a .NET Core application that can be hosted on a platform of your choice; it has no hard dependency on Docker containers.
  • Azure Table Storage – Used to keep the event metadata for incremental back-ups, which is then read while performing a restore. Note that you can store the event metadata in a database of your choice, such as Azure SQL Database or Cosmos DB; changing the database will require code changes in the sample solution.
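
To make the Event Grid to Storage queue flow above concrete, here is a minimal sketch, written in Python for illustration rather than the .NET Core used by the sample listener, of dequeuing blob events and pulling out the fields the backup needs. The queue name and connection string are placeholders, and the message body is decoded defensively because queued Event Grid payloads may arrive base64-encoded.

    import base64
    import json
    from azure.storage.queue import QueueClient

    queue = QueueClient.from_connection_string(
        conn_str="<source-account-connection-string>",
        queue_name="blob-backup-events",
    )

    for message in queue.receive_messages(messages_per_page=16):
        try:
            event = json.loads(message.content)
        except ValueError:
            event = json.loads(base64.b64decode(message.content))
        if isinstance(event, list):           # some deliveries wrap events in a list
            event = event[0]

        event_type = event["eventType"]       # e.g. Microsoft.Storage.BlobCreated
        blob_url = event["data"]["url"]       # full URL of the affected blob
        event_time = event["eventTime"]       # used to pick the wkNN/dyN folder
        print(event_type, blob_url, event_time)

        queue.delete_message(message)         # remove once processed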

Introduction

Based on my experience in the field, I have noticed that most customers require full and incremental backups taken on specific schedules. Let’s say you have a requirement to have weekly full and daily incremental backups. In the case of a disaster, you need a capability to restore the blobs using the backup sets.

High Level architecture/data flow

Here is the high-level architecture and data flow of the proposed solution to support incremental back-up.

Here is the detailed logic followed by the .NET Core-based listener while copying the data for an incremental backup from the source storage account to the destination storage account.

While performing the back-up operation, the listener performs the following steps:

  1. Creates a new blob container in the destination storage account for every year, like “2018”.
  2. Creates a logical sub-folder for each week under the year container, like “wk21”. If no files are created or deleted during a given week, no logical folder is created for it. CalendarWeekRule.FirstFullWeek is used to determine the week number.
  3. Creates a logical sub-folder for each day of the week under the year and week containers, like dy0, dy1, dy2. If no files are created or deleted on a given day, no logical folder is created for that day.
  4. While copying the files, the listener changes the source container names to logical folder names in the destination storage account (a sketch of this copy step follows the example below).

Example:

SSA1 (Source Storage Account) -> Images (Container) -> Image1.jpg

Will move to:

DSA1 (Destination Storage Account) -> 2018 (Container) -> wk2 (Logical Folder) -> dy0 (Logical Folder) -> Images (Logical Folder) -> Image1.jpg
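
The following is a minimal Python sketch of that copy step (the actual sample is a .NET Core listener): it derives the year, week, and day logical folders from the event time, starts a server-side copy into the destination account, and records the event in Table storage for later restore. Account names and connection strings are placeholders, ISO week numbering is used here as an approximation of CalendarWeekRule.FirstFullWeek, and the cross-account copy assumes the source URL is readable (for example via a SAS token).

    from datetime import datetime
    from urllib.parse import urlparse

    from azure.data.tables import TableClient
    from azure.storage.blob import BlobServiceClient

    dest = BlobServiceClient.from_connection_string("<destination-connection-string>")
    events_table = TableClient.from_connection_string(
        "<metadata-connection-string>", table_name="backupevents"
    )

    def backup_blob_created(blob_url: str, event_time: str, event_id: str) -> None:
        """Copy one created/replaced blob into the dated backup layout."""
        when = datetime.strptime(event_time[:19], "%Y-%m-%dT%H:%M:%S")
        year = str(when.year)
        week = f"wk{when.isocalendar()[1]}"       # ISO week as an approximation
        day = f"dy{when.isocalendar()[2] % 7}"    # Sunday-based day index

        # e.g. /images/image1.jpg -> container "images", blob "image1.jpg"
        source_path = urlparse(blob_url).path.lstrip("/")
        container, _, blob_name = source_path.partition("/")

        dest_container = dest.get_container_client(year)
        if not dest_container.exists():
            dest_container.create_container()

        backup_path = f"{week}/{day}/{container}/{blob_name}"
        dest_blob = dest_container.get_blob_client(backup_path)
        # Server-side asynchronous copy; the source must be readable
        # (public access or a SAS token appended to blob_url).
        dest_blob.start_copy_from_url(blob_url)

        # Record the operation so restore.utility can replay it later.
        events_table.create_entity({
            "PartitionKey": when.strftime("%Y%m%d"),
            "RowKey": event_id,
            "Operation": "BlobCreated",
            "SourceUrl": blob_url,
            "DestinationPath": f"{year}/{backup_path}",
        })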

Here are the high-level steps to configure incremental backup

  1. Create a new storage account (destination) where you want to take the back-up.
  2. Create an Event Grid subscription for the storage account (source) to store the create/replace and delete events in an Azure Storage queue. The command to set up the subscription is provided on the samples site.
  3. Create a table in Azure Table storage where the Event Grid events will finally be stored by the .NET listener.
  4. Configure the .NET listener (backup.utility) to start taking the incremental backup. Please note that there can be as many instances of this listener as needed to perform the backup, based on the load on your storage account. Details on the listener configuration are provided on the samples site.

Here are the high-level steps to configure full backup

  1. Schedule AzCopy at the start of the week, i.e., Sunday 12:00 AM, to move the complete data from the source storage account to the destination storage account.
  2. Use AzCopy to move the data into a logical folder like “fbkp” under the corresponding year container and week folder in the destination storage account.
  3. You can schedule AzCopy on a VM, as a Jenkins job, etc., depending on your technology landscape.

In case of a disaster, the solution provides an option to restore the storage account by choosing one weekly full back-up as a base and applying the changes on top of it from the incremental back-ups. Please note that this is just one of the possible approaches: you may instead choose to restore by applying only the logs from the incremental backups, but that can take longer depending on the restore period.

Here are the high-level steps to configure restore

  1. Create a new storage account (destination) where the data needs to be restored.
  2. Move the data from the full back-up folder “fbkp” to the destination storage account using AzCopy.
  3. Initiate the incremental restore process by providing the start date and end date to restore.utility. Details on the configuration are provided on the samples site.

For example, the restore process reads the data from table storage for the period 01/08/2018 to 01/10/2018 sequentially to perform the restore.

For each read record, the restore process adds, updates, or deletes the file in the destination storage account.
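
For illustration, a minimal Python sketch of that replay loop could look like the following. It assumes both create/replace and delete events were logged as entities shaped like the one in the earlier backup sketch (a PartitionKey of yyyymmdd plus Operation and DestinationPath fields), that the original containers already exist in the restore account (for example, recreated by the full-backup restore in step 2), and that the backup account is readable via public access or a SAS token. A real implementation should also order events by time within each day before replaying them.

    from azure.data.tables import TableClient
    from azure.storage.blob import BlobServiceClient

    events_table = TableClient.from_connection_string(
        "<metadata-connection-string>", table_name="backupevents"
    )
    restore_account = BlobServiceClient.from_connection_string(
        "<restore-account-connection-string>"
    )
    backup_base_url = "https://<backup-account>.blob.core.windows.net"

    def restore_range(start_day: str, end_day: str) -> None:
        """Replay logged events whose PartitionKey (yyyymmdd) is in range."""
        query = f"PartitionKey ge '{start_day}' and PartitionKey le '{end_day}'"
        for entity in events_table.query_entities(query):
            # DestinationPath looks like 2018/wk2/dy0/images/image1.jpg;
            # everything after year/week/day is the original container/blob.
            _, _, _, container, blob_name = entity["DestinationPath"].split("/", 4)
            target = restore_account.get_blob_client(container, blob_name)

            if entity["Operation"] == "BlobCreated":
                # Copy the backed-up blob back into its original container.
                target.start_copy_from_url(
                    f"{backup_base_url}/{entity['DestinationPath']}"
                )
            elif entity["Operation"] == "BlobDeleted":
                target.delete_blob()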

Supported Artifacts

Find source code and instructions to setup the back-up solution.

Considerations/limitations

  • Blob Storage events are available in Blob Storage accounts and in General Purpose v2 storage accounts only. Hence the storage account configured for the back-up should be either a Blob storage account or a General Purpose v2 account. Find out more by visiting our Reacting to Blob Storage events documentation.
  • Blob storage events are fired for creates, replaces, and deletes. Hence, modifications to blobs are not captured at this point in time, but they will eventually be supported.
  • If a user creates a file at T1 and deletes the same file at T10, and the backup listener has not yet copied that file, you won’t be able to restore it from the backup. For these kinds of scenarios, you can enable soft delete on your storage account and either modify the solution to support restoring from soft delete or recover these missed files manually.
  • Since the restore operation reads the logs sequentially, it can take a considerable amount of time to complete. The actual time can span hours or days, and the correct duration can be determined only by performing a test.
  • AzCopy is used to perform the weekly full back-up. The duration of execution will depend on the data size and can span hours or days.

Conclusion

In this blog post, I’ve described a proof of concept for how you would add incremental backup support to a separate storage account for Azure Blobs. The necessary code samples, descriptions, and background for each step are provided to allow you to create your own solution, customized for what you need.

    Meet the mini.m and see the si.zeRO in London on Monday

    The content below is taken from the original ( Meet the mini.m and see the si.zeRO in London on Monday), to continue reading please visit the site. Remember to respect the Author & Copyright.

    Both small, but still visible without the use of a magnifying glass! This year’s Wakefield show (report to be started soon, at some point) saw the launch of a new computer from R-Comp, called the mini.m. The system is most easily described as a mini version of the ARMSX ARMX6, squeezed down so that […]

    Work-Bench enterprise report predicts end of SaaS could be coming

    The content below is taken from the original ( Work-Bench enterprise report predicts end of SaaS could be coming), to continue reading please visit the site. Remember to respect the Author & Copyright.

    Work-Bench, a New York City venture capital firm that spends a lot of time around Fortune 1000 companies, has put together The Work-Bench Enterprise Almanac: 2018 Edition, which you could think of as a State of the Enterprise report. It’s somewhat like Mary Meeker’s Internet Trends report, but with a focus on the tools and technologies that will be having a major impact on the enterprise in the coming year.

    Perhaps the biggest take-away from the report could be that the end of SaaS as we’ve known it could be coming, if modern tools make it easier for companies to build software themselves. More on this later.

    While the report writers state that their findings are based at least partly on anecdotal evidence, it is clearly an educated set of observations and predictions related to the company’s work with enterprise startups and the large companies they tend to target.

    As they wrote in their Medium post launching the report, “Our primary aim is to help founders see the forest from the trees. For Fortune 1000 executives and other players in the ecosystem, it will help cut through the noise and marketing hype to see what really matters.” Whether that’s the case will be in the eye of the reader, but it’s a comprehensive attempt to document the state of the enterprise as they see it, and there are not too many who have done that.

    The big picture

    The report points out the broader landscape in which enterprise companies — startups and established players alike — are operating today. You have the traditional tech companies like Cisco and HP, the mega cloud companies like Amazon, Microsoft and Google, the Growth Guard with companies like Snowflake, DataDog and Sumo Logic, and the New Guard: early-stage enterprise companies gunning for the more established players.

    As the report states, the mega cloud players are having a huge impact on the industry by providing the infrastructure services for startups to launch and grow without worrying about building their own data centers or scaling to meet increasing demand as a company develops.

    The mega clouders also scoop up a fair number of startups. Yet they don’t devote quite the level of revenue to M&A as you might think based on how acquisitive the likes of Salesforce, Microsoft and Oracle have tended to be over the years. In fact, in spite of all the action and multi-billion deals we’ve seen, Work-Bench sees room for even more.

    It’s worth pointing out that Work-Bench predicts Salesforce itself could become a target for mega cloud M&A action. They are predicting that either Amazon or Microsoft could buy the CRM giant. We saw such speculation several years ago, and it turned out that Salesforce was too rich for even these companies’ blood. While they may have more cash to spend now, the price has probably only gone up as Salesforce acquires more and more companies and its revenue has surpassed $10 billion.

    About those mega trends

    The report dives into 4 main areas of coverage, none of which are likely to surprise you if you read about the enterprise regularly in this or other publications:

    • Machine Learning
    • Cloud
    • Security
    • SaaS

    All of these are really interconnected: SaaS is part of the cloud, everything needs security, and all of them will be (if they aren’t already) taking advantage of machine learning. Work-Bench is not seeing it in such simple terms, of course, diving into each area in detail.

    The biggest take-away is perhaps that infrastructure could end up devouring SaaS in the long run. Software as a Service grew out of a couple of earlier trends, the first being the rise of the Web as a way to deliver software, then the rise of mobile to move it beyond the desktop. The cloud-mobile connection is well documented and allowed companies like Uber and Airbnb, as just a couple of examples, to flourish by providing scalable infrastructure and a computer in our pockets to access their services whenever we needed them. These companies could never have existed without the combination of cloud-based infrastructure and mobile devices.

    End of SaaS dominance?

    But today, Work-Bench is saying that we are seeing some other trends that could be tipping the scales back to infrastructure. That includes containers and microservices, serverless, Database as a Service and React for building front ends. Work-Bench argues that if every company is truly a software company, these tools could make it easier for companies to build these kinds of services cheaply and easily, and possibly bypass the SaaS vendors.

    What’s more, they suggest that if these companies are doing mass customization to these services, then it might make more sense to build instead of buy, at least on one level. In the past, we have seen what happens when companies try to take these kinds of massive software projects on themselves and it hardly ever ended well. They were usually bulky, difficult to update and put the companies behind the curve competitively. Whether simplifying the entire developer tool kit would change that remains to be seen.

    They don’t necessarily see companies running wholesale away from SaaS just yet to do this, but they do wonder if developers could push this trend inside of organizations as more tools appear on the landscape to make it easier to build your own.

    The remainder of the report goes in depth into each of these trends, and this article just has scratched the surface of the information you’ll find there. The entire report is embedded below.

    Programmable Badge uses E-Ink and ESP8266

    The content below is taken from the original ( Programmable Badge uses E-Ink and ESP8266), to continue reading please visit the site. Remember to respect the Author & Copyright.

    You’ve probably noticed that the hacker world is somewhat enamored with overly complex electronic event badges. Somewhere along the line, we went from using a piece of laminated paper on a lanyard to custom designed gadgets that pack in enough hardware that they could have passed for PDAs not that long ago. But what if there was a way to combine this love for weighing down one’s neck with silicon jewelry and the old school “Hello my name is…” stickers?

    [Squaro Engineering] might have the solution with Badgy, their multi-function e-ink name…well, badge. Compatible with the Arduino SDK, it can serve as anything from a weather display to a remote for your smart home. Oh, and we suppose in an absolute emergency it could be used to avoid having to awkwardly introduce yourself to strangers.

    Powered by an ESP-12F, Badgy features a 2.9″ 296×128 E-Ink display and a five-way tactile switch for user input. The default firmware includes support for WiFiManager and OTA updates to make uploading your own binaries as easy as possible, and a number of example Sketches are provided to show you the ropes. Powered by a LIR2450 3.6 V lithium-ion rechargeable coin cell, it can run for up to 35 days in deep sleep or around 5 hours of heavy usage.

    Schematics, source code, and a Bill of Materials are all available under the MIT license if you want to try your hand at building your own, and assembled badges are available on Tindie. While it might not be as impressive as a retro computer hanging around your neck, it definitely looks like an interesting platform to hack on.

    Repair or Replace

    The content below is taken from the original ( Repair or Replace), to continue reading please visit the site. Remember to respect the Author & Copyright.

    Just make sure all your friends and family are out of the car, or that you've made backup friends and family at home.

    XYZPrinting announces the da Vinci Color Mini

    The content below is taken from the original ( XYZPrinting announces the da Vinci Color Mini), to continue reading please visit the site. Remember to respect the Author & Copyright.

    XYZPrinting may have finally cracked the color 3D printing code. Their latest machine, the $1,599 da Vinci Color Mini is a full color printer that uses three CMY ink cartridges to stain the filament as it is extruded, allowing for up to 15 million color combinations.

    The printer is currently available for pre-order on Indiegogo for $999.

    The printer can build objects 5.1″ x 5.1″ x 5.1″ in size and it can print PLA or PETG. A small ink cartridge stains the 3D Color-inkjet PLA as it comes out, creating truly colorful objects.

    “Desktop full-color 3D printing is here. Now, consumers can purchase an easy-to-operate, affordable, compact full-color 3D printer for $30,000 less than market rate. This is revolutionary because we are giving the public access to technology that was once only available to industry professionals,” said Simon Shen, CEO of XYZprinting.

    The new system is aimed at educational and home markets and, at less than $1,000, it hits a unique and important sweet spot in terms of price. While the prints aren’t perfect, being able to print in full color for the price of a nicer single-color 3D printer is pretty impressive.

    Performing VM mass migrations to Google Cloud with Velostrata

    The content below is taken from the original ( Performing VM mass migrations to Google Cloud with Velostrata), to continue reading please visit the site. Remember to respect the Author & Copyright.

    Back in May, Google Cloud announced its intent to acquire Velostrata and its enterprise cloud migration technology. Since then, our Professional Services Organization has participated in large VM migration initiatives and observed the tremendous value that our customers gain from using Velostrata to migrate their virtualized workloads to Google Cloud Platform (GCP).

    Velostrata decouples compute from storage and adapts workloads on the fly for execution in GCP. This means that your workloads start running almost immediately in GCP while the cold data migration takes place transparently in the background, but without any performance degradation to your workload. While following a prescribed process, our customers used Velostrata to migrate hundreds of VMs in just a few short weeks, and were able to avoid the operational difficulties that traditional migration methods interject. Through these experiences, our Professional Services team formed a lift-and-shift practice to assist our enterprise customers complete high-throughput VM migrations.

    The secret to these migrations’ success is Velostrata’s smarts. Rather than take a deep copy mirroring approach, Velostrata uses an agentless technology to bootstrap VMs on the cloud, then streams in real-time compressed, deduped, and encrypted disk blocks from the original VM. Within minutes, the replacement cloud VM is up in the environment that was prepared for it and has access to the full original context. Your migration ops team quickly performs a smoke test of the application on the cloud and hands the app back to its owners. Velostrata, meanwhile, transfers the full content of the virtualized server transparently in the background. This all happens while maintaining data consistency in case a rollback is needed. When the cloud server is ready, Velostrata then cuts over (“detaches”) the newly deployed cloud environment and completes the migration.

    This Velostrata architectural view shows the components, deployment and interactions.

    How the Velostrata migration process works

    Our experience building the Velostrata lift-and-shift practice demonstrated that dividing migration efforts into three major phases yielded the best results.

    1. An initial proof of concept to build user confidence in the technology and methodology.

    2. Foundational activities, including setting up the GCP environments to receive the applications and the required shared services (LDAP, DNS, SSO, etc.).

    3. The migration phase itself, organized into multiple biweekly sprints.

    The migration phase is composed of three main activities, each with its own dedicated team, starting with discovery, followed by the migration itself and finally a post-migration transition. These activities take place in sequence and repeat on every sprint. Let’s explore these further.

    1. Discovery

    This first step in the migration, the discovery process, gathers relevant information about the source servers (configuration and dependencies) to be migrated. During this phase, the migration schedule for the sprint is established and communicated to the application teams. The discovery step is the foundation for a successful migration, and thus takes a disproportionate amount of time compared to subsequent steps. In our experience, it’s common to use the first week out of a two-week migration sprint for discovery.

    At a high level, the discovery phase process can be summarized in the following set of tasks:

    1. Evaluate the migration schedule to date and identify the next batch of servers to migrate.

    2. Model the characteristics of the servers to be migrated in order to identify dependencies and required services.

    3. In parallel with above, send application owners a questionnaire that gathers technical information about the servers.

    4. Compare and reconcile information about the servers’ configurations and dependencies from the questionnaire.

    5. Set, refine and communicate the final migration schedule.

    The questionnaire, in turn, should gather the following data points:

    • Application team operational points of contacts

    • Machine specifications (cores, RAM, local disk, etc.)

    • Operating system and specific configurations

    • Network ingress/egress bandwidth needs

    • Associated firewall rules

    • DNS entries

    • Databases hosted/accessed

    • Required attached storage

    • Load balancing needs

    • Authentication needs

    • Integrated monitoring systems

    • Associated cost center
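
    As an illustration only (this is not part of Velostrata or any Google tooling), the questionnaire answers map naturally onto a simple per-server record that a migration team can collect and then reconcile against what discovery tooling reports; the field names below are assumptions derived from the list above.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class ServerDiscoveryRecord:
            """Owner-reported answers for one server, reconciled during discovery."""
            hostname: str
            app_owner_contacts: List[str]
            cores: int
            ram_gb: int
            local_disk_gb: int
            operating_system: str
            network_bandwidth_mbps: int
            firewall_rules: List[str] = field(default_factory=list)
            dns_entries: List[str] = field(default_factory=list)
            databases: List[str] = field(default_factory=list)
            attached_storage: List[str] = field(default_factory=list)
            load_balanced: bool = False
            auth_dependencies: List[str] = field(default_factory=list)
            monitoring_systems: List[str] = field(default_factory=list)
            cost_center: str = ""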

    2. Migration

    The actual migration moves target VMs from on-premises data centers (and other cloud providers) into GCP. Because of Velostrata’s sophistication and automation, the migration is mostly transparent to the application, involving only a short (5 to 10 minute) downtime window that happens upfront and is easy to predict and schedule, as well as almost no operational work. In our process, migration takes place during the second week of a two-week sprint.

    Sample Sprint Migration
    Sample two week sprint migration flow, where week 1 is dedicated to discovery and planning, and week 2 is dedicated to executing the accelerated migration sprint. We see in the graph that all VMs were running successfully very quickly, within the first day, while the data migration continued in the background until completing a few days later. In total, 75 VMs and 6 Terabytes of data were migrated in about 3 days.

    3. Post migration

    The post-migration step involves placing the migrated VMs and applications under observation and responding to any user-reported incidents, usually for a duration of a few days. The app then transitions to regular operations and you can decommission the original servers.

    Using Velostrata for your migrations

    We are very excited to see these early successes with our enterprise customers’ mass migrations to GCP, and we are even more excited to offer the Velostrata migration solution to GCP customers for free when migrating to GCP. When you use Velostrata within this lift-and-shift practice, migrating large numbers of VMs to the cloud has never been easier. If your company is planning a large migration of servers and would like our help, feel free to reach out to your GCP sales representative and ask to involve Google Professional Services. Happy migration!

    IBM teams with Maersk on new blockchain shipping solution

    The content below is taken from the original ( IBM teams with Maersk on new blockchain shipping solution), to continue reading please visit the site. Remember to respect the Author & Copyright.

    IBM and shipping giant Maersk have been working together for the last year developing a blockchain-based shipping solution called TradeLens. Today they moved the project from beta into limited availability.

    Marie Wieck, GM for IBM Blockchain says the product provides a way to digitize every step of the global trade workflow, transforming it into a real-time communication and visual data sharing tool.

    TradeLens was developed jointly by the two companies with IBM providing the underlying blockchain technology and Maersk bringing the worldwide shipping expertise. It involves three components: the blockchain, which provides a mechanism for tracking goods from factory or field to delivery, APIs for others to build new applications on top of the platform these two companies have built, and a set of standards to facilitate data sharing among the different entities in the workflow such as customs, ports and shipping companies.

    Wieck says the blockchain really changes how companies have traditionally tracked shipped goods. While many of the entities in the system have digitized the process, the data they have has been trapped in silos, and previous attempts at sharing, like EDI, have been limited. “The challenge is they tend to think of a linear flow and you really only have visibility one [level] up and one down in your value chain,” she said.

    The blockchain provides a couple of obvious advantages over previous methods. For starters, she says it’s safer because data is distributed, making it much more secure with digital encryption built in. The greatest advantage though is the visibility it provides. Every participant can check any aspect of the flow in real time, or an auditor or other authority can easily track the entire process from start to finish by clicking on a block in the blockchain instead of requesting data from each entity manually.

    While she says it won’t entirely prevent fraud, it does help reduce it by putting more eyeballs onto the process. “If you had fraudulent data at start, blockchain won’t help prevent that. What it does help with is that you have multiple people validating every data set and you get greater visibility when something doesn’t look right,” she said.

    As for the APIs, she sees the system becoming a shipping information platform. Developers can build on top of that, taking advantage of the data in the system to build even greater efficiencies. The standards help pull it together and align with APIs, such as providing a standard Bill of Lading. They are starting by incorporating existing industry standards, but are also looking for gaps that slow things down to add new standard approaches that would benefit everyone in the system.

    So far, the companies have 94 entities in 300 locations around the world using TradeLens including customs authorities, ports, cargo shippers and logistics companies. They are opening the program to limited availability today with the goal of a full launch by the end of this year.

    Wieck ultimately sees TradeLens as a way to facilitate trade by building in trust, the end goal of any blockchain product. “By virtue of already having an early adopter program, and having coverage of 300 trading locations around the world, it is a very good basis for the global exchange of information. And I personally think visibility creates trust, and that can help in a myriad of ways,” she said.

    Azure Data Factory Visual tools now supports GitHub integration

    The content below is taken from the original ( Azure Data Factory Visual tools now supports GitHub integration), to continue reading please visit the site. Remember to respect the Author & Copyright.

    GitHub is a development platform that allows you to host and review code, manage projects, and build software alongside millions of other developers, from open source to business. Azure Data Factory (ADF) is a managed data integration service in Azure that allows you to iteratively build, orchestrate, and monitor your Extract Transform Load (ETL) workflows. You can now integrate your Azure Data Factory with GitHub. The ADF visual authoring integration with GitHub allows you to collaborate with other developers and do source control and versioning of your data factory assets (pipelines, datasets, linked services, triggers, and more). Simply click ‘Set up Code Repository’ and select ‘GitHub’ from the Repository Type dropdown to get started.

    ADF-GitHub integration allows you to use either public GitHub or GitHub Enterprise, depending on your requirements. You can use OAuth authentication to log in to your GitHub account. ADF automatically pulls the repositories in your GitHub account that you can select. You can then choose the branch that developers in your team can use for collaboration. You can also easily import all your current data factory resources to your GitHub repository.

    Once you enable ADF-GitHub integration, you can now save your data factory resources anytime in GitHub. ADF automatically saves the code representation of your data factory resources (pipelines, datasets, and more) to your GitHub repository. Get more information and detailed steps on enabling Azure Data Factory-GitHub integration.
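
    If you prefer to script the same setup rather than use the visual tools, a rough sketch with the azure-mgmt-datafactory Python SDK might look like the following. The FactoryGitHubConfiguration model and the factories.configure_factory_repo call are my reading of the management-plane equivalent and may differ between SDK versions, and every name below is a placeholder, so verify against the SDK reference before using it.

        import os
        from azure.identity import DefaultAzureCredential
        from azure.mgmt.datafactory import DataFactoryManagementClient
        from azure.mgmt.datafactory.models import (
            FactoryGitHubConfiguration,
            FactoryRepoUpdate,
        )

        adf = DataFactoryManagementClient(
            credential=DefaultAzureCredential(),
            subscription_id=os.environ["AZURE_SUBSCRIPTION_ID"],
        )

        repo = FactoryGitHubConfiguration(
            account_name="my-github-org",          # GitHub account or organization
            repository_name="adf-pipelines",
            collaboration_branch="main",
            root_folder="/",
            # host_name="https://github.mycompany.com",  # only for GitHub Enterprise
        )

        adf.factories.configure_factory_repo(
            location_id="eastus",                  # region of the data factory
            factory_repo_update=FactoryRepoUpdate(
                factory_resource_id=(
                    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
                    "Microsoft.DataFactory/factories/<factory-name>"
                ),
                repo_configuration=repo,
            ),
        )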

    Our goal is to continue adding features and improve the usability of Data Factory tools. Get started building pipelines easily and quickly using Azure Data Factory. If you have any feature requests or want to provide feedback, please visit the Azure Data Factory forum.

    Amazon launches an Alexa Auto SDK to bring its voice assistant to more cars

    The content below is taken from the original ( Amazon launches an Alexa Auto SDK to bring its voice assistant to more cars), to continue reading please visit the site. Remember to respect the Author & Copyright.

    Amazon this morning announced the launch of a toolkit for developers that will allow them to integrate Alexa into cars’ infotainment systems. The “Alexa Auto SDK” is available now on GitHub, and includes all the core Alexa functions like streaming media, smart home controls, weather reports, and support for Alexa’s tens of thousands of third-party skills. It will also add new features just for auto users, like navigation and search, Amazon says.

    The source code and function libraries will be in C++ and Java, allowing the vehicles to process audio inputs and triggers, then connect with the Alexa service, and handle the Alexa interactions.

    In addition, Amazon is offering a variety of sample apps, build scripts, and documentation supporting Android and QNX operating systems on ARM and x86 processor architectures.

    The SDK will allow for streaming media from Amazon Music, iHeartRadio, and Audible, for the time being, and will allow customers to place calls by saying the contact’s name or phone number. These will be launched over the native calling service in the vehicle.

    Plus, it can tap into a native turn-by-turn navigation system, when customers specify an address or point of interest, or if they cancel the navigation.

    A local search feature lets customers search for restaurants, movie theaters, grocery stores, hotels, and other business, and navigate to the location.

    This is not the first time Alexa has come to cars, by any means. Amazon has been working with car makers like Ford, BMW, SEAT, Lexus and Toyota, who have been integrating the voice assistant into select vehicles. Alexa is also available in older cars through a variety of add-on devices, like those from Anker, Muse (Speak Music), Garmin, and Logitech, for example.

    With this SDK, Amazon is opening the voice assistant to other developers building for auto, who don’t yet have a relationship with Amazon.

    ‘InPrivate Desktop’ Coming to Windows 10 Enterprise

    The content below is taken from the original ( ‘InPrivate Desktop’ Coming to Windows 10 Enterprise), to continue reading please visit the site. Remember to respect the Author & Copyright.

    Earlier this month, BleepingComputer.com ran a report on a new security feature in Windows 10 that was exposed during a bug-bash quest in the Feedback Hub. The new feature is installed as an app from the Microsoft Store. But according to Lawrence Abrams, the app wasn’t available in the Store despite the instructions found in the Feedback Hub.

    The text of the quest read: “Microsoft is Developing a Sandboxed ‘InPrivate Desktop’ for Windows 10 Enterprise. InPrivate Desktop (Preview) provides admins a way to launch a throwaway sandbox for secure, one-time execution of untrusted software. This is basically an in-box, speedy VM that is recycled when you close the app!”

    The prerequisites were listed as follows:

    • Windows 10 Enterprise
    • Builds 17718+
    • Branch: Any
    • Hypervisor capabilities enabled in BIOS
    • At least 4GB of RAM
    • At least 5GB free disk space
    • At least 2 CPU cores

    I tried to access a link provided in the text, referring to feature limitations, but it requires a Microsoft account associated with the Microsoft tenant. I suspect that this feature was only available for internal testing at the time of the bug bash.

    What is InPrivate Desktop for?

    While Windows 10 Enterprise users have the right to run one Windows 10 virtual machine, someone needs to set up the VM and potentially maintain it. But InPrivate Desktop looks to provide a readymade environment that users can spin up with no configuration and easily start from scratch each time InPrivate Desktop is launched. I don’t have any new technical details to share, but I think that InPrivate Desktop works like Windows Defender Application Guard (WDAG) and is based on container technology.

    WDAG provides Microsoft Edge users with a secure environment where the browser runs in a container that protects the underlying operating system if the browser session is exploited. WDAG was originally only available in the Enterprise SKU but Microsoft recently made it available to Windows 10 Professional users also. For more information on Windows Defender Application Guard, see Protect Users Against Malicious Websites Using Windows 10 Application Guard and Revisiting Application Guard in the Windows 10 April 2018 Update on Petri.

    If InPrivate Desktop turns out to work like WDAG, it will be a useful addition to the OS for organizations that want to remove administrative rights from users. One of the biggest issues with removing rights is that users can no longer install software that requires administrator privileges. InPrivate Desktop would give organizations more scope to remove administrative rights but still allow users some freedom to test new software or experiment with settings that aren’t available to standard users.

    Developers and system administrators might also find InPrivate Desktop useful when they need to spin up a test environment but don’t want to step through the Windows setup process, although there’s no word yet on if and when InPrivate Desktop will make it into Windows.

    Follow Russell on Twitter @smithrussell.

    The post ‘InPrivate Desktop’ Coming to Windows 10 Enterprise appeared first on Petri.

    How to remove Compatibility Tab from File Properties in Windows 10

    The content below is taken from the original ( How to remove Compatibility Tab from File Properties in Windows 10), to continue reading please visit the site. Remember to respect the Author & Copyright.

    There are various releases of Windows, such as the latest release, Windows 10, as well as Windows 8.1, Windows 8, Windows 7 and so on. With every release, Microsoft introduces a new set of features for applications, called APIs. These APIs […]

    This post How to remove Compatibility Tab from File Properties in Windows 10 is from TheWindowsClub.com.

    Security in plaintext: use Shielded VMs to harden your GCP workloads

    The content below is taken from the original ( Security in plaintext: use Shielded VMs to harden your GCP workloads), to continue reading please visit the site. Remember to respect the Author & Copyright.

    Trust is a prerequisite of moving to the cloud. When evaluating a cloud provider, you want to know that it helps keep your information safe, helps protect you from bad actors, and that you’re in control of your workloads.

    Trust has to be maintained starting from hardware and firmware, as well as host and guest operating systems. For example, a BIOS can be dynamically compromised by a bad NetBoot, or act as a “confused deputy” based on untrusted input reported by BIOS configuration parameters, leaving the OS vulnerable to privilege escalation attacks. A guest OS can also be dynamically compromised by attacking its kernel components via remote attack, by local code gaining escalation privileges, or by insiders (e.g., your privileged employees).

    That’s why we recently introduced Shielded VMs in beta, so you can be confident that workloads running on Google Cloud Platform (GCP) haven’t been penetrated by boot malware or firmware rootkits. Unfortunately, these threats can stay undetected for a long time, and the infected virtual machine continues to boot in a compromised state even after you’ve installed legitimate software. Working alongside Titan, a custom chip that establishes root-of-trust, Shielded VMs help assure you that when your VM boots, it’s running code that hasn’t been compromised.

    Shielded VMs provide the following security features:

    • Trusted firmware based on Unified Extensible Firmware Interface (UEFI) 2.3.1 to replace legacy BIOS sub-systems and enable UEFI Secure Boot capability

    • vTPM, a virtual Trusted Platform Module, which validates your guest VM pre-boot and boot integrity, and generates and protects encryption keys. vTPM is fully compatible with Trusted Computing Group TPM 2.0 specifications validated with FIPS 140-2 L1 cryptography. vTPM is required to enable Measured Boot. In addition, vTPM also enables the guest operating system to generate and securely protect keys or other secrets.

    • Secure Boot and Measured Boot to help protect VMs against boot- and kernel-level malware and rootkits.

    • Integrity measurements, collected as part of Measured Boot and available to you via Stackdriver, which help to identify any mismatch between the “healthy” baseline of your VM and its current runtime state.

    Secure and Measured Boot help ensure that your VM boots a known firmware and kernel software stack. vTPM underpins Measured Boot by providing guest VM instances with cryptographic functionality, i.e., cryptographically verifying this stack and allowing the VM to gain (or fail to gain) access to cloud resources. Secure Boot helps ensure that the system only runs authentic software, while Measured Boot gives a much more detailed picture about the integrity of the VM boot process and system software.

    With Shielded VMs, we aim to protect the system from the following attack vectors:

    • Malicious insiders within your organization: with Shielded VMs, malicious insiders within your organization can’t tamper with a guest VM image without those actions being logged. Nor can they alter sensitive crypto operations or easily exfiltrate secrets sealed with vTPM.

    • Guest system firmware via malicious guest firmware, including UEFI drivers.

    • Guest OS through malicious guest-VM kernel or user-mode vulnerabilities.

    Currently in beta, Shielded VMs are available for the following Google-curated images:

    • Windows Server 2012 R2 (Core and Datacenter)

    • Windows Server 2016 (Core and Datacenter)

    • Windows Server version 1709 Datacenter Core

    • Windows Server version 1803 Datacenter Core

    • Container-Optimized OS 68+

    • Ubuntu 18.04

    Container-Optimized OS is actively used in several GCP products including Google Kubernetes Engine, Cloud Machine Learning, Cloud Dataflow, and Cloud SQL.

    Now let’s talk about our hardening capabilities in more detail and explain how you can benefit from them.

    Boot basics

    As a refresher, here’s an overview of the current boot process on GCP:

    1. The Titan chip verifies that the production server has booted known system firmware.

    2. Assuming an uncompromised host OS BIOS, the Titan chip then verifies that the production server boots a Google-approved OS image.

    3. Assuming an uncompromised OS, it checks that the production server obtained its credentials and is ready to load the host OS with the KVM hypervisor.

    4. KVM then passes control to the VM instance’s UEFI firmware, which then configures the image properly, and loads the bootloader into system memory.

    5. The guest firmware boots and then passes execution control to the bootloader, which loads the initial Shielded OS image into system memory and passes the execution control to the guest operating system.

    6. The guest OS continues loading kernel drivers that are digitally signed and validates them using vTPM.

    Once those steps are complete, you have a fully loaded Shielded VM up and running. During boot, Shielded VMs also implement Secure Boot and Measured Boot, and perform runtime boot integrity monitoring.

    Secure Boot

    With the increase in privileged software attacks and rootkits, more customers require Secure Boot and a root-of-trust for their VMs. We enable UEFI Secure Boot via trusted firmware to help ensure the integrity of the system firmware and system boot loader. Secure Boot helps prevent malicious code from being loaded early in the boot sequence, where it could take control of the OS and mask its presence. With Secure Boot, we offer authenticated guest boot firmware and a bootloader along with cryptographically signed boot files. During every boot, Secure Boot makes sure that the UEFI firmware inspects EFI binaries for a valid digital signature, and verifies that the system firmware and system boot loader are signed with an authorized cryptographic key. If any component in the firmware is not properly signed, or not signed at all, the boot process is stopped.

    Measured Boot

    Working with the vTPM, the goal of Measured Boot is to ensure the integrity of the critical load path of boot and kernel drivers, offering protection against malicious modifications to the VM. This is accomplished by maintaining chain-of-trust measurements throughout the entire boot process, allowing the vTPM to validate that kernel and system drivers have not been tampered with, rolled back to signed-but-unpatched binaries, or loaded out of order.

    vTPM crypto processor

    Trusted Platform Module (TPM) devices have become the de facto standard for providing strong, low-cost cryptographic capabilities in modern computer systems. TPM adoption is on the rise, and many security scenarios take advantage of this capability. The goal of the vTPM service is to provide guest VM instances with TPM functionality that is TPM 2.0 compatible and FIPS 140-2 L1 certified. We have also exposed open-source Go TPM APIs (go-tpm), so you can easily use vTPMs to seal your secrets and keys, protecting them against exfiltration.

    Integrity measurements

    Shielded VMs offer easy access to integrity reports via Stackdriver, so you can obtain them for your own verification. We measure the integrity of your VMs against an implicitly trusted default baseline, and enable you to update that baseline. You can define your own policy and specify custom actions if the integrity report indicates that your VM does not meet your expected healthy baseline. You can use a Stackdriver Pub/Sub sink to associate custom actions with any integrity failure: for example, you can stop a VM instance and export it for forensic investigation. Or, if you know why the integrity checks failed, you can update the baseline to include those changes. From then on, we will measure the integrity state against the new definition you provided.
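
    As an illustration of what such a custom action could look like, the sketch below listens on a Pub/Sub subscription fed by a Stackdriver log sink and stops the affected instance. It is a minimal sketch, not the documented integration: the project, zone, subscription name, and the field carrying the instance name in the exported log entry are all assumptions to adapt to your own sink.

        # Minimal sketch: react to integrity-failure log entries exported to
        # Pub/Sub via a Stackdriver sink by stopping the affected VM so it can
        # be examined. Project, subscription, and payload field names are
        # assumptions; adjust them to match what your sink actually delivers.
        import json

        from google.cloud import pubsub_v1
        from googleapiclient import discovery

        PROJECT = "my-project"             # assumed project ID
        SUBSCRIPTION = "integrity-alerts"  # assumed Pub/Sub subscription

        compute = discovery.build("compute", "beta")
        subscriber = pubsub_v1.SubscriberClient()
        sub_path = subscriber.subscription_path(PROJECT, SUBSCRIPTION)

        def handle(message):
            entry = json.loads(message.data)
            labels = entry.get("resource", {}).get("labels", {})
            zone = labels.get("zone", "us-central1-a")
            # Assumed field: however your sink exposes the instance name.
            instance = entry.get("jsonPayload", {}).get("instanceName")
            if instance:
                # Stop the VM and leave it available for forensic investigation.
                compute.instances().stop(
                    project=PROJECT, zone=zone, instance=instance).execute()
            message.ack()

        future = subscriber.subscribe(sub_path, callback=handle)
        future.result()  # block and keep processing messages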

    Getting started with Shielded VMs

    In the beta release, you can create a VM instance from the GCP console, which gives you more granular control over Shielded VMs functionality. By default, all options are enabled.

    GCP Workload Create Instance
    When you create an instance with Shielded VMs configuration options, a shield icon next to the VM boot disk denotes that Shielded VMs are enabled.
    GCP Workload Boot Disk
    Boot disk selection with a Shielded VMs enabled image.

    You can adjust your Shielded VMs configuration options from the VM instance details page, including the option to enable or disable Secure Boot and vTPM, or enable/disable Integrity monitoring. You can also create instance templates that use Shielded VMs.

    GCP Workload Shielded VM Test
    VM instance details showing Secure Boot, vTPM and Integrity Monitoring enabled.

    You can also use the gcloud command-line tool and APIs to manage Shielded VMs settings, including creating new Shielded VMs and enabling or disabling individual Shielded VMs features on an existing instance. To support this, we created dedicated security IAM permissions, compute.instances.updateShieldedVmConfig and compute.instances.setShieldedVmIntegrityPolicy, which can be granted to custom IAM roles in addition to the default SecureAdmin IAM role. We want to ensure that highly privileged operations, like disabling Secure Boot or turning off vTPM or integrity monitoring, can be performed only by dedicated administrators, with the separation-of-duties principle in mind. These privileged operations are recorded in tamper-evident logs and shared with our users, so the actions can be monitored and audited.
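
    For scripted setups, here is a rough sketch of creating an instance with all three Shielded VMs options enabled via the Compute Engine API's Python client. The field names (shieldedVmConfig and its enable* settings, reflecting the beta naming), the image family, and the project and zone values are assumptions to verify against the current API reference.

        # Rough sketch: create a Shielded VM instance through the Compute Engine
        # beta API. Field names reflect the beta "shieldedVmConfig" naming and
        # should be checked against the current API reference; project, zone,
        # and image family are placeholders.
        from googleapiclient import discovery

        PROJECT = "my-project"   # placeholder
        ZONE = "us-central1-a"   # placeholder

        compute = discovery.build("compute", "beta")

        body = {
            "name": "shielded-demo",
            "machineType": f"zones/{ZONE}/machineTypes/n1-standard-1",
            "disks": [{
                "boot": True,
                "autoDelete": True,
                "initializeParams": {
                    # One of the Shielded-VM-enabled curated images listed above.
                    "sourceImage": "projects/ubuntu-os-cloud/global/images/family/ubuntu-1804-lts",
                },
            }],
            "networkInterfaces": [{"network": "global/networks/default"}],
            # Mirror the console defaults: all three options on.
            "shieldedVmConfig": {
                "enableSecureBoot": True,
                "enableVtpm": True,
                "enableIntegrityMonitoring": True,
            },
        }

        op = compute.instances().insert(project=PROJECT, zone=ZONE, body=body).execute()
        print("Started operation:", op["name"])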

    Conclusion

    Every day we hear of new methods of exploiting vulnerabilities in computing systems. Fortunately, while the attackers get more sophisticated, we’re working hard to stay one step ahead of them. Shielded VMs’ UEFI firmware, Secure Boot, Measured Boot, vTPM, and integrity monitoring offer integrity verification and enforcement for your VM boot process, giving you confidence in your business-critical cloud workloads. To learn more, watch the screencast below, fill out this form to hear about upcoming releases, and sign up for the Shielded VMs discussion group.

    https://youtu.be/9cdWxvCPgyg

    Alexa now tells you when it can answer old questions

    The content below is taken from the original ( Alexa now tells you when it can answer old questions), to continue reading please visit the site. Remember to respect the Author & Copyright.

    When you ask a voice assistant a question it doesn't have an answer for, that's usually the end of the story unless you're determined to look up the answer on another device. Amazon doesn't think the mystery should go unsolved, though. It's trotting…

    How to disable Active Monitoring in CCleaner Free

    The content below is taken from the original ( How to disable Active Monitoring in CCleaner Free), to continue reading please visit the site. Remember to respect the Author & Copyright.

    In the recent CCleaner 5.45 version released by Piriform/Avast, a new Active Monitoring feature has been introduced that is almost impossible to disable. This has invited angry reactions from the public over the flawed functioning of the ‘Active Monitoring’ feature. Even when you try to […]

    This post How to disable Active Monitoring in CCleaner Free is from TheWindowsClub.com.

    AWS IoT Device Defender Now Available – Keep Your Connected Devices Safe

    The content below is taken from the original ( AWS IoT Device Defender Now Available – Keep Your Connected Devices Safe), to continue reading please visit the site. Remember to respect the Author & Copyright.

    I was cleaning up my home office over the weekend and happened upon a network map that I created in 1997. Back then my fully wired network connected 5 PCs and two printers. Today, with all of my children grown up and out of the house, we are down to 2 PCs. However, our home mesh network is also host to 2 Raspberry Pis, some phones, a pair of tablets, another pair of TVs, a Nintendo 3DS (thanks, Eric and Ana), 4 or 5 Echo devices, several brands of security cameras, and random gadgets that I buy. I also have a guest network, temporary home to random phones and tablets, and to some of the devices that I don’t fully trust.

    This is, of course, a fairly meager collection compared to the typical office or factory, but I want to use it to point out some of the challenges that we all face as IoT devices become increasingly commonplace. I’m not a full-time system administrator. I set strong passwords and apply updates as I become aware of them, but security is always a concern.

    New AWS IoT Device Defender
    Today I would like to tell you about AWS IoT Device Defender. This new, fully-managed service (first announced at re:Invent) will help to keep your connected devices safe. It audits your device fleet, detects anomalous behavior, and recommends mitigations for any issues that it finds. It allows you to work at scale and in an environment that contains multiple types of devices.

    Device Defender audits the configuration of your IoT devices against recommended security best practices. The audits can be run on a schedule or on demand, and perform the following checks:

    Imperfect Configurations – The audit looks for expiring and revoked certificates, certificates that are shared by multiple devices, and duplicate client identifiers.

    AWS Issues – The audit looks for overly permissive IoT policies, Cognito IDs with overly permissive access, and ensures that logging is enabled.

    When issues are detected in the course of an audit, notifications can be delivered to the AWS IoT Console, as CloudWatch metrics, or as SNS notifications.
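
    As a sketch of doing the same from code, the boto3 call below enables a few audit checks and points audit notifications at an SNS topic. The role and topic ARNs are placeholders, and the check names should be confirmed against the Device Defender documentation.

        # Sketch: enable account-level audit checks and SNS notifications with
        # boto3. All ARNs are placeholders; verify the check names against the
        # current Device Defender documentation.
        import boto3

        iot = boto3.client("iot")

        iot.update_account_audit_configuration(
            roleArn="arn:aws:iam::123456789012:role/AuditRole",                      # placeholder
            auditNotificationTargetConfigurations={
                "SNS": {
                    "targetArn": "arn:aws:sns:us-east-1:123456789012:audit-alerts",  # placeholder
                    "roleArn": "arn:aws:iam::123456789012:role/AuditSnsRole",        # placeholder
                    "enabled": True,
                }
            },
            auditCheckConfigurations={
                "LOGGING_DISABLED_CHECK": {"enabled": True},
                "CONFLICTING_CLIENT_IDS_CHECK": {"enabled": True},
                "DEVICE_CERTIFICATE_EXPIRING_CHECK": {"enabled": True},
            },
        )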

    On the detection side, Device Defender looks at network connections, outbound packet and byte counts, destination IP addresses, inbound and outbound message rates, authentication failures, and more. You can set up security profiles, define acceptable behavior, and configure whitelists and blacklists of IP addresses and ports. An agent on each device is responsible for collecting device metrics and sending them to Device Defender. Devices can send metrics at intervals ranging from 5 minutes to 48 hours.

    Using AWS IoT Device Defender
    You can access Device Defender’s features from the AWS IoT Console, CLI, or via a full set of APIs. I’ll use the Console, as I usually do, starting at the Defend menu:

    The full set of available audit checks is available in Settings (any check that is enabled can be used as part of an audit):

    I can see my scheduled audits by clicking Audit and Schedules. Then I can click Create to schedule a new one, or to run one immediately:

    I create an audit by selecting the desired set of checks, and then save it for repeated use by clicking Create, or run it immediately:

    I can choose the desired recurrence:

    I can set the desired day for a weekly audit, with similar options for the other recurrence frequencies. I also enter a name for my audit and click Create (not shown in the screen shot):
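
    The same weekly audit can also be created programmatically. A hedged boto3 sketch follows; the audit name is arbitrary and the check names should be confirmed against the documentation.

        # Sketch: schedule a weekly audit, plus an on-demand run, with boto3.
        # Check names are examples; confirm them against the documentation.
        import boto3

        iot = boto3.client("iot")

        iot.create_scheduled_audit(
            scheduledAuditName="weekly-fleet-audit",
            frequency="WEEKLY",
            dayOfWeek="MON",
            targetCheckNames=[
                "DEVICE_CERTIFICATE_EXPIRING_CHECK",
                "CONFLICTING_CLIENT_IDS_CHECK",
                "IOT_POLICY_OVERLY_PERMISSIVE_CHECK",
            ],
        )

        # An on-demand run of a single check:
        iot.start_on_demand_audit_task(targetCheckNames=["LOGGING_DISABLED_CHECK"])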

    I can click Results to see the outcome of past audits:

    And I can click any audit to learn more:

    Device Defender allows me to create security profiles to describe the expected behavior for devices within a thing group (or for all devices). I click Detect and Security profiles to get started, and can see my profiles. Then I can click Create to make a new one:

    I enter a name and a description, and then model the expected behavior. In this case, I expect each device to send and receive less than 100K of network traffic per hour:

    I can choose to deliver alerts to an SNS topic (I’ll need to set up an IAM role if I do this):

    I can specify a behavior for all of my devices, or for those in specific thing groups:

    After setting it all up, I click Save to create my security profile:
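
    Here is roughly what the outbound side of that profile looks like when created through the API instead of the console; a minimal boto3 sketch in which the metric name, role, SNS target, and thing-group ARNs are placeholders and assumptions.

        # Minimal sketch: a security profile that flags devices sending more
        # than ~100 KB of outbound traffic per hour, with alerts to SNS. The
        # ARNs are placeholders and the metric name is an assumption to verify.
        import boto3

        iot = boto3.client("iot")

        iot.create_security_profile(
            securityProfileName="low-traffic-devices",
            behaviors=[{
                "name": "OutboundBytesUnder100K",
                "metric": "aws:all-bytes-out",        # outbound network traffic
                "criteria": {
                    "comparisonOperator": "less-than",
                    "value": {"count": 100 * 1024},   # ~100 KB
                    "durationSeconds": 3600,          # evaluated per hour
                },
            }],
            alertTargets={
                "SNS": {
                    "alertTargetArn": "arn:aws:sns:us-east-1:123456789012:defender-alerts",  # placeholder
                    "roleArn": "arn:aws:iam::123456789012:role/DefenderAlertRole",           # placeholder
                }
            },
        )

        # Attach the profile to a target (a thing group, or all things):
        iot.attach_security_profile(
            securityProfileName="low-traffic-devices",
            securityProfileTargetArn="arn:aws:iot:us-east-1:123456789012:all/things",  # placeholder
        )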

    Next, I can click Violations to identify things that are in conflict with the behavior that is expected of them. The History tab lets me look back in time and examine past violations:

    I can also view a device’s history of violations:

    As you can see, Device Defender lets me know what is going on with my IoT devices, raises alarms when something suspicious occurs, and helps me to track down past issues, all from within the AWS Management Console.

    Available Now
    AWS IoT Device Defender is available today in the US East (N. Virginia), US West (Oregon), US East (Ohio), EU (Ireland), EU (Frankfurt), EU (London), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Seoul) Regions and you can start using it today. Pricing for audits is per-device, per-month; pricing for monitored datapoints is per datapoint, both with generous allocations in the AWS Free Tier (see the AWS IoT Device Defender page for more info).

    Jeff;

    High Efficiency, Open-Sourced MPPT Solar Charger

    The content below is taken from the original ( High Efficiency, Open-Sourced MPPT Solar Charger), to continue reading please visit the site. Remember to respect the Author & Copyright.

    A few years ago, [Lukas Fässler] needed a solar charge controller and made his own, which he has been improving ever since. The design is now mature, and the High Efficiency MPPT Solar Charger is full of features like data logging, boasts a 97% efficiency over a range of 1 to 75 Watts, and can be used as a standalone unit or incorporated as a module into other systems. One thing that became clear to [Lukas] during the process was that a highly efficient, feature-rich, open-sourced hardware solution for charge controllers just didn’t exist, at least not with the features he had in mind.

    Data logging and high efficiency are important for a charge controller, because batteries vary in their characteristics as they recharge, and the power generated by things like solar panels varies under different conditions and loads. An MPPT (Maximum Power Point Tracking) charger is a smart unit optimized to handle all these changing conditions for maximum efficiency. We went into some detail on MPPT in the past, and after three years in development creating a modular and configurable design, [Lukas] hopes no one will have to re-invent the wheel when it comes to charge controllers.
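
    To make the MPPT idea concrete, here is a toy perturb-and-observe loop, the simplest common MPPT strategy, written in Python for readability. It is purely illustrative and not taken from [Lukas]’s firmware; read_voltage, read_current, and set_duty are hypothetical hooks into the converter hardware.

        # Toy perturb-and-observe MPPT loop (illustrative only, not from the
        # project's firmware). read_voltage(), read_current(), and set_duty()
        # are hypothetical hardware hooks.
        import time

        def mppt_loop(read_voltage, read_current, set_duty,
                      duty=0.5, step=0.01, interval=0.1):
            last_power = 0.0
            direction = +1
            while True:
                set_duty(duty)
                time.sleep(interval)            # let the converter settle
                power = read_voltage() * read_current()
                if power < last_power:
                    direction = -direction      # stepped past the peak: reverse
                duty = min(max(duty + direction * step, 0.05), 0.95)
                last_power = power

    The loop nudges the converter’s duty cycle one step at a time and keeps moving in whichever direction increases harvested power, which is also why logging voltage, current, and duty cycle is so useful when tuning a real controller.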