Deploying Grafana for production deployments on Azure

The content below is taken from the original ( Deploying Grafana for production deployments on Azure), to continue reading please visit the site. Remember to respect the Author & Copyright.

This blog is co-authored by Nick Lopez, Technical Advisor at Microsoft.

Grafana is one of the most popular and widely used open source tools for visualizing time series metrics, and it has quickly become the preferred visualization tool for developers and operations teams monitoring server and application metrics. Grafana dashboards enable operations teams to quickly monitor and react to the performance, availability, and overall health of a service. You can now also use it to monitor Azure services and applications by leveraging the Azure Monitor data source plugin, built by Grafana Labs. This plugin enables you to include all metrics from Azure Monitor and Application Insights in your Grafana dashboards. If you would like to quickly set up and test Grafana with Azure Monitor and Application Insights metrics, we recommend you refer to the Azure Monitor documentation.

Grafana dashboard using Azure Monitor as a data source to display metrics for Contoso dev environment.

 

The Grafana server image in the Azure Marketplace provides a great QuickStart deployment experience. The image provisions a virtual machine (VM) with a pre-installed Grafana dashboard server, a SQLite database, and the Azure plugin. The default single-VM deployment is great for a proof of concept and testing, but to keep monitoring dashboards for your critical applications and services highly available, you need to plan for high availability of the Grafana deployment itself on Azure. The following is a proposed and proven architecture for setting up Grafana on Azure for high availability and security.

Setting up Grafana for production deployments

Grafana high availability deployment architecture on Azure.

Grafana Labs recommends using a separate, highly available, shared MySQL server when setting up Grafana for high availability. Azure Database for MySQL and Azure Database for MariaDB are managed relational database services based on the community editions of the MySQL and MariaDB database engines. The services provide high availability at no additional cost, predictable performance, elastic scalability, automated backups, and enterprise-grade security with secure sockets layer (SSL) support, encryption at rest, advanced threat protection, and VNet service endpoint support. Using a remote configuration database on Azure Database for MySQL or Azure Database for MariaDB allows for the horizontal scalability and high availability of Grafana instances required for enterprise production deployments.

Leveraging Bitnami Multi-Tier Grafana templates for production deployments

Bitnami lets you deploy a multi-node, production ready Grafana solution from the Azure Marketplace with just a few clicks. This solution uses several Grafana nodes with a pre-configured load balancer and Azure Database for MariaDB for data storage. The number of nodes can be chosen at deployment time depending on your requirements. Communication between the nodes and the Azure Database for MariaDB service is also encrypted with SSL to ensure security.

A key feature of Bitnami’s Grafana solution is that it comes pre-configured to provide a fault-tolerant deployment. Requests are handled by the load balancer, which continuously tests nodes to check if they are alive and automatically reroutes requests if a node fails. Data (including session data) is stored in the Azure Database for MariaDB and not on the individual nodes. This approach improves performance and protects against data loss due to node failure.

For new deployments, you can launch Bitnami Grafana Multi-Tier through the Azure Marketplace!

Configuring existing installations of Grafana to use Azure Database for MySQL service

If you have an existing installation of Grafana that you would like to configure for high availability, the following steps demonstrate configuring a Grafana instance to use an Azure Database for MySQL server as the backend configuration database. In this walkthrough, we use an Ubuntu server with Grafana installed and configure Azure Database for MySQL as the remote database for the Grafana setup.

  1. Create an Azure Database for MySQL server in the General Purpose tier, which is recommended for production deployments. If you are not familiar with database server creation, you can read the QuickStart tutorial to familiarize yourself with the workflow. If you are using the Azure CLI, you can simply set it up using az mysql up.
  2. If you have already installed Grafana on the Ubuntu server, you’ll need to edit the grafana.ini file to add the Azure Database for MySQL parameters described in the Grafana documentation on database settings (a sample configuration is sketched after this list). Please note: the username must be in the format user@server due to the server identification method of Azure Database for MySQL. Other formats will cause connections to fail.
  3. Azure Database for MySQL supports SSL connections. For enterprise production deployments, it is recommended to always enforce SSL. Additional information on setting up SSL can be found in the Azure Database for MySQL documentation. Most modern installations of Ubuntu will already have the necessary Baltimore CyberTrust CA certificate installed in /etc/ssl/certs. If needed, you can download the SSL certificate authority (CA) used for Azure Database for MySQL from this location. The SSL mode can be provided in two forms, skip-verify and true. With skip-verify, the certificate provided is not validated, but the connection is still encrypted. With true, the certificate provided is validated against the Baltimore CA, which is useful for preventing “man in the middle” attacks. Note that in both cases, Grafana expects the certificate authority (CA) path to be provided.
  4. Next, you have the option to store user sessions in the Azure Database for MySQL in the session table. This is configured in the same grafana.ini file under the session section and is beneficial in load-balanced environments, where it maintains sessions for users accessing Grafana through any of the instances. In the provider_config parameter, we need to include the user@server username, the password, the full server name, and the TLS/SSL mode, which again can be true or skip-verify. Note that the connection string format is that of the go-sql-driver/mysql driver, where more documentation is available.
  5. After this is all set, you should be able to start Grafana and verify the status with the commands below:
  • systemctl start grafana-server
  • systemctl status grafana-server
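
As a rough guide, the relevant grafana.ini sections end up looking something like the sketch below. The server name (mydemoserver), database name, user, and password are placeholders for illustration, and the ssl_mode, CA certificate path, and provider_config values should be double-checked against the Grafana and go-sql-driver/mysql documentation for the versions you run:

[database]
type = mysql
host = mydemoserver.mysql.database.azure.com:3306
name = grafana
user = grafanauser@mydemoserver
password = yourpassword
ssl_mode = true
ca_cert_path = /etc/ssl/certs/Baltimore_CyberTrust_Root.pem

[session]
provider = mysql
provider_config = grafanauser@mydemoserver:yourpassword@tcp(mydemoserver.mysql.database.azure.com:3306)/grafana?tls=true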

If you see any errors or issues, the default logging path is /var/log/grafana/, where you can confirm what is preventing the startup. The following is a sample error where the username was provided as just user rather than user@server.

lvl=eror msg="Server shutdown" logger=server reason="Service init failed: Migration failed err: Error 9999: An internal error has occurred. Please retry or report your issues.

Otherwise, you should see the service in an OK status, and on the initial startup Grafana will create all the necessary tables in the Azure Database for MySQL database.

Key takeaways

  • The single-VM setup for Grafana is great for a quick start, testing, and a proof of concept, but it may not be suitable for production deployments.
  • For enterprise production deployments of Grafana, separating the configuration database onto a dedicated server enables high availability and scalability.
  • The Bitnami Grafana Multi-Tier template provides a production-ready template that leverages a scale-out design and security to provision Grafana with a few clicks at no extra cost.
  • Using managed database services like Azure Database for MySQL for production deployments provides built-in high availability, scalability, and enterprise security for the database repository.

Additional resources

Get started with Bitnami Multi-Tier Solutions on Microsoft Azure

Monitor Azure services and applications using Grafana

Monitor your Azure services in Grafana

Setting up Grafana for high availability

Azure Database for MySQL documentation

Acknowledgments

Special thanks to Shau Phang, Diana Putnam, Anitah Cantele, and the Bitnami team for their contributions to the blog post.

The Wide World of Microsoft Windows on AWS

The content below is taken from the original ( The Wide World of Microsoft Windows on AWS), to continue reading please visit the site. Remember to respect the Author & Copyright.

You have been able to run Microsoft Windows on AWS since 2008 (my ancient post, Big Day for Amazon EC2: Production, SLA, Windows, and 4 New Capabilities, shows you just how far AWS has come in a little over a decade). According to IDC, AWS has nearly twice as many Windows Server instances in the cloud as the next largest cloud provider.

Today, we believe that AWS is the best place to run Windows and Windows applications in the cloud. You can run the full Windows stack on AWS, including Active Directory, SQL Server, and System Center, while taking advantage of 61 Availability Zones across 20 AWS Regions. You can run existing .NET applications, and you can use Visual Studio or VS Code to build new, cloud-native Windows applications using the AWS SDK for .NET.

Wide World of Windows
Starting from this amazing diagram drawn by my colleague Jerry Hargrove, I’d like to explore the Windows-on-AWS ecosystem in detail:

1 – SQL Server Upgrades
AWS provides first-class support for SQL Server, encompassing all four editions (Express, Web, Standard, and Enterprise), with multiple versions of each edition. This wide-ranging support has helped SQL Server become one of the most popular Windows workloads on AWS.

The SQL Server Upgrade Tool (an AWS Systems Manager script) makes it easy for you to upgrade an EC2 instance that is running SQL Server 2008 R2 SP3 to SQL Server 2016. The tool creates an AMI from a running instance, upgrades the AMI to SQL Server 2016, and launches the new AMI. To learn more, read about the AWSEC2-CloneInstanceAndUpgradeSQLServer action.
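
If you prefer to start the upgrade from the command line instead of the console, a hedged sketch using the AWS Tools for PowerShell is shown below. Start-SSMAutomationExecution is the standard cmdlet for running an automation document, but the parameter names expected by this particular document (the instance ID and so on) are assumptions you should verify against the document description in the Systems Manager console:

# Minimal sketch - confirm the document's required parameters before running.
Import-Module AWSPowerShell.NetCore

Start-SSMAutomationExecution `
    -DocumentName 'AWSEC2-CloneInstanceAndUpgradeSQLServer' `
    -Parameter @{
        InstanceId = 'i-0123456789abcdef0'   # hypothetical instance running SQL Server 2008 R2 SP3
    }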

Amazon RDS makes it easy for you to upgrade your DB Instances to new major or minor versions of SQL Server. The upgrade is performed in-place and can be initiated with a couple of clicks. For example, if you are currently running SQL Server 2014, you have the following upgrades available:

You can also opt-in to automatic upgrades to new minor versions that take place within your preferred maintenance window:

Before you upgrade a production DB Instance, you can create a snapshot backup, use it to create a test DB Instance, upgrade that instance to the desired new version, and perform acceptance testing. To learn more about upgrades, read Upgrading the Microsoft SQL Server DB Engine.

2 – SQL Server on Linux
If your organization prefers Linux, you can run SQL Server on Ubuntu, Amazon Linux 2, or Red Hat Enterprise Linux using our License Included (LI) Amazon Machine Images. Read the most recent launch announcement or search for the AMIs in AWS Marketplace using the EC2 Launch Instance Wizard:

This is a very cost-effective option since you do not need to pay for Windows licenses.

You can use the new re-platforming tool (another AWS Systems Manager script) to move your existing SQL Server databases (2008 and above, either in the cloud or on-premises) from Windows to Linux.

3 – Always-On Availability Groups (Amazon RDS for SQL Server)
If you are running enterprise-grade production workloads on Amazon RDS (our managed database service), you should definitely enable this feature! It enhances availability and durability by replicating your database between two AWS Availability Zones, with a primary instance in one and a hot standby in another, with fast, automatic failover in the event of planned maintenance or a service disruption. You can enable this option for an existing DB Instance, and you can also specify it when you create a new one:
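
If you would rather script the change than click through the console, a hedged sketch using the AWS Tools for PowerShell is shown below; Edit-RDSDBInstance wraps the ModifyDBInstance API, and the instance identifier is a placeholder:

# Minimal sketch - enable Multi-AZ on an existing SQL Server DB Instance.
Import-Module AWSPowerShell.NetCore

Edit-RDSDBInstance `
    -DBInstanceIdentifier 'my-sqlserver-instance' `
    -MultiAZ $true `
    -ApplyImmediately $true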

To learn more, read Multi-AZ Deployments Using Microsoft SQL Mirroring or Always On.

4 – Lambda Support
Let’s talk about some features for developers!

Launched in 2014, and the subject of continuous innovation ever since, AWS Lambda lets you run code in the cloud without having to own, manage, or even think about servers. You can choose from several .NET Core runtimes for your Lambda functions, and then write your code in either C# or PowerShell:

To learn more, read Working with C# and Working with PowerShell in the AWS Lambda Developer Guide. Your code has access to the full set of AWS services, and can make use of the AWS SDK for .NET; read the Developing .NET Core AWS Lambda Functions post for more info.
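
To make the PowerShell option a little more concrete, here is a minimal, hypothetical Lambda handler script. The $LambdaInput and $LambdaContext variables are supplied by the PowerShell Lambda runtime, while the module version and bucket name below are placeholders:

# Minimal sketch of a PowerShell Lambda function (e.g. MyFunction.ps1).
# The module version is a placeholder - pin it to the version you actually deploy.
#Requires -Modules @{ModuleName='AWSPowerShell.NetCore'; ModuleVersion='3.3.0.0'}

Write-Host "Handling request $($LambdaContext.AwsRequestId)"

# SDK cmdlets work just as they do locally; the bucket name is hypothetical.
Get-S3Object -BucketName 'my-example-bucket' |
    Select-Object -First 5 -Property Key, Size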

5 – CDK for .NET (Developer Preview)
The Developer Preview of the AWS CDK (Cloud Development Kit) for .NET lets you define your cloud infrastructure as code and then deploy it using AWS CloudFormation. For example, this code (stolen from this post) will generate a template that creates an Amazon Simple Queue Service (SQS) queue and an Amazon Simple Notification Service (SNS) topic:

var queue = new Queue(this, "MyFirstQueue", new QueueProps
{
    VisibilityTimeoutSec = 300
});
var topic = new Topic(this, "MyFirstTopic", new TopicProps
{
    DisplayName = "My First Topic Yeah"
});

6 – EC2 AMIs for .NET Core
If you are building Linux applications that make use of .NET Core, you can use our Amazon Linux 2 and Ubuntu AMIs. With .NET Core, PowerShell Core, and the AWS Command Line Interface (CLI) preinstalled, you’ll be up and running, and ready to deploy applications, in minutes. You can find the AMIs by searching for core when you launch an EC2 instance:

7 – .NET Dev Center
The AWS .NET Dev Center contains materials that will help you learn how to design, build, and run .NET applications on AWS. You’ll find articles, sample code, 10-minute tutorials, projects, and lots more:

8 – AWS License Manager
We want to help you to manage and optimize your Windows and SQL Server applications in new ways. For example,  AWS License Manager helps you to manage the licenses for the software that you run in the cloud or on-premises (read my post, New AWS License Manager – Manage Software Licenses and Enforce Licensing Rules, to learn more). You can create custom rules that emulate those in your licensing agreements, and enforce them when an EC2 instance is launched:

The License Manager also provides you with information on license utilization so that you can fine-tune your license portfolio, possibly saving some money in the process!

9 – Import, Export, and Migration
You have lots of options and choices when it comes to moving your code and data into and out of AWS. Here’s a very brief summary:

TSO Logic – This new member of the AWS family (we acquired the company earlier this year) offers an analytics solution that helps you to plan, optimize, and save money as you make your journey to the cloud.

VM Import/Export – This service allows you to import existing virtual machine images to EC2 instances, and export them back to your on-premises environment. Read Importing a VM as an Image Using VM Import/Export to learn more.

AWS Snowball – This service lets you move petabyte scale data sets into and out of AWS. If you are at exabyte scale, check out the AWS Snowmobile.

AWS Migration Acceleration Program – This program encompasses AWS Professional Services and teams from our partners. It is based on a three step migration model that includes a readiness assessment, a planning phase, and the actual migration.

10 – 21st Century Applications
AWS gives you a full-featured, rock-solid foundation and a rich set of services so that you can build tomorrow’s applications today! You can go serverless with the .NET Core support in Lambda, make use of our Deep Learning AMIs for Windows, host containerized apps on Amazon ECS, AWS Fargate, or Amazon EKS, and write code that makes use of the latest AI-powered services. Your applications can make use of recommendations, forecasting, image analysis, video analysis, text analytics, document analysis, text to speech, translation, transcription, and more.

11 – AWS Integration
Your existing Windows Applications, both cloud-based and on-premises, can make use of Windows file system and directory services within AWS:

Amazon FSx for Windows Server – This fully managed native Windows file system is compatible with the SMB protocol and NTFS. It provides shared file storage for Windows applications, backed by SSD storage for fast & reliable performance. To learn more, read my blog post.

AWS Directory Service – Your directory-aware workloads and AWS Enterprise IT applications can use this managed Active Directory that runs in the AWS Cloud.

Join our Team
If you would like to build, manage, or market new AWS offerings for the Windows market, be sure to check out our current openings. Here’s a sampling:

Senior Digital Campaign Marketing Manager – Own the digital tactics for product awareness and run adoption campaigns.

Senior Product Marketing Manager – Drive communications and marketing, create compelling content, and build awareness.

Developer Advocate – Drive adoption and community engagement for SQL Server on EC2.

Learn More
Our freshly updated Windows on AWS and SQL Server on AWS pages contain case studies, quick starts, and lots of other useful information.

Jeff;

How Blockchain Will Help Banks Tap New Markets in MENA

The content below is taken from the original ( How Blockchain Will Help Banks Tap New Markets in MENA), to continue reading please visit the site. Remember to respect the Author & Copyright.

One of the most striking aspects of the oil-rich Gulf states is their large migrant populations: 88 percent of people in the United Arab Emirates (UAE), 75 percent in Qatar and 74 percent in Kuwait are foreign-born. The majority of these immigrants are workers on temporary visas, doing jobs that their predominantly wealthy hosts won’t do. Many leave their families behind but remain the primary source of income for those dependents back home.

The Gulf states are largely cash economies. This is good for migrants whose temporary and low-paid status makes it difficult and expensive to open and maintain bank accounts. But lack of access to regular banking facilities forces them to use slow and expensive cross-border remittance services when it’s time to send money home.

Cross-border cash transfers can eat up as much as 9 percent of the amount sent, which makes it as profitable for the service providers as it is a bad deal for customers. It’s no surprise that fintech companies are using blockchain technology, mobile devices, social network plug-ins and chat services to disrupt the existing remittances process.

Blockchain Is Disrupting Traditional Remittances and Banks

According to the recently published “Remittance Market & Blockchain Technology” report by Blockdata, blockchain-based transactions are on average 388 times faster and 127 times cheaper than traditional remittances. By slashing the usual five-day process to a matter of minutes, while leaving users with significantly more money in their pockets, blockchain services are attracting the attention of migrants and native-born customers in the Gulf and beyond.

UAE-based mobile payments service Beam Wallet has already reached one-sixth of the country’s population in just two years. The company has processed more than $250 million of small value payments for groceries, cups of coffee and even fuel for their cars.

While traditional banks are often uninterested in serving these customers, Beam is demonstrating the potential of transforming cash-based economies into transaction-fee generators with fast and user-friendly mobile payment services. Cross-border remittances are an even bigger opportunity.

Replacing an Inefficient and Expensive Process

In 2018, the remittance market to developing countries was $528 billion, while the global market is expected to rise to $715 billion in 2019. This potential source of value is not confined to the Middle East. The US remains the top remittance-sending country in the world, while people in Germany and Switzerland sent nearly $50 billion across borders in 2017.

The current cross-border remittance process for banks involves dealing with a complicated correspondent banking network, tying up capital in prefunded nostro accounts and using outdated and expensive SWIFT technology. It means that enabling a migrant worker to send $100 to her family is not a priority for banks, no matter how big the overall market.

If the process was more effective and scalable, banks could easily unlock access to billions of dollars of new revenues. Blockchain technology enables direct connection to a recipient bank, which dramatically reduces the cost and increases the speed of processing, while removing the need for pre-funded accounts. All a bank has to do is tap into these new networks, which already exist and are growing every day.

Generating Greater Lifetime Value

Expanding their foothold in remittances is just the beginning for banks. Migrants and low-income households with access to bank accounts often become wealthier. One study shows that families given a savings account had 25 percent more monetary assets after a year than those without one. In Kenya, using mobile services to manage their money instead of cash helped 185,000 women switch from subsistence agriculture to business jobs with better prospects.

More financial stability allows people to become candidates for additional financial products and services like loans and credit cards, boosting their lifetime value for banks. Cross-border remittances can act as the banks’ entry strategy to winning new customers and becoming a top-of-wallet service that will turn a single-payment customer into a multi-transactional money-spinner.  

Early Blockchain Adopters Will Thrive

Migrant communities—whether in the Middle East, the US or Europe—are typically tight-knit. The products and services that made their lives easier and better are passed on by word of mouth and become the go-to option across the community. Innovative new blockchain services have already raised customer expectations about how long remittances take, how much they cost and how easy they are to carry out.

If traditional banks do not move fast enough and become part of the blockchain revolution, they will be left behind. Forward-thinking banks who act to provide all customers with the kind of accessible, user-friendly, cheap, fast and transparent experiences they find elsewhere, will open up new markets, reach more customers and generate greater value for everyone.

About the Author

Roel Wolfert is Managing Partner at VGRIP and has more than 20 years of international experience on the edge of business and technology in retail & transactional banking, payments, cards, management consulting, IT and venture capital.

The post How Blockchain Will Help Banks Tap New Markets in MENA appeared first on Ripple.

Making Google Cloud the best place to run your Microsoft Windows applications

The content below is taken from the original ( Making Google Cloud the best place to run your Microsoft Windows applications), to continue reading please visit the site. Remember to respect the Author & Copyright.

Since launching Google Compute Engine in 2015, we have been focused on building a cloud on which you can run all your applications—including Microsoft applications. Today, we’re excited to announce new features and services to help your Windows workloads take advantage of GCP’s leading infrastructure, data analytics and open-source innovations.

These enhancements make it a great time to start moving your enterprise Windows workloads to Google Cloud and take advantage of its leading performance and technological innovation, while preserving investments in software licenses. Our infrastructure becomes your infrastructure, and our innovation becomes your innovation.

Bring your own licenses
For your Microsoft workloads, in addition to purchasing on-demand licenses from Google Cloud, you now have the flexibility to bring your existing licenses to GCP. Using sole-tenant nodes you can launch your existing Windows workloads onto physical Compute Engine servers dedicated exclusively to you. With sole tenants, you have visibility into the underlying characteristics of your machine to determine license usage for your reporting and compliance needs.

A seamless migration
We’re also making it easier for you to migrate Microsoft workloads into GCP. Velostrata, our streaming migration tool, will be updated in a couple of weeks to give you the ability to specifically tag Microsoft workloads that require sole tenancy, and to automatically apply existing licenses. This is in addition to all the other Velostrata migration capabilities, such as built-in testing, instance rightsizing recommendations, post-migration rollback when needed, and booting apps in the cloud in as little as a few minutes. Best of all, you can take advantage of Velostrata to migrate your VMs or servers at no cost.

Active Directory, made easier
But just because you’ve decided to migrate Microsoft workloads into GCP doesn’t mean you need to do everything all at once. If you use Microsoft Active Directory (AD) to manage users and access to traditional applications, you’ll soon be able to use Managed Service for Microsoft Active Directory (AD), a highly available, hardened Google Cloud service running Microsoft AD, that helps you to manage your cloud-based AD-dependent workloads, automate AD server maintenance and security configuration, and connect your on-premises AD domain to the managed service.

Managed Service for Microsoft AD admin experience

Sign up to be notified when Managed Service for Microsoft AD becomes available in beta, and learn more about other identity and access management enhancements that we announced this week.

Managing Microsoft SQL Server for you
We want to make running Microsoft applications on GCP as easy for you as possible. As such, we’ve expanded Cloud SQL, our fully managed relational database server, to support Microsoft SQL Server. Currently in alpha, you can now choose to run SQL Server yourself on Google Compute Engine, or let us manage it for you on Cloud SQL, where we’ll take care of backups, replication, patches and updates.

Your future on Google Cloud
IT organizations come in all shapes and sizes, and there’s no end to the variety of the workloads you run. Whatever your environment looks like, we believe that it can benefit from Google Cloud’s advanced infrastructure and innovation. Click here to learn more about running your Windows workloads on GCP.

Understanding GCP service accounts: three common use-cases

The content below is taken from the original ( Understanding GCP service accounts: three common use-cases), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you’re building applications on Google Cloud Platform (GCP), you’re probably familiar with the concept of a service account, a special Google account that belongs to your application or a virtual machine, and which can be treated as an identity and as a resource. Depending on your use case, there are different ways to manage service accounts and to give them access to resources. In this post we will look at some of those common use cases, and help you determine the appropriate operational model for managing your service accounts.

Use case 1: Web application accessing GCP resources


Imagine your users are accessing a web app to which they are authorized via Cloud Identity-Aware Proxy (IAP). They do not require direct access to the underlying GCP resources—just to the web app that utilizes the GCP resources. The web app uses a service account to gain permissions to access GCP services, for example, Datastore. In this case the service account has a 1:1 map to the web app—it’s the identity of the web app. To get started, you create the service account in the GCP project that hosts the web application, and you grant the permissions your app needs to access GCP resources to the service account. Finally, configure your app to use the service account credentials.
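
To make this concrete, the first two steps map to gcloud commands roughly like the sketch below; the project ID, service account name, and the Datastore role are illustrative assumptions for this scenario:

# Create the web app's service account (names are hypothetical).
gcloud iam service-accounts create web-app-sa --display-name="Web app service account"

# Grant it only what the app needs, for example Datastore access in the hosting project.
gcloud projects add-iam-policy-binding my-web-app-project \
    --member="serviceAccount:web-app-sa@my-web-app-project.iam.gserviceaccount.com" \
    --role="roles/datastore.user"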

Use case 2: Cross-charging BigQuery usage to different cost centers


In this scenario, departmental users query a shared BigQuery dataset using a custom-built application. Because the queries must be cross-charged to the users’ cost center, the application runs on a VM with a service account that has the appropriate permissions to make queries against the BigQuery dataset.

Each department has a set of projects that are labelled such that the resources used in that project appear in the billing exports. Each department also has to run the application from their assigned project so that the queries run against BigQuery can be appropriately cross-charged.

To configure this, in each of the departments’ projects that execute the queries, assign the application’s service account the IAM permissions required to run queries against the BigQuery datasets.
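
As a hedged sketch, the per-project grant could look like the following; the project and service account names are assumptions, and roles/bigquery.jobUser is one reasonable role for allowing query jobs (access to the shared dataset itself is granted separately in the project that hosts it):

# Run once for each department project that executes queries (names are hypothetical).
gcloud projects add-iam-policy-binding dept-a-project \
    --member="serviceAccount:bq-app-sa@shared-services-project.iam.gserviceaccount.com" \
    --role="roles/bigquery.jobUser"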

For more information on configuring the permissions for this scenario, see this resource.

Use case 3: Managing service accounts used for operational and admin activities

As a system administrator or operator responsible for managing a GCP environment, you want to centrally manage common operations such as provisioning environments, auditing, etc., throughout your GCP environment.


In this case, you’ll need to create a variety of service accounts with the appropriate permissions to enable various tasks. These service accounts are likely to have elevated privileges, with permissions granted at the appropriate level in the resource hierarchy. As with all service accounts, follow best practices to prevent them from being exposed to unauthorized users. For example, you should add a project lien to the projects where these operational service accounts are created to help prevent the projects from being accidentally deleted.
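
For the lien specifically, a sketch of the command is shown below; at the time of writing the liens commands sit in the gcloud alpha track, and the project name and reason text are assumptions:

# Prevent accidental deletion of the project hosting the operational service accounts.
gcloud alpha resource-manager liens create \
    --project=ops-automation-project \
    --restrictions=resourcemanager.projects.delete \
    --reason="Hosts operational service accounts"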

Crazy for service accounts

As you can see from the use cases discussed above, one model does not fit all and you will need to adopt the appropriate operational model to fit your use case. We hope walking through these use cases helps you to think about where you logically should place your service accounts. To learn more about service accounts, try one of the following tutorials to see how to use service account credentials with the GCP compute service of your choice:

Google partners with Intel, HPE and Lenovo for hybrid cloud

The content below is taken from the original ( Google partners with Intel, HPE and Lenovo for hybrid cloud), to continue reading please visit the site. Remember to respect the Author & Copyright.

Still struggling to get its Google Cloud business out of single-digit market share, Google this week introduced new partnerships with Lenovo and Intel to help bolster its hybrid cloud offerings, both built on Google’s Kubernetes container technology.

At Google’s Next ’19 show this week, Intel and Google said they will collaborate on a new reference design for Google’s Anthos, based on the second-generation Xeon Scalable processors introduced last week and an optimized Kubernetes software stack, designed to deliver increased workload portability between public and private cloud environments.

As part of the Anthos announcement, Hewlett Packard Enterprise (HPE) said it has validated Anthos on its ProLiant servers, while Lenovo has done the same for its ThinkAgile platform. This solution will enable customers to get a consistent Kubernetes experience between Google Cloud and their on-premises HPE or Lenovo servers. No official word from Dell yet, but they can’t be far behind.

To read this article in full, please click here

Duck And Cover With This WiFi “Geiger Counter”

The content below is taken from the original ( Duck And Cover With This WiFi “Geiger Counter”), to continue reading please visit the site. Remember to respect the Author & Copyright.

There’s perhaps no sound more recognizable than the frantic clicking of a Geiger counter. Not because this is some post-apocalyptic world in which everyone is personally acquainted with the operation of said devices, but because it’s such a common effect used in many movies, TV shows, and video games. If somebody hears that noise, even if it doesn’t really make sense in context, they know things are about to get serious.

Capitalizing on this phenomenon, [Anton Haidai] has put together a quick hack which turns the ESP8266 into a “Geiger counter” for WiFi. Rather than detecting radiation, the gadget picks up the strongest nearby WiFi signal and starts clicking in response to signal strength. As the signal gets stronger, so does the clicking. While primarily a novelty, it’s an interesting idea that could potentially be useful for things like fox hunting.

The hardware is really about as simple as it gets, just a basic buzzer attached to one of the digital pins on a NodeMCU development board. This project is more of a proof of concept, but if it were to be developed further it would be interesting to see the electronics placed into a 3D printed replica of one of the old Civil Defense Geiger counters. Perhaps even integrating an analog gauge that can bounce around in response to signal strength.

Software-wise there is the option of locking onto one single network SSID or allowing the device to find the strongest network in the area. Even if you’re not in the market for a chirping WiFi detector, the code is a good example of how you can detect signal RSSI and act on it accordingly; a neat trick which might come in handy in a future project.

If you’re more interested in the real thing, we’ve got plenty of DIY Geiger counters in the archive for you to check out. From diminutive builds that can be mounted to the top of a 9V battery to high-tech solid state versions with touch screen interfaces, you should have plenty of inspiration if you’re looking to kit yourself out before your next drive through the Chernobyl Exclusion Zone.

Add Windows Defender Options as Cascading Right-Click Menu in Desktop

The content below is taken from the original ( Add Windows Defender Options as Cascading Right-Click Menu in Desktop), to continue reading please visit the site. Remember to respect the Author & Copyright.

@Echo Off
Cls & Color 1A
Cd %systemroot%\system32

:: Add Windows Defender Options as Cascading Right-Click Menu in Desktop

REM --> Check for permissions
Reg query "HKU\S-1-5-19\Environment"

REM --> If error flag set, we do not have admin.
if %errorlevel% NEQ 0 (
    ECHO **************************************
    ECHO Running Admin shell... Please wait...
    ECHO **************************************
    goto UACPrompt
) else (
    goto gotAdmin
)

:UACPrompt
echo Set UAC = CreateObject^("Shell.Application"^) > "%temp%\getadmin.vbs"
set params = "%*:"=""
echo UAC.ShellExecute "cmd.exe", "/c ""%~s0"" %params%", "", "runas", 1 >> "%temp%\getadmin.vbs"
"%temp%\getadmin.vbs"
del "%temp%\getadmin.vbs"
exit /B

:gotAdmin
Cls & Mode CON LINES=11 COLS=80 & Color 0D & Title Created By FreeBooter
Echo.
Echo.
Echo.
Echo Add Cascading Windows Defender Options in Desktop Context Menu (Y/N)?
Echo.
Echo.
Echo.
Set /p input= RESPONSE:
If /i Not %input%==Y (Goto :_Ex) Else (Goto :_Start)

:_Ex
If /i Not %input%==N (Goto :EOF) Else (Goto :_RegRestore)

:_Start
Reg.exe add "HKCR\DesktopBackground\Shell\WindowsDefender" /v "Icon" /t REG_SZ /d "C:\Program Files\Windows Defender\EppManifest.dll" /f > Nul
Reg.exe add "HKCR\DesktopBackground\Shell\WindowsDefender" /v "SubCommands" /t REG_SZ /d "WD-Open;WD-Settings;WD-Update;WD-QuickScan;WD-FullScan" /f > Nul
Reg.exe add "HKCR\DesktopBackground\Shell\WindowsDefender" /v "Muiverb" /t REG_SZ /d "Windows Defender" /f > Nul
Reg.exe add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\WD-FullScan" /ve /t REG_SZ /d "Full Scan" /f > Nul
Reg.exe add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\WD-FullScan" /v "Icon" /t REG_SZ /d "C:\Program Files\Windows Defender\EppManifest.dll" /f > Nul
Reg.exe add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\WD-FullScan\command" /ve /t REG_SZ /d "\"C:\Program Files\Windows Defender\MSASCui.exe\" -FullScan" /f > Nul
Reg.exe add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\WD-Open" /ve /t REG_SZ /d "Open" /f > Nul
Reg.exe add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\WD-Open" /v "Icon" /t REG_SZ /d "C:\Program Files\Windows Defender\EppManifest.dll" /f > Nul
Reg.exe add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\WD-Open\command" /ve /t REG_SZ /d "C:\Program Files\Windows Defender\MSASCui.exe" /f > Nul
Reg.exe add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\WD-QuickScan" /ve /t REG_SZ /d "Quick Scan" /f > Nul
Reg.exe add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\WD-QuickScan" /v "Icon" /t REG_SZ /d "C:\Program Files\Windows Defender\EppManifest.dll" /f > Nul
Reg.exe add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\WD-QuickScan\command" /ve /t REG_SZ /d "\"C:\Program Files\Windows Defender\MSASCui.exe\" -QuickScan" /f > Nul
Reg.exe add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\WD-Settings" /ve /t REG_SZ /d "Settings" /f > Nul
Reg.exe add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\WD-Settings" /v "Icon" /t REG_SZ /d "C:\Program Files\Windows Defender\EppManifest.dll" /f > Nul
Reg.exe add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\WD-Settings\command" /ve /t REG_SZ /d "explorer.exe ms-settings:" /f > Nul
Reg.exe add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\WD-Update" /ve /t REG_SZ /d "Update" /f > Nul
Reg.exe add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\WD-Update" /v "Icon" /t REG_SZ /d "C:\Program Files\Windows Defender\EppManifest.dll" /f > Nul
Reg.exe add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\WD-Update\command" /ve /t REG_SZ /d "\"C:\Program Files\Windows Defender\MSASCui.exe\" -Update" /f > Nul
Cls & Mode CON LINES=11 COLS=60 & Color 0D & Title Created By FreeBooter
Echo.
Echo.
Echo.
Echo.
Echo Adding Cascading Windows Defender Options
Echo.
Echo.
Echo.
Ping -n 6 localhost >Nul
Exit

:_RegRestore
Reg.exe delete "HKCR\DesktopBackground\Shell\WindowsDefender" /f > Nul
Reg.exe delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\WD-FullScan" /f > Nul
Reg.exe delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\WD-Open" /f > Nul
Reg.exe delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\WD-QuickScan" /f > Nul
Reg.exe delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\WD-Settings" /f > Nul
Reg.exe delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\WD-Update" /f > Nul
Cls & Mode CON LINES=11 COLS=60 & Color 0D & Title Created By FreeBooter
Echo.
Echo.
Echo.
Echo.
Echo Removing Cascading Windows Defender Options
Echo.
Echo.
Echo.
Ping -n 6 localhost >Nul
Exit

submitted by /u/FreeBooter_ to r/usefulscripts

[PowerShell] Dashimo – Conditional Formatting for HTML Tables and more

The content below is taken from the original ( [PowerShell] Dashimo – Conditional Formatting for HTML Tables and more), to continue reading please visit the site. Remember to respect the Author & Copyright.

Hi guys,

After a few days of work, I’m releasing an updated version of Dashimo.

New blog post with examples/screenshots and how to: https://evotec.xyz/dashimo-easy-table-conditional-formatting-and-more/

If you never have seen this before: https://evotec.xyz/meet-dashimo-powershell-generated-dashboard/ is an overview of what Dashimo is.

What’s new:

  • conditional formatting
  • more exposed parameters to Table
  • description of Autorefresh
  • Show parameter for dashboard

Conditional formatting in action…

$Process = Get-Process | Select-Object -First 30
Dashboard -Name 'Dashimo Test' -FilePath $PSScriptRoot\DashboardSimplestTableConditions.html -Show {
    Table -DataTable $Process -HideFooter {
        TableConditionalFormatting -Name 'ID' -ComparisonType number -Operator gt -Value 10000 -Color BlueViolet -Row
        TableConditionalFormatting -Name 'Name' -ComparisonType string -Operator eq -Value 'chrome' -Color White -BackgroundColor Crimson -Row
        TableConditionalFormatting -Name 'PriorityClass' -ComparisonType string -Operator eq -Value 'Idle' -Color White -BackgroundColor Green
    }
}

Easy example:

$Process = Get-Process | Select-Object -First 30
Dashboard -Name 'Dashimo Test' -FilePath $PSScriptRoot\DashboardSimplestTable.html -AutoRefresh 15 -Show {
    Table -DataTable $Process -DefaultSortIndex 4 -ScrollCollapse -HideFooter -Buttons @()
}

Complicated, still easy example:

$Process = Get-Process | Select-Object -First 30
$Process1 = Get-Process | Select-Object -First 5
$Process2 = Get-Process | Select-Object -First 10
$Process3 = Get-Process | Select-Object -First 10
Dashboard -Name 'Dashimo Test' -FilePath $PSScriptRoot\DashboardEasy.html -Show {
    Tab -Name 'First tab' {
        Section -Name 'Test' {
            Table -DataTable $Process
        }
        Section -Name 'Test2' {
            Panel {
                Table -DataTable $Process1
            }
            Panel {
                Table -DataTable $Process1
            }
        }
        Section -Name 'Test3' {
            Table -DataTable $Process -DefaultSortColumn 'Id'
        }
    }
    Tab -Name 'second tab' {
        Panel {
            Table -DataTable $Process2
        }
        Panel {
            Table -DataTable $Process2
        }
        Panel {
            Table -DataTable $Process3 -DefaultSortIndex 4
        }
    }
}

Enjoy and hope you like this one.

submitted by /u/MadBoyEvo to r/usefulscripts

Windows 10 v1903 April 2019 Update New Features List

The content below is taken from the original ( Windows 10 v1903 April 2019 Update New Features List), to continue reading please visit the site. Remember to respect the Author & Copyright.

New Features in Windows 10 v1903

Microsoft is almost ready to roll out the Windows 10 v1903 feature update. This update will bring some significant changes, security enhancements, and UI improvements. This release is also called 19H1 or the Windows 10 April 2019 Update. Here is the […]

This post Windows 10 v1903 April 2019 Update New Features List is from TheWindowsClub.com.

Data center fiber to jump to 800 gigabits in 2019

The content below is taken from the original ( Data center fiber to jump to 800 gigabits in 2019), to continue reading please visit the site. Remember to respect the Author & Copyright.

The upper limits on fiber capacity haven’t been reached just yet. Two announcements made around an optical-fiber conference and trade show in San Diego recently indicate continued progress in squeezing more data into fiber.

In the first announcement, researchers say they’ve obtained 26.2 terabits per second over the roughly 4,000 mile-long trans-Atlantic MAREA cable, in an experiment; and in the second, networking company Ciena says it will start deliveries of an 800 gigabit-per-second, single wavelength light throughput system in Q3 2019.

High-speed laser

MAREA, translated as “tide” in Spanish, is the Telefónica-operated cable running between Virginia Beach, Va., and Bilbao in Spain. The fiber cable, initiated a year ago, is designed to handle 160 terabits of data per second through its eight 20-terabit pairs. Each one of those pairs is thus big enough to carry 4 million high-definition videos at the same time, network-provider Infinera explains in an Optical Fiber Conference and Exhibition published press release.

To read this article in full, please click here

The Morning After: Lamborghini supercomputer

The content below is taken from the original ( The Morning After: Lamborghini supercomputer), to continue reading please visit the site. Remember to respect the Author & Copyright.

Hey, good morning! You look fabulous.

This morning, we're mulling over the real problem with mobile gaming and taking a seat behind the wheel of a Huracán for some computer-aided drifting. Also, Anthem got a big update, and Samsung's mid-rang…

[PowerShell] Backing up Bitlocker Keys and LAPS passwords from Active Directory

The content below is taken from the original ( [PowerShell] Backing up Bitlocker Keys and LAPS passwords from Active Directory), to continue reading please visit the site. Remember to respect the Author & Copyright.

submitted by /u/MadBoyEvo to r/usefulscripts

NEW: Custom Hands-On Labs for Azure and Google Cloud Platform

The content below is taken from the original ( NEW: Custom Hands-On Labs for Azure and Google Cloud Platform), to continue reading please visit the site. Remember to respect the Author & Copyright.

Harvard Business Review recently estimated that some 90% of corporate training never gets applied on the job. Given the $200B training industry, that is a staggering amount of waste. One reason for the disconnect? Lack of context.

Cloud Academy’s platform was built to make it extraordinarily easy for organizations to add context to our out-of-the-box training library. Using Content Engine™, hundreds of enterprise organizations have extended Cloud Academy Learning Paths with Custom Resources, published assignable Training Plans, and built their own certification Exams to test staff on the subjects essential to their specific job role.

Just a few weeks ago, we announced Custom Hands-on Labs for Amazon Web Services, enabling managers and the cloud center of excellence to quickly build completely custom Labs that are highly specific to their teams’ needs for the AWS platform and the technologies in use at their organization.

We thought the ability to create custom lab-based, interactive training experiences in a highly specific way would be popular. But, we had no idea the response would be so overwhelmingly positive.

So today, we are excited to announce new Custom Hands-on Labs support for both Microsoft Azure and Google Cloud Platform.

This means that the same process you have used to create and publish a customized Hands-on Lab for AWS can also be used for Azure and Google Cloud. The team has made it incredibly easy to publish a Custom Hands-on Lab in a matter of minutes.

How to Build Custom Hands-on Labs for Azure and Google Cloud Platform

Just navigate to Content Engine for Labs and select a platform. You can select a base environment and browse our library of pre-built Labs steps at your disposal.

Custom Azure and GCP Hands-on Labs

 

You can even insert your own steps to, for example, leverage an internal asset or offer additional guidance about how a service is used (or not used!) across your team or organization.

Custom Azure Lab Steps

 

Finally, publish the Lab by selecting who should see it and place it in a Library category so that end users can easily find it when they’re logged into Cloud Academy.

Custom Hands-on Labs for Azure and GCP

At Cloud Academy, we are focused on delivering innovative, enterprise-ready training technology that enables organizations to continuously improve the way they do business. We help our customers build practitioners, not paper tigers. Expect Cloud Academy to continue to be the first to market with Lab innovations and ways for you to easily contextualize training to your actual production environment.

If you aren’t already on this journey to deliver technology training in an effective way, we hope you join us. Schedule a demo with a member of our team today and see how you can transform your organization’s training strategy.

For those on the journey with us, thank you. If you have questions about building Custom Hands-on Labs for your teams, please speak with your Customer Success Manager.

The post NEW: Custom Hands-On Labs for Azure and Google Cloud Platform appeared first on Cloud Academy.

TomTom’s new GPS uses IFTTT to interact with your smart home

The content below is taken from the original ( TomTom’s new GPS uses IFTTT to interact with your smart home), to continue reading please visit the site. Remember to respect the Author & Copyright.

TomTom is mostly focusing on driverless navigation after stepping away from wearables and action cams. However, it still makes consumer GPS units, and to keep up with smartphones, has unveiled the TomTom Go Premium with IFTTT home automation tech bui…

Analysis of network connection data with Azure Monitor for virtual machines

The content below is taken from the original ( Analysis of network connection data with Azure Monitor for virtual machines), to continue reading please visit the site. Remember to respect the Author & Copyright.

Azure Monitor for virtual machines (VMs) collects network connection data that you can use to analyze the dependencies and network traffic of your VMs. You can analyze the number of live and failed connections, bytes sent and received, and the connection dependencies of your VMs down to the process level. If malicious connections are detected, the data will include information about those IP addresses and their threat level. The newly released VMBoundPort data set enables analysis of open ports and their connections for security purposes.

To begin analyzing this data, you will need to be on-boarded to Azure Monitor for VMs.

Workbooks

If you would like to start your analysis with a prebuilt, editable report, you can try out some of the Workbooks we ship with Azure Monitor for VMs. Once on-boarded, navigate to Azure Monitor and select Virtual Machines (preview) from the Insights menu section. From here, you can go to the Performance or Map tab and select the View Workbook link, which opens the Workbook gallery containing the following Workbooks that analyze our network data:

  • Connections overview
  • Failed connections
  • TCP traffic
  • Traffic comparison
  • Active ports
  • Open ports

These editable reports let you analyze your connection data for a single VM, groups of VMs, and virtual machine scale sets.

Log Analytics

If you want to use Log Analytics to analyze the data, you can navigate to Azure Monitor and select Logs to begin querying the data. The logs view will show the name of the workspace that has been selected and the schema within that workspace. Under the ServiceMap data type you will find two tables:

  • VMBoundPort
  • VMConnection

You can copy and paste the queries below into the Log Analytics query box to run them. Please note, you will need to edit a few of the examples below to provide the name of a computer that you want to query.

Screenshot of copying and pasting queries into the Log Analytics query box

Common queries

Review the count of ports open on your VMs, which is useful when assessing VM configuration and security vulnerabilities.

VMBoundPort
| where Ip != "127.0.0.1"
| summarize by Computer, Machine, Port, Protocol
| summarize OpenPorts=count() by Computer, Machine
| order by OpenPorts desc

List the bound ports on your VMs, which is useful when assessing VM configuration and security vulnerabilities.

VMBoundPort
| distinct Computer, Port, ProcessName

Analyze network activity by port to determine how your application or service is configured.

VMBoundPort
| where Ip != "127.0.0.1"
| summarize BytesSent=sum(BytesSent), BytesReceived=sum(BytesReceived), LinksEstablished=sum(LinksEstablished), LinksTerminated=sum(LinksTerminated), arg_max(TimeGenerated, LinksLive) by Machine, Computer, ProcessName, Ip, Port, IsWildcardBind
| project-away TimeGenerated
| order by Machine, Computer, Port, Ip, ProcessName

Bytes sent and received trends for your VMs.

VMConnection
| summarize sum(BytesSent), sum(BytesReceived) by bin(TimeGenerated,1hr), Computer
| order by Computer desc
//| limit 5000
| render timechart

If you have a lot of computers in your workspace, you may want to uncomment the limit statement in the example above. You can use the chart tools to view either bytes sent or received, and to filter down to specific computers.

Screenshot of chart tools being used to view Bytes sent or received

Connection failures over time, to determine if the failure rate is stable or changing.

VMConnection
| where Computer == <replace this with a computer name, e.g. ‘acme-demo’>
| extend bythehour = datetime_part("hour", TimeGenerated)
| project bythehour, LinksFailed
| summarize failCount = count() by bythehour
| sort by bythehour asc
| render timechart

Link status trends, to analyze the behavior and connection status of a machine.

VMConnection
| where Computer == <replace this with a computer name, e.g. ‘acme-demo’>
| summarize  dcount(LinksEstablished), dcount(LinksLive), dcount(LinksFailed), dcount(LinksTerminated) by bin(TimeGenerated, 1h)
| render timechart

Screenshot of line chart showing query results from the last 24 hours
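
The overview above also mentions the malicious connection information that is collected. A hedged example of surfacing it is shown below; the MaliciousIp, IndicatorThreatType, and Confidence columns are taken from the published VMConnection schema, so verify the names in your workspace before relying on the query.

// Connections flagged as malicious, with the most recent threat details per remote IP.
VMConnection
| where isnotempty(MaliciousIp)
| summarize Connections=count(), arg_max(TimeGenerated, IndicatorThreatType, Confidence) by Computer, MaliciousIp
| order by Connections desc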

Getting started with log queries in Azure Monitor for VMs

To learn more about Azure Monitor for VMs, please read our overview, “What is Azure Monitor for VMs (preview).” If you are already using Azure Monitor for VMs, you can find additional example queries in our documentation for querying data with Log Analytics.

Monitor Your Website’s Availability with Azure Application Insights

The content below is taken from the original ( Monitor Your Website’s Availability with Azure Application Insights), to continue reading please visit the site. Remember to respect the Author & Copyright.


Do you know the uptime statistics for your website? If not, Azure App Insights makes the statistics easy to find. App Insights can execute both basic and multi-step availability tests against any http or https URL. It doesn’t matter if your site is hosted on a VM, in Azure App Services, or on a Raspberry Pi in your closet. If the web site is public, AppInsights can run these availability tests in addition to collecting telemetry, metrics, and errors.

The first step is to create an App Insights resource in Azure. If you’ve never setup App Insights before, then you can follow the directions in Microsoft’s App Insights overview.

Once the resource is ready, go to the Availability blade and click the “Add Test” button. If you created App Insights in concert with an App Service resource, Azure might have already configured a default test. You can click on the Details tab to see the list of all existing tests.

There are two types of tests in App Insights. The first is the basic ping test. With the ping test you give AppInsights a single URL to examine. The other type of test is the multi-step web test where you give App Insights an XML .webtest file. If you’ve ever generated load or web tests using Visual Studio, then you’ll recognize the .webtest file extension. Just be aware that Microsoft has deprecated webtest features in Visual Studio, so I’d stick to using the ping test.

Test Configuration

With the ping test, App Insights will send a web request to the URL you specify and watch for a timely response. You can setup a ping test to parse dependent requests, meaning if your test URL responds with HTML, App Insights can go looking for resources embedded in the HTML, like script and image resources. App Insights will fire off requests for these embedded resources, too.

Configuring availability tests in App Insights

 

You can configure the test frequency to run tests every 5, 10, or 15 minutes. What’s even more interesting is that you can specify multiple locations to originate the test requests. I have an availability test for my personal web site, for example. I host my site, odetocode.com, in Azure’s East US region. I run availability tests from East US, West US, UK South, North Europe, and Australia East regions. It’s interesting to see the latency difference between tests that start from opposite sides of the earth. Most of the test requests from the East US respond in less than 110 ms. The high points you see in the chart below are the occasional requests from Australia that run more than 500 ms.

Availability results over the last 24 hours.

 

For each test you can specify the criteria for success. With my site, for example, I look for an HTTP status code 200 in the response, and I ask for the response time to be less than 30 seconds. I also configured App Insights to fire an alert if at least 3 locations fail the basic test in a 5 minute window.

App Insights saves all the test results so you can explore them later, either by charting them with a custom time range right from the Azure portal or by sending queries to the App Insights API. You can drill into failures and, if you hook your application telemetry into App Insights, look at the telemetry around the time of a test. I had one test recently that stood out because the test waited over 90 seconds for a response. Taking a closer look, I saw the request never reached my web site, and I could sleep better knowing the problem was not in my code or configuration. I could blame the slow response on a DNS server.
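If you want to pull those results yourself, the sketch below queries the Application Insights REST API with a short Kusto query. The application ID and API key are placeholders you would generate from the resource’s API Access blade, and the query assumes the standard availabilityResults table with its location and duration (milliseconds) columns.

    # Minimal sketch: pull availability results from the App Insights REST API.
    # <app-id> and <api-key> are placeholders created on the resource's "API Access" blade.
    $appId  = '<app-id>'
    $apiKey = '<api-key>'

    # Average duration and test count per location over the last 24 hours
    $kql = 'availabilityResults | where timestamp > ago(24h) ' +
           '| summarize avgDurationMs = avg(duration), testCount = count() by location ' +
           '| order by avgDurationMs desc'

    $uri    = "https://api.applicationinsights.io/v1/apps/$appId/query?query=" +
              [uri]::EscapeDataString($kql)
    $result = Invoke-RestMethod -Uri $uri -Headers @{ 'x-api-key' = $apiKey }
    $result.tables[0].rows   # one row per test location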

Additional Resources

App Insights gives you everything you need to monitor the availability of your websites, and more. Here are some additional resources for digging deeper.

The post Monitor Your Website’s Availability with Azure Application Insights appeared first on Petri.

Microsoft Retires Windows 10 Semi-Annual Channel Targeted Releases

The content below is taken from the original ( Microsoft Retires Windows 10 Semi-Annual Channel Targeted Releases), to continue reading please visit the site. Remember to respect the Author & Copyright.


Windows Update for Business (WUfB) is a feature in Windows 10 that allows organizations to control how clients are updated without requiring any local infrastructure. In Windows 7 it was common to deploy Windows Server Update Services (WSUS) to control which updates were applied and when. Windows 10 can still be used with WSUS but WUfB provides a way to distribute updates without the costs involved in deploying WSUS.

For more information on how WUfB works and configuring it, check out Understanding Windows Update for Business and Windows 10 Tip: Configure Windows Update for Business using Group Policy on Petri.

As it stands in Windows 10 version 1809 and previous releases, you can choose to configure devices to update on the Semi-Annual Channel (SAC) or the Semi-Annual Channel (Targeted). Devices set to update on SAC-T receive Windows 10 feature updates as soon as they are made available, while SAC deploys feature updates roughly three months after they are made generally available – in other words, when Microsoft deems the update to be ‘business ready’. Either way, it is up to you to make sure cumulative updates (CUs) get applied, because the SAC release of Windows 10 is the same OS build as the previous SAC-T release. For example, the SAC-T release of Windows 10 version 1803 was OS build 17134.648 and the SAC release was the same OS build number.

A SAC Only World

As you are probably already aware, there are only two feature updates to Windows 10 each year. So, to simplify the terminology and release schedule, starting in Windows 10 19H1, Microsoft is retiring SAC-T. There will be no SAC-T ‘release’ of Windows 10 and the WUfB user interface will be updated to reflect this. There will be one SAC release for each Windows 10 feature update.

That means that if you currently have WUfB configured for SAC-T, nothing will change when the upgrade to 19H1 happens. If you had a deferral period configured for SAC-T, it will be applied to SAC. If you have configured devices for SAC, any deferral period configured will have 60 days added to it for the upgrade to 19H1 only. You won’t see that change in the WUfB user interface, as it will be handled on the service side by Microsoft, and after the upgrade to 19H1 the deferral value will be set back to whatever it was before the upgrade.

Microsoft keeps changing how it delivers Windows 10 as a service and the support lifecycle for each version. Going forward, if you want to delay the deployment of Windows 10 feature updates, you will need to set an appropriate deferral value for SAC (see the sketch below). I appreciate there is an attempt to simplify the terminology here – especially as ‘Targeted’ implied that it updated a small group of devices – but it’s not clear whether Microsoft will allow you to defer SAC deployment beyond 365 days in Windows 10 19H1. For more information about these changes, see Microsoft’s website here.
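A minimal PowerShell sketch of what setting that deferral could look like follows. It is an illustration, not guidance from the article: it assumes the ADMX-backed Windows Update for Business registry values (BranchReadinessLevel, DeferFeatureUpdates, DeferFeatureUpdatesPeriodInDays) under the WindowsUpdate policy key still apply on your build, so verify them against the WUfB documentation before using anything like this.

    # Illustration only: configure WUfB feature-update deferral via the policy registry key
    # (the same values the corresponding Group Policy settings write).
    $key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
    New-Item -Path $key -Force | Out-Null

    # 32 = Semi-Annual Channel; with SAC-T retired this is the only branch readiness level
    Set-ItemProperty -Path $key -Name 'BranchReadinessLevel' -Value 32 -Type DWord

    # Defer feature updates for 120 days after the SAC release (valid range 0-365)
    Set-ItemProperty -Path $key -Name 'DeferFeatureUpdates'             -Value 1   -Type DWord
    Set-ItemProperty -Path $key -Name 'DeferFeatureUpdatesPeriodInDays' -Value 120 -Type DWord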

The post Microsoft Retires Windows 10 Semi-Annual Channel Targeted Releases appeared first on Petri.

Windows Virtual Desktop now in public preview on Azure

The content below is taken from the original ( Windows Virtual Desktop now in public preview on Azure), to continue reading please visit the site. Remember to respect the Author & Copyright.

We recently shared the public preview of the Windows Virtual Desktop service on Azure. Now customers can access the only service that delivers simplified management, multi-session Windows 10, optimizations for Office 365 ProPlus, and support for Windows Server Remote Desktop Services (RDS) desktops and apps. With Windows Virtual Desktop, you can deploy and scale your Windows desktops and apps on Azure in minutes, while enjoying built-in security and compliance.

Image of a woman at her desktop in the workplace

This means customers can now virtualize multi-session Windows 10, Windows 7, and Windows Server desktops and apps (RDS) with Windows Virtual Desktop for a simplified management and deployment experience on Azure. We also built Windows Virtual Desktop as an extensible solution for our partners, including Citrix, Samsung, and Microsoft Cloud Solution Providers (CSP).

Access to Windows Virtual Desktop is available through applicable RDS and Windows Enterprise licenses. With the appropriate license, you just need to set up an Azure subscription to get started today. You can choose the type of virtual machines and storage you want to suit your environment. You can optimize costs by taking advantage of Reserved Instances with up to a 72 percent discount and using multi-session Windows 10.

You can read more detail about Windows Virtual Desktop in the Microsoft 365 blog published today by Julia White and Brad Anderson.

Get started with the public preview today.

Now generally available: Plug-in for VMware vRealize Automation

The content below is taken from the original ( Now generally available: Plug-in for VMware vRealize Automation), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today, we’re announcing that our plug-in for VMware vRealize Automation (vRA) is now generally available for all users, providing an additional way for VMware customers to manage and consume Google Cloud resources.

IT operators can use Google-provided blueprints or build their own blueprints for Google Cloud resources such as VM instances, Google Kubernetes Engine clusters, and Cloud Storage buckets to publish to the vRA service catalog. End users can select and launch resources in a predictable manner using familiar tools.

In this launch, we have added a number of new features and enhancements based on customer feedback. The following are some of the key features and improvements in addition to reliability and performance updates:

New features

  • Support for new services: GKE, Cloud SQL, Cloud Spanner, Cloud Pub/Sub, Cloud Key Management Service, Cloud Filestore (beta), and IAM Service Accounts
  • Improved VM Instance workflows including set Windows password, execute SSH command, retrieve serial port output, and restore from snapshot

Enhancements

  • Support for http proxy settings when creating a connection.
  • Workflows to simplify the import of XaaS custom resources and blueprints into vRA.
  • Default options for the create VM Instance workflow.
  • View estimated monthly cost in the create VM Instance workflow.
  • Workflows to capture errors and optionally email to support.
  • Improved connection synchronization handling on vRealize Orchestrator (vRO) clusters.
  • First-class support for health check management.
  • User documentation for vRO scripting objects.

To download the plug-in and get started, visit the Google solutions page.

To learn more about how you can adapt your existing technology to a hybrid cloud, visit our hybrid cloud solutions page. You can also find more information on vRA and the plug-in by reading VMware’s blog.

What can cyclists legally do, and not do, in Europe?

The content below is taken from the original ( What can cyclists legally do, and not do, in Europe?), to continue reading please visit the site. Remember to respect the Author & Copyright.

Planning a trip abroad? Here’s what you need to know about the law in top European destinations

And so it begins… a two-hour grind to the top of the world

Police car in front of the Brandenburg Gate


Get ready for Global Azure Bootcamp 2019 – http://bit.ly/2YhkDes

The content below is taken from the original ( Get ready for Global Azure Bootcamp 2019 – https://aka.ms/azfr/534), to continue reading please visit the site. Remember to respect the Author & Copyright.

Get ready for Global Azure Bootcamp 2019 - https://aka.ms/azfr/534 submitted by /u/robcaron to r/AZURE

How to Automate Parenting With IFTTT

The content below is taken from the original ( How to Automate Parenting With IFTTT), to continue reading please visit the site. Remember to respect the Author & Copyright.

IFTTT, the tool that lets you automate your digital life, can help any parent whose mental load has reached max capacity—and you kind of feel like a tech magician every time you use it. Here are some great IFTTT applets that can make parenting easier. (If you’re new to the service, check out our beginner’s guide to…

Read more…

Fifty years of the internet

The content below is taken from the original ( Fifty years of the internet), to continue reading please visit the site. Remember to respect the Author & Copyright.

When my team of graduate students and I sent the first message over the internet on a warm Los Angeles evening in October, 1969, little did we suspect that we were at the start of a worldwide revolution. After we typed the first two letters from our computer room at UCLA, namely, “Lo” for “Login,” the network crashed.

Hence, the first Internet message was “Lo” as in “Lo and behold” – inadvertently, we had delivered a message that was succinct, powerful, and prophetic.

The ARPANET, as it was called back then, was designed by government, industry and academia so scientists and academics could access each other’s computing resources and trade large research files, saving time, money and travel costs. ARPA, the Advanced Research Projects Agency (now called “DARPA”), awarded a contract to scientists at the private firm Bolt Beranek and Newman to implement a router, or Interface Message Processor; UCLA was chosen to be the first node in this fledgling network.

By December 1969, there were only four nodes – UCLA, Stanford Research Institute, the University of California-Santa Barbara and the University of Utah. The network grew exponentially from its earliest days, with the number of connected host computers reaching 100 by 1977, 100,000 by 1989, a million by the early 1990s, and a billion by 2012; it now serves more than half the planet’s population.

Along the way, we found ourselves constantly surprised by unanticipated applications that suddenly appeared and gained huge adoption across the Internet; this was the case with email, the World Wide Web, peer-to-peer file sharing, user generated content, Napster, YouTube, Instagram, social networking, etc.

It sounds utopian, but in those early days, we enjoyed a wonderful culture of openness, collaboration, sharing, trust and ethics. That’s how the Internet was conceived and nurtured.  I knew everyone on the ARPANET in those early days, and we were all well-behaved. In fact, that adherence to “netiquette” persisted for the first two decades of the Internet.

Today, almost no one would say that the internet is unequivocally wonderful, open, collaborative, trustworthy or ethical. How did a medium created for sharing data and information turn into such a mixed blessing? How did we go from collaboration to competition, from consensus to dissension, from a reliable digital resource to an amplifier of questionable information?

The decline began in the early 1990s when spam first appeared at the same time there was an intensifying drive to monetize the Internet as it reached deeply into the world of the consumer. This enabled many aspects of the dark side to emerge (fraud, invasion of privacy, fake news, denial of service, etc.).

It also changed the nature of internet technical progress and innovation, as risk aversion began to stifle the earlier culture of “moon shots”. We are still suffering from those shifts. The internet was designed to promote decentralized information, democracy and consensus based upon shared values and factual information. In this, it has fallen short of fully achieving the aspirations of its founding fathers.

As the private sector gained more influence, their policies and goals began to dominate the nature of the Internet.  Commercial policies gained influence, companies could charge for domain registration, and credit card encryption opened the door for e-commerce. Private firms like AOL, CompuServe and Earthlink would soon charge monthly fees for access, turning the service from a public good into a private enterprise.

This monetization of the internet has changed its flavor. On the one hand, it has led to services of great value. Here one can list pervasive search engines, access to extensive information repositories, consumer aids, entertainment, education, connectivity among humans, etc. On the other hand, it has led to excess and control in a number of domains.

Among these one can identify restricted access by corporations and governments, limited progress in technology deployment when the economic incentives are not aligned with (possibly short term) corporate interests, excessive use of social media for many forms of influence, etc.

If we ask what we could have done to mitigate some of these problems, one can easily name two.  First, we should have provided strong file authentication – the ability to guarantee that the file that I receive is an unaltered copy of the file I requested. Second, we should have provided strong user authentication – the ability for a user to prove that he/she is whom they claim to be.

Had we done so, we could have kept these capabilities turned off in the early days (when false files were not being dispatched and when users were not falsifying their identities). Then, as the dark side began to emerge, we could have gradually turned on these protections to counteract the abuses at a level matching the extent of the abuse. Since we did not provide an easy way to add these capabilities from the start, we suffer from the fact that it is problematic to retrofit them into the vast legacy system we call the Internet today.
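To make the file-authentication idea concrete, the sketch below shows the kind of integrity check the author is describing: comparing a received file’s hash against one published by the sender. The file path and expected hash are placeholders, and this is purely an illustration of the concept, not something the early network provided.

    # Illustration of file authentication: verify a received file is an unaltered copy
    # by comparing its SHA-256 hash to the value published by the sender.
    $file         = 'C:\Downloads\research-data.tar.gz'      # placeholder path
    $publishedSha = '<sha-256-hash-published-by-the-sender>'  # placeholder value

    $actualSha = (Get-FileHash -Path $file -Algorithm SHA256).Hash.ToLowerInvariant()

    if ($actualSha -eq $publishedSha) {
        'File is an unaltered copy of what was published.'
    } else {
        'File was altered or corrupted in transit.'
    }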

Illustration: silhouette of a hacker in a hallway lined with Internet of Things icons (cybersecurity concept)

Having come these 50 years since its birth, how is the Internet likely to evolve over the next 50? What will it look like?

That’s a foggy crystal ball. But we can foresee that it is fast on its way to becoming “invisible” (as I predicted 50 years ago) in the sense that it will and should disappear into the infrastructure.

It should be as simple and convenient to use as electricity: electricity is available via a trivially simple interface by plugging into the wall; you don’t know or care how it gets there or where it comes from, but it delivers its services on demand.

Sadly, the internet is far more complicated to access than that. When I walk into a room, the room should know I’m there and it should provide to me the services and applications that match my profile, privileges and preferences.  I should be able to interact with the system using the usual human communication methods of speech, gestures, haptics, etc.

We are rapidly moving into such a future as the Internet of Things pervades our environmental infrastructure with logic, memory, processors, cameras, microphones, speakers, displays, holograms and sensors. Such an invisible infrastructure, coupled with intelligent software agents embedded in the internet, will seamlessly deliver these services. In a word, the internet will essentially be a pervasive global nervous system.

That is what I judge will be the likely essence of the future infrastructure. However, as I said above, the applications and services are extremely hard to predict as they come out of the blue as sudden, unanticipated, explosive surprises!  Indeed, we have created a global system for frequently shocking us with surprises – what an interesting world that could be!

RTPSUG – Automating Active Directory Health Checks

The content below is taken from the original ( RTPSUG – Automating Active Directory Health Checks), to continue reading please visit the site. Remember to respect the Author & Copyright.

I am so excited to be unveiling a new toolkit I have been working on with some amazing people from the PowerShell community. Join me Wednesday evening at 6:30 PM EST to see what I have created!

This toolkit, called PSADHealth, was built on the idea that most admins don’t have the time to keep an eye on every part of their IT infrastructure, let alone every nook and cranny of Active Directory. I wanted to fill in the gaps that my monitoring tools weren’t able to cover, so I started writing this toolset to help keep an eye on some of the core components of Active Directory.

You can download my module and run it as is, or, if you prefer, download it, pick it apart, and make your own set of tools. Either way, my goal is to provide a tool that helps you get the job done!

Join me Wednesday as I walk through the basics of this toolkit and how you can get it running in just a few minutes. One thing that I hope is abundantly clear is that this toolkit is not very hard to use, and you can extend it to make it do almost anything you can think of.
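If you want to try it before the session, something like the following should work, assuming the toolkit is published to the PowerShell Gallery under the name PSADHealth (check the session or the project page for the exact module name and commands).

    # Assumes the toolkit is on the PowerShell Gallery as "PSADHealth"; verify the name first.
    Install-Module -Name PSADHealth -Scope CurrentUser
    Import-Module PSADHealth

    # List the health-check functions the module exposes before scheduling any of them
    Get-Command -Module PSADHealth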

A big thank you goes out to Steve Valdinger and Greg Onstat for their efforts over the last few months! You guys rock and this project wouldn’t exist without your expertise. Thank you so much!

Click here to sign up and get more details. https://www.meetup.com/Research-Triangle-PowerShell-Users-Group/events/259109483/

submitted by /u/compwiz32 to r/PowerShell