Validating and Improving the RTO and RPO Using AWS Resilience Hub

The content below is taken from the original ( Validating and Improving the RTO and RPO Using AWS Resilience Hub), to continue reading please visit the site. Remember to respect the Author & Copyright.

“Everything fails, all the time” is a famous quote from Werner Vogels, VP and CTO of Amazon.com. When you design and build an application, a typical goal is to have it working; the next is to keep it running, no matter what disruptions may occur. Achieving resiliency is crucial, but you first need to consider how to define it and which metrics to measure your application’s resiliency against. Resiliency can be defined in terms of the metrics RTO (Recovery Time Objective) and RPO (Recovery Point Objective). RTO is a measure of how quickly your application can recover after an outage, and RPO is a measure of the maximum amount of data loss that your application can tolerate.

To learn more about how to establish RTO and RPO for your application, please refer to “Establishing RTO and RPO Targets for cloud applications”.

AWS Resilience Hub is a new service launched in November 2021. This service is designed to help you define, validate, and track the resilience of your applications on the AWS cloud.

You can define resilience policies for your applications. These policies include RTO and RPO targets for application, infrastructure, Availability Zone, and Region disruptions. Resilience Hub’s assessment uses best practices from the AWS Well-Architected Framework. It will analyze the components of an application, such as compute, storage, database, and network, and uncover potential resilience weaknesses.

In this blog we will show you how Resilience Hub can help you validate RTO and RPO at the component level for four types of disruptions, which in turn can help you improve the resiliency of your entire application stack:

  1. Customer Application RTO and RPO
  2. Cloud Infrastructure RTO and RPO
  3. AWS Infrastructure Availability Zone (AZ) disruption
  4. AWS Region disruption

Customer Application RTO and RPO

Application outages occur when the infrastructure stack (hardware) is healthy but the application stack (software) is not. This outage may be caused by configuration changes, bad code deployments, integration failures, etc. Determining RTO and RPO for application stacks depends on the criticality and importance of the application, as well as your compliance requirements. For example, a mission-critical application could have an RTO and RPO of 5 minutes.

Example: Your critical business application is hosted from an Amazon Simple Storage Service (Amazon S3) bucket, and you set it up without cross-Region replication and versioning. Figure 1 shows that application RTO and RPO are unrecoverable based on a target of 5 minutes RTO and RPO.

Figure 1. Resilience Hub assessment of the Amazon S3 bucket against Application RTO

After running the assessment, Resilience Hub provides a recommendation to enable versioning on the Amazon S3 bucket, as shown in Figure 2.

Figure 2. Resilience recommendation for Amazon S3

After enabling versioning, you can achieve the estimated RTO of 5m and RPO of 0s. Versioning allows you to preserve, retrieve, and restore any version of any object stored in a bucket, improving your application resiliency.

Resilience Hub also provides the cost associated with implementing the recommendations. In this case, there is no cost for enabling versioning on an Amazon S3 bucket. Normal S3 pricing applies to each version of an object. You can store any number of versions of the same object, so you may want to implement some expiration and deletion logic if you plan to make use of versioning.
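If you prefer to script the fix rather than apply it from the console, versioning can also be enabled with the AWS Tools for PowerShell. This is a minimal sketch, assuming the AWS.Tools.S3 module is installed; the bucket name is a placeholder:

# Enable versioning on the bucket flagged by the assessment
Import-Module AWS.Tools.S3
Write-S3BucketVersioning -BucketName 'my-critical-app-bucket' -VersioningConfig_Status Enabled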

Resilience Hub can provide one or more recommendations to satisfy requirements such as cost, high availability, and minimal changes. As shown in Figure 2, adding versioning to the S3 bucket satisfies both high availability optimization and the best attainable architecture with the least changes.

Cloud Infrastructure RTO and RPO

A Cloud Infrastructure outage occurs when the underlying infrastructure components, such as hardware, fail. Consider a scenario where a partial outage occurred because of a component failure.

For example, one of the components in your mission-critical application is an Amazon Elastic Container Service (Amazon ECS) cluster running on Amazon Elastic Compute Cloud (EC2) instances, and you have a targeted infrastructure RTO and RPO of 1 second. Figure 3 shows that you are unable to meet your targeted infrastructure RTO of 1 second.

Figure 3. Resilience Hub assessment of the ECS application against Infrastructure

Figure 4 shows Resilience Hub’s recommendation to add AWS Auto Scaling groups and Amazon ECS capacity providers in multiple AZs.

Figure 4. Resilience Hub recommendation for ECS cluster – to add Auto Scaling groups and capacity providers in multiple AZs.

AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Amazon ECS capacity providers are used to manage the infrastructure used by the tasks in your clusters. They can use AWS Auto Scaling groups to automatically manage the Amazon EC2 instances registered to their clusters. By applying the Resilience Hub recommendation, you will achieve an estimated RTO and RPO of near zero seconds, and the estimated cost for the change is $16.98/month.

AWS Infrastructure Availability Zone (AZ) disruption

The AWS global infrastructure is built around AWS Regions and Availability Zones to achieve High Availability (HA). AWS Regions provide multiple physically separated and isolated Availability Zones, which are connected with low-latency, high-throughput, and highly redundant networking.

For example, you have set up a public NAT gateway in a single AZ to allow instances in a private subnet to send outbound traffic to the internet. You have deployed Amazon Elastic Compute Cloud (Amazon EC2) instances in multiple Availability Zones.

Figure 5. Resilience Hub assessment of the Single-AZ NAT gateway

Figure 5 shows Availability Zone disruption as unrecoverable, which does not meet the Availability Zone RTO goals. NAT gateways are fully managed services with no hardware to manage, so they are resilient (0s RTO) to infrastructure failure. However, deploying only one NAT gateway in a single AZ leaves the architecture vulnerable: if the NAT gateway’s Availability Zone is down, resources deployed in other Availability Zones lose internet access.

Figure 6. Resilience Hub recommendation to create Availability Zone-independent NAT Gateway architecture.

Figure 6 shows Resilience Hub’s recommendation to deploy NAT Gateways into each Availability Zone where corresponding EC2 resources are located.
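For illustration, the same change can be scripted with the AWS Tools for PowerShell. This is only a sketch with placeholder subnet and route table IDs, to be repeated for each Availability Zone:

# Allocate an Elastic IP and create a NAT gateway in this AZ's public subnet
Import-Module AWS.Tools.EC2
$eip = New-EC2Address -Domain vpc
$natResp = New-EC2NatGateway -SubnetId 'subnet-0123456789abcdef0' -AllocationId $eip.AllocationId
# Point this AZ's private route table at its local NAT gateway
New-EC2Route -RouteTableId 'rtb-0123456789abcdef0' -DestinationCidrBlock '0.0.0.0/0' -NatGatewayId $natResp.NatGateway.NatGatewayId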

Following Resilience Hub’s recommendation, you can achieve the lowest possible RTO and RPO of 0 seconds in the event of an Availability Zone disruption and create an Availability Zone-independent architecture for $32.94 per month.

NAT gateway deployment in multiple AZs achieves the lowest RTO/RPO for Availability Zone disruption, the lowest cost, and the fewest changes, so the recommendation is the same for all three options.

AWS Region disruption

An AWS Region consists of multiple, isolated, and physically separated AZs within a geographical area. This design achieves the greatest possible fault tolerance and stability. For a disaster event that includes the risk of losing multiple data centers or a regional service disruption, it’s a best practice to consider a multi-Region disaster recovery strategy to mitigate against natural and technical disasters that can affect an entire Region within AWS. If one or more Regions or a regional service that your workload uses are unavailable, this type of disruption can be resolved by switching to a secondary Region. It may be necessary to define a regional RTO and RPO if you have a multi-Region dependent application.

For example, you have a Single-AZ Amazon RDS for MySQL instance as part of a global mission-critical application, and you have configured a 30-minute RTO and a 15-minute RPO for all four disruption types. Each RDS instance runs on an Amazon EC2 instance backed by an Amazon Elastic Block Store (Amazon EBS) volume for storage. RDS takes daily snapshots of the database, which are stored durably in Amazon S3 behind the scenes. It also regularly copies transaction logs to S3—at up to 5-minute intervals—providing point-in-time recovery when needed.

If an underlying EC2 instance suffers a failure, RDS automatically tries to launch a new instance in the same Availability Zone, attach the EBS volume, and recover. In this scenario, RTO can vary from minutes to hours. The duration depends on the size of the database and the failure and recovery approach. RPO is zero in the case of recoverable instance failure because the EBS volume was recovered. If there is an Availability Zone disruption, you can create a new instance in a different Availability Zone using point-in-time recovery. However, a Single-AZ deployment does not give you protection against regional disruption. Figure 7 shows that you are not able to meet the regional RTO of 30 minutes and RPO of 15 minutes.

Figure 7. Resilience Hub assessment for the Amazon RDS

Figure 8. Resilience Hub recommendation to achieve region level RTO and RPO

As shown in Figure 8, Resilience Hub provides three recommendations: optimize for Availability Zone RTO/RPO, optimize for cost, and optimize for minimal changes.

Recommendation 1 “Optimize for Availability Zone RTO/RPO”: The changes recommended under this option will help you achieve the lowest possible RTO and RPO in the event of an Availability Zone disruption. For a Single-AZ RDS instance, Resilience Hub recommends changing the database to Amazon Aurora and adding two read replicas in the same Region to achieve the targeted RTO and RPO for Availability Zone failure. It also recommends adding a read replica in a different Region to achieve resiliency against regional disruption. The estimated cost for these changes, as shown in Figure 8, is $66.85 per month.

Amazon Aurora read replicas share the same data volume as the original database instance. Aurora handles Availability Zone disruption by fully automating failover with no data loss. Aurora creates a highly available database cluster with synchronous replication across multiple AZs. This is considered the better option for production databases where data backup is a critical consideration.

Recommendation 2 “Optimize for cost”: These changes will optimize your application to reach the lowest cost that still meets your targeted RTO and RPO. The recommendation here is to keep the Single-AZ Amazon RDS instance and create a read replica in the primary Region, with an additional read replica in a secondary Region. The estimated cost for these changes is $54.38 per month. You can promote a read replica to a standalone instance as a disaster recovery solution if the primary DB instance fails or becomes unavailable during a Region disruption.
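To illustrate Recommendation 2, a cross-Region read replica can be created with the AWS Tools for PowerShell. A minimal sketch; the identifiers, account number, and Regions are placeholders:

# Create a read replica in a secondary Region from the primary instance's ARN
Import-Module AWS.Tools.RDS
New-RDSDBInstanceReadReplica `
    -DBInstanceIdentifier 'mysql-replica-west' `
    -SourceDBInstanceIdentifier 'arn:aws:rds:eu-central-1:111122223333:db:mysql-primary' `
    -SourceRegion 'eu-central-1' `
    -Region 'eu-west-1'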

Recommendation 3 “Optimize for minimal changes”: These changes will help you meet the targeted RTO and RPO while keeping implementation changes to a minimum. Resilience Hub recommends creating a Multi-AZ writer and a Multi-AZ read replica in two different Regions. The estimated cost for these changes is $81.56 per month. When you provision a Multi-AZ database instance, Amazon RDS automatically creates a primary database instance and synchronously replicates the data to a standby instance in a different Availability Zone. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby database instance. Since the endpoint for your database instance remains the same after a failover, your application can resume database operations without the need for manual administrative intervention.

Although all three recommendations help you achieve a targeted application RTO and RPO of 30 minutes, the estimated costs and efforts may vary.

Conclusion

To build a resilient workload, you need to have the right best practices in place. In this post, we showed you how to improve the resiliency of your business application and achieve targeted RTO and RPO for application, infrastructure, Availability Zone, and Region disruptions using recommendations provided by Resilience Hub. To learn more and try the service yourself, visit the AWS Resilience Hub page.

Authors:

Monika Shah

Monika Shah is a Technical Account Manager at AWS where she helps customers navigate through their cloud journey with a focus on financial and operational efficiency. She worked for Fortune 100 companies in the networking and telecommunications fields for over a decade. In addition to her Master’s degree in Telecommunications, she holds industry certifications in networking and cloud computing. In her free time, she enjoys watching thriller and comedy TV shows, playing with children, and exploring different cuisines.

Divya Balineni

Divya is a Sr. Technical Account Manager with Amazon Web Services. She has over 10 years of experience managing, architecting, and helping customers with resilient architectures to meet their business continuity needs. Outside of work, she enjoys gardening and travel.

Rescuezilla 2.4 is here: Grab it before you need it

The content below is taken from the original ( Rescuezilla 2.4 is here: Grab it before you need it), to continue reading please visit the site. Remember to respect the Author & Copyright.

A fork of Redo Rescue that outdoes the original – and beats Clonezilla too

Version 2.4 of Rescuezilla – which describes itself as the “Swiss Army Knife of System Recovery” – is here and based on Ubuntu 22.04.…

Use a PowerShell Substring to Search Inside a String

The content below is taken from the original ( Use a PowerShell Substring to Search Inside a String), to continue reading please visit the site. Remember to respect the Author & Copyright.

Need to search for a string inside a string? Never fear, PowerShell substring is here! In this article, I guide you through how to ditch objects and search inside strings.

The PowerShell substring

I love being on social media because I always come across something interesting related to PowerShell. Sometimes it is a trick I didn’t know about or a question someone is trying to figure out. I especially like the questions because it helps me improve my understanding of how people learn and use PowerShell. Workload permitting, I’m happy to jump in and help out.

One such recent challenge centered on string parsing. Although you’ll hear me go on and on about objects in the pipeline, there’s nothing wrong with parsing strings if that’s what you need. There are plenty of log files out there that need parsing, and PowerShell can help.

Search for a string in a string

In this case, I’m assuming some sort of log file is in play. I don’t know what the entire log looks like or what the overall goal is. That’s OK. We can learn a lot from the immediate task. Let’s say you have a string that looks like this:

Mailbox:9WJKDFH-FS349-1DSDS-OIFODJFDO-7F21-FC1BF02EFE26 (O'Hicks, Jeffery(X.))

I’ve changed the values a little bit and modified my name to make it more challenging. The goal is to grab the name from the string. I want to end up with:

O'Hicks, Jeffery(X.)

There are several different ways you can accomplish this. The right way probably depends on your level of PowerShell experience and what else you might want to accomplish. I’ll start by assigning this string to variable $s.

Using the PowerShell split operator

When I am faced with string parsing, sometimes it helps to break the string down into more manageable components. To do that I can use the split operator. There is also a split method for the string class. I am going to assume you will take some time later to read more about PowerShell split.

$s -split "\s",2

The “\s” is a regular-expression pattern that means a space. The 2 parameter value indicates that I only want two substrings. In other words, split on the first space found.
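For comparison, here is roughly the same split using the string class’s Split() method; note that Split() takes literal separator characters rather than regular-expression patterns, and this call binds to the .NET Split(char[], int) overload. The result is the same two-element array:

# Split on the first space using the Split() method instead of the -split operator
$s.Split(" ", 2)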

Using the split operator in Windows PowerShell. (Image Credit: Jeff Hicks)

I end up with an array of two elements. All I need is the second one.

$t = ($s -split "\s",2)[1]

Using the substring method

Now for the parsing fun, let’s use the string object’s SubString() method.

$t.Substring(1,$t.length-2)

I am telling PowerShell to get part of the string in $t, starting at character position 1 and then getting the next X number of characters. In this case, X is equal to the length of the string, $t, minus 2. This has the net effect of stripping off the outer parentheses.

Using the PowerShell substring method

Using the PowerShell substring method (Image Credit: Jeff Hicks)

Here’s a variation, where I split on the “(” character.

($s -split "\(",2)
$t = ($s -split "\(",2)[1]
$t.Substring(0,$t.length-1)

Using the split operator on a character in Windows PowerShell. (Image Credit: Jeff Hicks)

The difference is that this gets rid of the leading parenthesis. So, all I need to do is get everything up to the last character. By the way, if you look at a string object with Get-Member, you will see some Trim methods. These are for removing leading and/or trailing white space. Those methods don’t apply here.

Split a string using array index numbers

There’s one more way to split a string that you might find useful. You can treat all strings as an array of characters. This means you can use array index numbers to reference specific array elements.

Counting elements in an array starts at 0. If you run $t[0] you’ll get the first element of the array, in this case ‘O’. You can also use the range operator.

$t[0..5]

An alternative to splitting a string in Windows PowerShell. (Image Credit: Jeff Hicks)

Right now, $t has an extra ) at the end that I don’t want. I need to get everything up to the second-to-last element.

$t[0..($t.length-2)]

This will give me an array displayed vertically, which you can see in the screenshot above. With that said, it’s easy to join the sliced string back together.

-join $t[0..($t.length-2)]

It might look a bit funny to lead with an operator, but if you read about_join, then you’ll see this is a valid approach that works.

Using the -join operator to put our string back together. (Image Credit: Jeff Hicks)

Simple function for string parsing

I’m assuming you want an easy way to do this type of parsing, so I wrote a simple function.

Function Optimize-String {
[cmdletbinding()]
Param(
[Parameter(Position=0,Mandatory,HelpMessage="Enter a string of text")]
[ValidateNotNullorEmpty()]
[string]$Text,
[int]$Start=0,
[int]$End=0
)
#trim off spaces
$string = $Text.Trim()
#get length value when starting at 0
$l = $string.Length-1
#get array elements and join them back into a string
-join $string[$Start..($l-$end)]
} #end function

The function takes a string of text and returns the substring minus X number of characters from the start and X number of characters from the end. Now I have an easy command to parse strings.

Using the optimize-string function in Windows PowerShell. (Image Credit: Jeff Hicks)

If you look through the code, then you’ll see I am using the Trim() method. I am using it because I don’t want any extra spaces at the beginning or end of the string to be included.
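As a quick illustration of what Trim() does before the parsing starts:

"   O'Hicks, Jeffery(X.)   ".Trim()   # returns "O'Hicks, Jeffery(X.)"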

Going back to my original string variable, I can now parse it with a one-line command:

optimize-string ($s -split("\s",2))[1] -Start 1 -end 1

Using the optimize-string function to search inside a string (Image Credit: Jeff Hicks)

If it makes more sense to you to break this into separate steps, that’s OK.

$arr = $s -split("\s",2)
$text =  $arr[1]
optimize-string $text -Start 1 -end 1

Next time we’ll look at string parsing using regular expressions, and it won’t be scary, I promise.

Palo Alto Virtual Lab

The content below is taken from the original ( Palo Alto Virtual Lab), to continue reading please visit the site. Remember to respect the Author & Copyright.

I have seen several posts asking about virtual lab costs, etc…

When I saw my daily Fuel email it reminded me of those posts.

If you are looking to become certified or just want to learn more about PAs, I would recommend joining your local Fuel User Group chapter. Go to https://fuelusergroup.org and sign up, it’s free. Now that you have an account you can access resources, and you will get a local rep to help you on your learning journey. One of the resources you gain is free access to a virtual lab.

https://www.fuelusergroup.org/page/fuel-virtual-lab

Fuel User Group is great. Meet all kinds of IT professionals, vendors, and Palo reps. Hold on to those reps because they have access to Palo people that you and I do not, plus they may also know other professionals with the knowledge you’re looking for. I am looking forward to getting back to our in-person meetings. In the meetings they bring great PA information and yes, they do have a sponsor doing a short pitch, but honestly they fit the vibe of the meeting. Time is not wasted and the swag is great!

submitted by /u/Electronic_Front_549 to r/paloaltonetworks
[link] [comments]

RISCOSbits announces ‘RISC OS Rewards’ scheme

The content below is taken from the original ( RISCOSbits announces ‘RISC OS Rewards’ scheme), to continue reading please visit the site. Remember to respect the Author & Copyright.

Also, you may or may not already know, but Ovation Pro is now free to download. RISCOSbits has launched a new initiative aimed at rewarding loyal RISC OS users for their continuing patronage by offering a discount on future purchases. The ‘RISC OS Rewards’ scheme allows for a 10% discount against the purchase of any computer system available from the PiHard website, including the ‘Fourtify’ option that provides a way to upgrade an existing Raspberry Pi 4 system to one of those on offer. There are two main ways to…

Learn How to Switch to Modern Authentication in Office 365

The content below is taken from the original ( Learn How to Switch to Modern Authentication in Office 365), to continue reading please visit the site. Remember to respect the Author & Copyright.

Hey guys,

u/junecastillote just wrote a new blog post you may enjoy on the ATA blog.

"Learn How to Switch to Modern Authentication in Office 365"

Summary: Enhance your IT organization’s security and capabilities by switching to modern authentication in Office 365 in this ATA Learning tutorial!

https://adamtheautomator.com/modern-authentication-in-office-365/

submitted by /u/adbertram to r/Cloud
[link] [comments]

New pfSense docs: Configuring pfSense Software for Online Gaming

The content below is taken from the original ( New pfSense docs: Configuring pfSense Software for Online Gaming), to continue reading please visit the site. Remember to respect the Author & Copyright.

As of July, there are now pfSense gaming reference configurations for Xbox, PlayStation, Nintendo Switch/Wii, Steam, and Steam Deck, including the NAT and UPnP settings you will need to enable for optimal NAT types.

Learn more:

https://docs.netgate.com/pfsense/en/latest/recipes/games.html

submitted by /u/crashj to r/PFSENSE
[link] [comments]

Sonos Move wigging out while we were out of town. Any ideas?

The content below is taken from the original ( Sonos Move wigging out while we were out of town. Any ideas?), to continue reading please visit the site. Remember to respect the Author & Copyright.

submitted by /u/DeLa_Sun to r/sonos
[link] [comments]

[Script sharing] Find all Office 365 Inbox rules that forwards emails to external users

The content below is taken from the original ( [Script sharing] Find all Office 365 Inbox rules that forwards emails to external users), to continue reading please visit the site. Remember to respect the Author & Copyright.

submitted by /u/Kathiey to r/usefulscripts
[link] [comments]

How to explain what an API is – and why they matter

The content below is taken from the original ( How to explain what an API is – and why they matter), to continue reading please visit the site. Remember to respect the Author & Copyright.

Some of us have used them for decades, some are seeing them for the first time on marketing slides

Systems Approach Explaining what an API is can be surprisingly difficult.…

New: CredHistView v1.00

The content below is taken from the original ( New: CredHistView v1.00), to continue reading please visit the site. Remember to respect the Author & Copyright.

Every time you change the login password on your system, Windows stores the hashes of the previous password in the CREDHIST file (located in %appdata%\Microsoft\Protect\CREDHIST). This tool allows you to decrypt the CREDHIST file and view the SHA1 and NTLM hashes of all previous passwords you used on your system. In order to decrypt the file, you have to provide your latest login password. You can use this tool to decrypt the CREDHIST file on your currently running system, as well as to decrypt a CREDHIST file stored on an external hard drive.

Microsoft Rolls Out Dynamic Administrative Units Support for Azure AD

The content below is taken from the original ( Microsoft Rolls Out Dynamic Administrative Units Support for Azure AD), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft has announced the public preview of dynamic administrative units with Azure Active Directory (Azure AD). The new feature lets organizations configure rules for adding or deleting users and devices in administrative units (AUs).

Azure AD administrative units launched in public preview back in 2020. The feature lets enterprise admins logically divide Azure AD into multiple administrative units. Specifically, an administrative unit is a container that can be used to delegate administrative permissions to a subset of users.

Microsoft Rolls Out Dynamic Administrative Units Support for Azure AD

Previously, IT Admins could only manage the membership of administrative units in their organization manually. The new dynamic administrative units feature enables IT Admins to specify a rule that automatically handles the addition or deletion of users and devices. However, this capability is currently not available for groups.

The firm also adds that all members of dynamic administrative units are required to have Azure AD Premium P1 licenses. This means that if a company has 1,000 end-users across all dynamic administrative units, it would need to purchase at least 1,000 Azure AD Premium P1 licenses.

“Using administrative units requires an Azure AD Premium P1 license for each administrative unit administrator, and an Azure AD Free license for each administrative unit member. If you are using dynamic membership rules for administrative units, each administrative unit member requires an Azure AD Premium P1 license,” Microsoft noted on a support page.

How to create dynamic membership rules in Azure AD

According to Microsoft, IT Admins can create rules for dynamic administrative units via Azure portal by following these steps:

  1. Select an administrative unit and click on the Properties tab.
  2. Set the Membership Type to Dynamic User or Dynamic Device and click the Add dynamic query option.
  3. Now, use the rule builder to create the dynamic membership rule (see the example after this list) and click the Save button.
  4. Finally, click the Save button on the Properties page to save the membership changes to the administrative unit.
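As an example of what the rule builder in step 3 produces, a hypothetical rule that automatically places all users from the Sales department into the administrative unit looks like this:

(user.department -eq "Sales")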

Currently, the dynamic administrative units feature only supports one object type (either users or devices) in the same dynamic administrative unit. Microsoft adds that support for both users and devices is coming in future releases. You can head to the support documentation to learn more about dynamic administrative units.

How to Download a File using PowerShell

The content below is taken from the original ( How to Download a File using PowerShell), to continue reading please visit the site. Remember to respect the Author & Copyright.

PowerShell can download files from the Internet and your local network to your computer. Learn how to use PowerShell’s Invoke-WebRequest and Start-BitsTransfer cmdlets to download files here.

Welcome to another post on how PowerShell can assist you in your daily job duties and responsibilities. Being able to download files from the Internet and your local network with PowerShell is something I hadn’t really thought a lot about. But, just thinking about the power and scalability of PowerShell intrigues me to no end.

There are so many possibilities around scripts, downloading multiple files at the same time, auto extracting ZIP files, the list goes on and on. If you ever wanted to download the various Windows patching files from the Windows Update Catalog, you could script it if you have the exact URL.

While a bit tedious at first, you could definitely get your groove on after a little bit of tweaking and learning. But let’s first discuss prerequisites.

Prerequisites

They aren’t stringent. You just need PowerShell 5.1 or newer to use the commands in this post. Windows 10 and Windows 11 already include at least version 5.1. Windows Server 2012/R2 comes with version 4.0.

You can also simply get the latest and greatest by downloading PowerShell 7.2.x from this link. And, come to think of it, I’ll use this URL later in the article and show you how to download this file… once you have an appropriate version installed. 🙂
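Not sure which version you are running? You can check with the built-in $PSVersionTable automatic variable:

# Display the PowerShell engine version
$PSVersionTable.PSVersion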

Use PowerShell to download a file from a local network source

Let me start by letting you know I’m utilizing my (Hyper-V) Windows Server 2022 Active Directory lab, again. I’ll be running these commands on my Windows 11 client machine.

First, let’s use the Copy-Item cmdlet to download a file from a local fileserver on my LAN. This command at a minimum just needs a source and destination. I have an ISO in my Downloads folder I need to put up on my G: drive. I’ll create two variables for the source folder and the destination folder.

$source = "c:\users\mreinders\downloads\"
$destination = "\\ws16-fs01-core\shares\folder_01\Extra\"

Then, I’ll run the command and include the -Recurse switch to copy the folder AND the ISO file inside it.

Copy-Item -path $source -destination $destination -Recurse
Using the Copy-Item command to copy an ISO to a fileserver

As you can see, the ISO file was copied to the G: drive.

Use Powershell to download a file from the Internet

Next, let’s work on downloading files from the Internet. We can start with the Invoke-WebRequest cmdlet.

With the Invoke-WebRequest cmdlet 

As I said earlier, I can show you how to download the MSI file for the latest (as of this writing) PowerShell 7.2.2 (x64) version using Invoke-WebRequest. Again, let’s set up some variables first. We can use the general concept of a source variable and destination variable.

$url = "https://github.com/PowerShell/PowerShell/releases/download/v7.2.2/PowerShell-7.2.2-win-x64.msi"
$dest = "c:\users\mreinders\downloads\Latest_Powershell.MSI"

Invoke-WebRequest -Uri $url -OutFile $dest
Using Invoke-WebRequest to download the latest PowerShell MSI installer from GitHub

This 102 MB file took about 4 or 5 minutes to download, which is quite a bit longer than I would expect. That’s due to the inherent nature of this specific cmdlet. The file is buffered in memory first, then written to disk.

I downloaded the PowerShell MSI file to my Downloads folder

We can get around this inefficiency by using the Background Intelligent Transfer Service (BITS) in Windows. I’ll show you further below how to utilize all your bandwidth.

Cases when downloads require authentication

You will certainly come across files that require authentication before downloading. If this is the case, you can use the -Credential switch on Invoke-WebRequest to handle these downloads.

Let’s say there is a beta or private preview of an upcoming PowerShell version (7.3?) that requires authentication. You can utilize these commands (or create a PowerShell script) to download this hypothetical file.

# Variables
$url = "<a href="https://github.com/PowerShell/PowerShell/releases/download/v7.2.2/PowerShell-7.2.2-win-x64.msi">https://github.com/PowerShell/PowerShell/Preview/download/v7.3.0-Preview3/PowerShell-7.3.0-Preview3-win-x64.msi</a>"
$dest = "c:\users\mreinders\downloads\PowerShell-7.3.0-Preview3.MSI"

# Username and password
$username = 'mreinders'
$password = 'PleaseLetMeIn'

# Convert to a SecureString
$secPassword = ConvertTo-SecureString $password -AsPlainText -Force

# Create a Credential Object
$credObject = New-Object System.Management.Automation.PSCredential ($username, $secPassword)

# Download file
Invoke-WebRequest -Uri $url -OutFile $dest -Credential $credObject

Downloading and extracting .zip files automatically

Let’s see another example of how PowerShell can assist you with automation. We can use some more variables and a COM object to download a .ZIP file and then extract its contents to a location we specify. Let’s do this!

There’s a sample .ZIP file stored up on GitHub. We’ll store that in our $url variable. We’ll create another variable for our temporary ZIP file. Then, we’ll store the path to where the ZIP file will be extracted in a third variable.

$url = "https://ift.tt/S9zC5Dd"
$zipfile = "c:\users\mreinders\downloads\" + $(Split-Path -Path $url -Leaf)
$extractpath = "c:\users\mreinders\downloads\Unzip"

Invoke-WebRequest -Uri $url -OutFile $zipfile
Using Invoke-WebRequest to download a ZIP file in preparation for extracting

Now, let’s use the COM object to extract the ZIP file to our destination folder.

# Create the COM Object instance
$objShell = New-Object -ComObject Shell.Application

# Extract the Files from the ZIP file
$extractedFiles = $ObjShell.NameSpace($zipFile).Items()

# Copy the new extracted files to the destination folder
$ObjShell.NameSpace($extractPath).CopyHere($extractedFiles)
Using a COM object to extract the contents of the ZIP file
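As an aside, if you are on PowerShell 5.0 or later, the built-in Expand-Archive cmdlet achieves the same result without a COM object:

# Extract the downloaded ZIP file with the built-in cmdlet
Expand-Archive -Path $zipfile -DestinationPath $extractpath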

With the Start-BitsTransfer cmdlet

Now, let’s see if we can speed up file transfers with PowerShell. For that, we’ll utilize the aforementioned Background Intelligent Transfer Service. This is especially helpful as the BITS service lets you resume downloads after network or Internet interruptions.

We can use similar variables and see how long it takes to download our 102 MB PowerShell MSI installer:

$url = "https://github.com/PowerShell/PowerShell/releases/download/v7.2.2/PowerShell-7.2.2-win-x64.msi"
$destination = "c:\users\mreinders\downloads\"

Start-BitsTransfer -Source $url -Destination $destination
We downloaded the MSI file in a flash using Background Intelligent Transfer Service (BITS)!

Ok, that went MUCH faster and finished in about 4 seconds. 🙂 The power of BITS!

Downloading multiple files with Start-BitsTransfer

To close out this post, let me show you how you can download multiple files with the Start-BitsTransfer cmdlet.

There are many websites that store sample data for training and educational programs. I found one that includes a simple list of files – http://speed.transip.nl.

We’ll parse the page and store the file links in a variable, then start simultaneous asynchronous downloads of the files. Finally, we run the Complete-BitsTransfer command to convert all the downloaded TMP files to their actual filenames.

$url = "http://speed.transip.nl"
$content = Invoke-WebRequest -URI "http://speed.transip.nl"

$randomBinFiles = $content.links | where {$_.innerHTML -like 'random*'} | select href
# Create links for each file entry
$randomBinFiles.foreach( { $_.href = $url + "/" + $_.href })

# Download the files in the background
$randomBinFiles.foreach({
    Start-BitsTransfer ($url + "/" + $_.href) -Asynchronous
})

# Close the transfers and convert from TMP to real file names
Get-BitsTransfer | Complete-BitsTransfer

Conclusion

Well, as long as you have an exact source URL, downloading files with PowerShell is pretty easy. I can see where it would be very handy, especially on GitHub if you don’t have access to Visual Studio to merge or download something to your machine. If you’d like to see any additional examples, please leave a comment below!

Use Azure ExpressRoute Private Peering & Azure Virtual WAN to Connect Privately to Microsoft 365

The content below is taken from the original ( Use Azure ExpressRoute Private Peering & Azure Virtual WAN to Connect Privately to Microsoft 365), to continue reading please visit the site. Remember to respect the Author & Copyright.

Many Office 365 customers want to use Azure ExpressRoute to connect their on-premises network to the Microsoft cloud with a private connection. As you may know, though, Microsoft does not recommend using Azure ExpressRoute with Microsoft Peering to connect to Office 365.

There are several reasons for that, let me point out a few of them:

  • Implementing Azure ExpressRoute with Microsoft Peering for Microsoft 365 requires a highly complex routing configuration.
  • It requires the use of public IP addresses that customers own for the peering.
  • Azure ExpressRoute normally works against the Microsoft global edge network distribution policy and breaks redundancy, as an ExpressRoute circuit is only deployed within one location.
  • Egress costs have a high impact on Azure consumption; when using Microsoft Teams, you will have high egress data volumes.
  • Cost and scalability are usually not comparable to premium Internet connections.

You can get an overview of the different ExpressRoute circuits in the chart below, where “Microsoft Edge” describes the edge routers on the Microsoft side of the ExpressRoute circuit:

The different Azure ExpressRoute circuits

Why you may want to use Azure ExpressRoute to connect to Microsoft 365

There may be various customer scenarios where you need to use Azure ExpressRoute with Microsoft Peering enabled to connect to Microsoft 365 services. Here are two examples:

  • A customer is in an area where regular Internet connections are not available to connect to Microsoft 365, such as China.
  • A customer is in a highly-regulated environment.

There is still the option to request Subscription Whitelisting to connect to Microsoft 365 via Azure ExpressRoute, but doing so does not remove the limitations and complexities we’ve highlighted earlier.

However, there’s actually an alternative that enables customers to use Azure ExpressRoute with Microsoft Private Peering all while keeping costs down and enabling redundancy. To accomplish that, we’ll need to use a default behavior from the Microsoft Global Network in combination with some Microsoft services.

Microsoft services traffic is always transported on the Microsoft global network, as explained in the company’s documentation:

Whether connecting from London to Tokyo, or from Washington DC to Los Angeles, network performance is quantified and impacted by things such as latency, jitter, packet loss, and throughput. At Microsoft, we prefer and use direct interconnects as opposed to transit-links, this keeps response traffic symmetric and helps keep hops, peering parties and paths as short and simple as possible.

So, does that mean all traffic when using Microsoft services? Yes, any traffic between data centers, within Microsoft Azure or between Microsoft services such as Virtual Machines, Microsoft 365, Xbox, SQL DBs, Storage, and virtual networks are routed within our global network and never over the public Internet, to ensure optimal performance and integrity.

The technologies required in the solution we mentioned earlier include:

  • Azure ExpressRoute Local
  • Azure Virtual WAN (Secured Virtual Hub)
  • Azure Firewall

I’ll be explaining this solution in greater detail in the next segment.

Solution architecture

The architecture in this solution is quite simple: You need to deploy an Azure Virtual WAN Hub with Azure Firewall to make it secure and to use it as an Internet access point.

Deploying an Azure Virtual WAN Hub with Azure Firewall

Then, you’ll need to deploy an Azure Virtual WAN ExpressRoute gateway into the Virtual WAN hub, connect your ExpressRoute Local circuit to the gateway, and secure your Internet traffic for that ExpressRoute. Doing so will announce a default route (0.0.0.0/0) to your on-premises infrastructure.

Deploying an Azure Virtual WAN ExpressRoute gateway into the virtual WAN connection
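For orientation, here is a rough Azure PowerShell sketch of those deployment steps. The resource names and address prefix are placeholders, and the exact cmdlet parameters (particularly for the ExpressRoute gateway) should be verified against your Az.Network module version:

# Create the Virtual WAN and a hub (names and prefixes are placeholders)
$vwan = New-AzVirtualWan -ResourceGroupName 'demo-rg' -Name 'demo-vwan' -Location 'westeurope'
$hub = New-AzVirtualHub -ResourceGroupName 'demo-rg' -Name 'demo-hub' -Location 'westeurope' -VirtualWan $vwan -AddressPrefix '10.100.0.0/23'
# Add an ExpressRoute gateway to the hub (parameter names assumed; verify before use)
New-AzExpressRouteGateway -ResourceGroupName 'demo-rg' -Name 'demo-ergw' -VirtualHubId $hub.Id -MinScaleUnits 1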

On your on-premises infrastructure, you can now set a static route to point to the gateway. You can also leverage newer software-defined WAN (SD-WAN) or firewall devices to use service-based routing and only send traffic for Microsoft 365 services to our new Azure Secured Virtual WAN hub.

Installing Azure Firewall in a Virtual WAN hub

The diagram below shows what this architecture looks like:

The solution architecture

We still have to deal with the fact that an ExpressRoute circuit is not georedundant, as it is only deployed in one edge co-location. To establish the necessary redundancy, you’ll need to build additional circuits.

Implementing redundancy and global deployment

To implement a highly-available architecture and improve latency for your users, you should distribute additional hubs. I would suggest creating ExpressRoute circuits in different local Azure regions such as Germany Frankfurt and West Europe Amsterdam. Microsoft has a dedicated page where you can find all possible Azure locations, and the company also has detailed documentation explaining how to implement redundancy for Azure ExpressRoute.

You have two options from that point on: The first one is to create two separate circuits connected to two separate Azure Virtual WAN hubs, as shown below.

Two separate circuits connected to two separate Virtual WAN hubs

Another option is to interconnect both Virtual WAN hubs, as shown in the schema below:

We can also implementing redundancy by interconnecting both Virtual WAN hubs

In the case of inter-hub connectivity, you need to disable branch-to-branch connectivity within the virtual WAN Hub properties. Branch-to-branch is currently not supported when using ExpressRoute Local, so you need to disable it on the Virtual WAN Hub level.

We're disabling Branch-to-branch connectivity as it's not supported in the architecture we're using
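If you script this setting with Azure PowerShell, note that branch-to-branch connectivity is exposed as a property of the Virtual WAN resource itself. A minimal sketch with a placeholder name:

# Disable branch-to-branch connectivity on the Virtual WAN
Update-AzVirtualWan -ResourceGroupName 'demo-rg' -Name 'demo-vwan' -AllowBranchToBranchTraffic:$false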

With that architecture, you will get a private, redundant, and high-performance connection to Microsoft 365 services.

I also want to make you aware that Microsoft announced additional security capabilities integrating network virtual appliances into Virtual WAN hubs. You can watch Microsoft’s announcement of that solution on YouTube.

Cost Calculation

In this section, I will provide a short cost calculation for this solution. Please be aware that there are two parts of the ExpressRoute Local Service to take into account:

  • The Microsoft Service costs
  • The data center and/or network service provider costs.

I can only provide you with the Microsoft part of the calculation, as the data center and network service provider costs can vary a lot.

The redundant solution includes the following components:

  • Two Virtual WAN Hubs including Azure Firewall
  • Two Virtual WAN gateways for ExpressRoute
  • Two Azure ExpressRoute local circuits
  • Traffic of around 10 TB per hub per month
Service type | Description | Estimated monthly cost | Estimated upfront cost
Virtual WAN | West Europe, Secured Virtual WAN Hub with Azure Firewall; 730 deployment hours, 10 GB of data processed; connections | $1,806.74 | $0.00
Virtual WAN | Germany West Central, Secured Virtual WAN Hub with Azure Firewall; 730 deployment hours, 10 GB of data processed; connections | $1,806.74 | $0.00
Azure ExpressRoute | ExpressRoute, Zone 1, Local; 1 Gbps circuit x 1 circuit | $1,200.00 | $0.00
Azure ExpressRoute | ExpressRoute, Zone 1, Local; 1 Gbps circuit x 1 circuit | $1,200.00 | $0.00
Support | Included | $0.00 | $0.00
Total | | $6,013.48 | $0.00

You can check out my cost calculation on Microsoft’s Azure website; feel free to use it as an example for your own calculations.

Conclusion

As you can see, it is still possible to use Azure ExpressRoute to connect privately to Microsoft 365 and other Microsoft cloud services, but it comes at a price. If you require additional security and don’t want a solution that routes through the Internet or other carrier networks, you can leverage this solution.

IT provider Stone Group ramps up fight against electronic waste

The content below is taken from the original ( IT provider Stone Group ramps up fight against electronic waste), to continue reading please visit the site. Remember to respect the Author & Copyright.

IT provider Stone Group ramps up fight against electronic waste

Stone Group has revealed that over half a million items of unwanted tech hardware have been saved from landfill due to its app.

The circular IT provider says its Stone 360 app has been downloaded by 11,000 businesses and helps organisations arrange the responsible disposal of unwanted IT assets at the touch of a screen.

Any used hardware including monitors, laptops, desktops, printers, and servers that cannot be refurbished are fully broken down to their core components and recycled.

From April, Stone’s IT asset disposal (ITAD) facility will operate 24×7 to keep up with demand for its recycling services.

Stone Group will also soon be launching the second iteration of the Stone 360 app and said the latest version will help organisations “meet important regulations on electronic waste disposal”.

It will help users classify any items that contain harmful substances and identify those that can be successfully refurbished.

On release, the new version of the Stone 360 app will be available for free download on both iOS and Android devices.

Craig Campion, director of ITAD sales at Stone Group said: “We all need to do more to protect our planet, but unfortunately more and more electronic waste is being created every day and recycling levels are just not keeping pace.

“As a provider of IT to the public and private sector, we are committed to playing a significant role in helping organisations dispose of their end-of-life IT in the right way. The Stone 360 app has been a revolutionary in helping our customers increase their recycling efforts by enabling quick and easy collections and responsible disposal of unwanted items.   

“We’ve recently seen our multi-award-winning app reach over 3,000 businesses with a workforce of over four million, the majority of whom will use some form of IT hardware. We are aiming to at least double the reach of the Stone 360 app this year and we anticipate that the addition of our new functionality to help organisations comply with Government legislation on IT disposal will drive this.”

 

Vaccinating a nation: Vaccination app delivery in 30 days with Cloud Spanner

The content below is taken from the original ( Vaccinating a nation: Vaccination app delivery in 30 days with Cloud Spanner), to continue reading please visit the site. Remember to respect the Author & Copyright.

As the most specialized provider of cloud computing solutions in Poland, Chmura Krajowa (OChK) works to accelerate the digital transformation of Polish businesses and public institutions. In November 2020, the Polish government handed us a formidable challenge: starting from scratch and within 30 days, design and deploy an application to help vaccinate every citizen in Poland against COVID-19. Using Google Cloud products like Cloud Spanner to power the application, we met our goal, and the citizens of Poland are now better protected from the coronavirus pandemic. 

Defining the challenge

We were under considerable pressure to deliver an application that worked as expected and ran without errors or downtime, all with citizens, the government, and the media watching. 

Because the business requirements of the vaccination programme kept evolving in response to the changing situation, the system had to be modified with particular agility. The time pressure was exceptionally high – changes and new functionalities were implemented within hours.

The solution required three systems in one platform:

  • One for 100,000 medical workers at 9,000 vaccination sites to run their own site-specific calendars and manage vaccination schedules. This was a complex undertaking, as vaccinations often required multiple doses, and people could choose different vaccines depending on their age and the legislation in force at the given moment.

  • One for call centers where live operators could schedule appointments for callers. At the peak, about 2,000 operators worked 24/7 in shifts to field calls from across Poland.

  • One for 36 million eligible users to go online via the web, a mobile device, or an SMS gateway and interactive voice response (IVR) to schedule their own appointments using a country-wide authentication scheme and to access follow-up care and resources.

The scalability requirements for the third system, serving 36 million eligible users, were unpredictable. The government could plan for the number and behavior of trained medical workers and call center operators, but not for the number and behavior of citizens scheduling their own appointments at the peak of a worldwide pandemic. This became more challenging as the vaccine roll-out progressed and more people became eligible, including those with greater familiarity with the internet and technology. On days when eligibility widened, huge numbers of citizens wanted to be among the first to get their vaccinations from a limited set of available calendar slots.

Partnering with Google Cloud

To succeed within such a narrow timeframe, we needed to collaborate with a capable and experienced cloud provider. Given our experience working with Google on earlier projects for the Ministry of Health, Google Cloud was the clear and easy choice, as it allowed us to focus on the project details while entrusting virtually all infrastructure and scalability needs to Google.

Using the Google-native Go programming language, we employed a wide range of Google Cloud solutions in building the application, including Google Kubernetes Engine (GKE) for backend services, like scheduling vaccination site staff, as well as external services and web requests.

Spanner played a key role in the application’s architecture, meeting the project’s most critical needs:

  • High availability to avoid downtime due to maintenance.

  • Strong consistency due to the transactional nature of the reservation system, so that everyone has the same view of the data at a given point in time.

  • Horizontal scalability to meet the demands of a program designed to be used by millions of citizens.

Of these three needs, scalability was the most important. We felt secure knowing that during peak traffic, we would not have to worry about the database scalability. 

Architecting the solution

The Spanner database schema consists of 30 tables, some of which are particularly crucial, like the table used by every vaccination site to define its calendar and specify appointment slots. The Ministry of Health workers at the vaccination sites are responsible for generating slots for a specified time span, using the back office system we provided them. The amount of data in the table grew quickly to hundreds of millions of rows, with thousands of rows added per second.

Integrating the suite of Google Cloud data solutions

Other Google products in the application stack include Memorystore for Redis for caching, storing user sessions and rate limiting. While it’s unfeasible to cache the available slots because they change so quickly, it’s easier to cache empty search results for given criteria, so if a user is looking for a certain location where there are no slots at a given time, this information can be temporarily cached to speed up the response. 

The stack also uses Pub/Sub for asynchronous messaging and Firestore for maintaining application configuration. For reporting, we run a Dataflow job that mirrors the Spanner database into BigQuery, and then another job mirroring data into the Ministry of Health’s internal data warehouse.

For day-to-day internal reporting, we use Data Studio connected directly to BigQuery. The Data Studio reports are also used by external parties responsible for creating vaccination strategies to help manage the availability of vaccines and allocate resources.

Meeting the challenge 

We definitely see this project as a success. We deployed and managed it in record time without any major errors or outages, our client is satisfied, and most Polish citizens have now been vaccinated using our system either directly or indirectly. Projects don’t get any more exciting or challenging, and this one directly benefited the public health of our nation, which is extremely important to us. With Google Cloud products like Spanner, GKE, Pub/Sub, Dataflow, and BigQuery underpinning this critical application, we were able to deliver on our promise.


Learn more about Chmura Krajowa (OChK) and Cloud Spanner.

Related Article

COLOPL, Minna Bank and 7-Eleven Japan use Cloud Spanner to solve digital transformation challenges

COLOPL, Minna Bank and 7-Eleven Japan use Cloud Spanner to solve their scalability, performance and digital transformation challenges.

Read Article

AWS Partner Network (APN) – 10 Years and Going Strong

The content below is taken from the original ( AWS Partner Network (APN) – 10 Years and Going Strong), to continue reading please visit the site. Remember to respect the Author & Copyright.

Ten years ago we launched AWS Partner Network (APN) in beta form for our partners and our customers. In his post for the beta launch, my then-colleague Jinesh Varia noted that:

Partners are an integral part of the AWS ecosystem as they enable customers and help them scale their business globally. Some of our greatest wins, particularly with enterprises, have been influenced by our Partners.

A decade later, as our customers work toward digital transformation, their needs are becoming more complex. As part of their transformation they are looking for innovation and differentiating solutions, and they routinely ask us to refer them to partners with the right skills and the specialized capabilities that will help them make the best use of AWS services.

The partners, in turn, are stepping up to the challenge and driving innovation on behalf of their customers in ways that transform multiple industries. This includes migration of workloads, modernization of existing code & architectures, and the development of cloud-native applications.

Thank You, Partners
AWS Partners all around the world are doing amazing work! Integrators like Presidio in the US, NEC in Japan, Versent in Australia, T-Systems International in Germany, and Compasso UOL in Latin America are delivering some exemplary transformations on AWS. On the product side, companies like Megazone Cloud (Asia/Pacific) are partnering with global ISVs such as Databricks, Datadog, and New Relic to help them go to market. Many other ISV Partners are working to reinvent their offerings in order to take advantage of specific AWS services and features. The list of such partners is long, and includes Infor, VTEX, and Iron Mountain, to name a few.

In 2021, AWS and our partners worked together to address hundreds of thousands of customer opportunities. Partners like Snowflake, logz.io, and Confluent have told us that AWS Partner program such as ISV Accelerate and AWS Global Startup Program are having a measurable impact on their businesses.

These are just a few examples (we have many more success stories), but the overall trend should be pretty clear — transformation is essential, and AWS Partners are ready, willing, and able to make it happen.

As part of our celebration of this important anniversary, the APN Blog will be sharing a series of success stories that focus on partner-driven customer transformation!

A Decade of Partner-Driven Innovation and Evolution
We launched APN in 2012 with a few hundred partners. Today, AWS customers can choose offerings from more than 100,000 partners in more than 150 countries.

A lot of this growth can be traced back to our first Leadership Principle, Customer Obsession. Most of our services and major features have their origins in customer requests, and APN is no different: we build programs that are designed to meet specific, expressed needs of our customers. Today, we continue to seek out and listen to partner feedback, using it to innovate, to experiment, and to get improvements to market as quickly as possible.

Let’s take a quick trip through history and review some of the most interesting APN milestones of the last decade:

In 2012, we announced the partner type model (Consulting and Technology) when APN came to life. Within each partner type, partners could qualify for one of three tiers (Select, Advanced, and Premier) and earn benefits based on their tier.

In 2013, AWS Partners told us they wanted to stand out in the industry. To allow partners to differentiate their offerings to customers and show expertise in building solutions, we introduced the first two of what is now a very long list of competencies.

In 2014, we launched the AWS Managed Service Provider Program to help customers find partners who can help with migration to the AWS cloud, along with the AWS SaaS Factory program to support partners looking to build and accelerate delivery of SaaS (Software as a Service) solutions on behalf of their customers. We also launched the APN Blog channel to bring partner and customer success stories on AWS to life. Today, the APN Blog is one of the most popular blogs at AWS.

Next, in 2016, customers started asking us where to go when looking for a partner that could help design, migrate, manage, and optimize their workloads on AWS, or for partner-built tools that could help them achieve their goals. To help them more easily find the right partner and solution for their specific business needs, we launched the AWS Partner Solutions Finder, a new website where customers could search for, discover, and connect with AWS Partners.

In 2017, to allow partners to showcase their earned AWS designations to customers, we introduced the Badge Manager. This dynamic tool allows partners to build customized AWS Partner branded badges to highlight their success with AWS.

In 2018, we launched several new programs and features to help our partners gain AWS expertise and promote their offerings to customers, including the AWS Device Qualification Program, the AWS Well-Architected Partner Program, and several competencies.

In 2019, for mid-to-late stage startups seeking support with product development, go-to-market and co-sell, we launched the AWS Global Startup program. We also launched the AWS Service Ready Program to help customers find validated partner products that work with AWS services.

Next, in 2020, to help organizations co-sell, drive new business, and accelerate sales cycles, we launched the AWS ISV Accelerate program.

In 2021 our partners told us that they needed more (and faster) ways to work with AWS so that they could meet the ever-growing needs of their customers. We launched AWS Partner Paths in order to accelerate partner engagement with AWS.

Partner Paths replace the technology and consulting partner type models, evolving to an offering type model. We now offer five Partner Paths—Software Path, Hardware Path, Training Path, Distribution Path, and Services Path—which represent consulting, professional, managed, or value-add resale services. This new framework provides a curated journey through partner resources, benefits, and programs.

Looking Ahead
As I mentioned earlier, Customer Obsession is central to everything that we do at AWS. We see partners as our customers, and we continue to obsess over ways to make it easier and more efficient for them to work with us. For example, we continue to focus on partner specialization, and we have developed a deep understanding of the ways our customers find it valuable.

Our goal is to empower partners with tools that make it easy for them to navigate through our selection of enablement resources, benefits, and programs and find those that help them to showcase their customer expertise and to get-to-market with AWS faster than ever. The new AWS Partner Central (login required) and AWS Partner Marketing Central experiences that we launched earlier this year are part of this focus.

To wrap up, I would like to once again thank our partner community for all of their support and feedback. We will continue to listen, learn, innovate, and work together with you to invent the future!

Jeff;

Microsoft Simplifies IT Monitoring with New Azure Managed Grafana Service

The content below is taken from the original ( Microsoft Simplifies IT Monitoring with New Azure Managed Grafana Service), to continue reading please visit the site. Remember to respect the Author & Copyright.

Last year, Microsoft unveiled its plans to create a fully-managed version of Grafana that runs natively on its Azure cloud platform. Now, the Redmond giant has announced that the new Azure Managed Grafana service is now available in public preview.

Grafana is basically an open-source platform that enables organizations to visualize multiple types of reliability data in a single dashboard. It provides graphs, charts, and alerts that simplify the task of detecting technical issues in business environments. Previously, enterprise customers used the self-managed open-source product to deploy Grafana on Azure.

The new Azure Managed Grafana enables organizations to access the platform without managing the underlying infrastructure. It helps IT Admins detect technical issues across on-premises and Azure environments, as well as other cloud platforms.

“Grafana helps you bring together metrics, logs and traces into a single user interface. With its extensive support for data sources and graphing capabilities, you can view and analyze your application and infrastructure telemetry data in real-time,” Microsoft explained in a support document.
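
To make that “single user interface” point concrete, here is a minimal sketch of pulling one such telemetry stream yourself: the same Azure Monitor platform metrics a Grafana panel would chart. The resource ID is a placeholder, and the sketch assumes the azure-identity and requests packages are installed.

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholder resource ID; any Azure resource that emits platform metrics works.
RESOURCE = ("/subscriptions/<subscription-id>/resourceGroups/demo-rg"
            "/providers/Microsoft.Compute/virtualMachines/demo-vm")

# Azure Monitor's REST metrics endpoint is one of the data sources an
# Azure Managed Grafana dashboard visualizes.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
resp = requests.get(
    f"https://management.azure.com{RESOURCE}/providers/Microsoft.Insights/metrics",
    params={"api-version": "2018-01-01", "metricnames": "Percentage CPU"},
    headers={"Authorization": f"Bearer {token.token}"},
)
print(resp.json())
```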


Azure Monitor gets new Grafana integrations

In addition to the new service, Microsoft has announced some new Grafana integrations with Azure Monitor. It is now possible to quickly pin Azure Monitor visualizations from Azure Portal to new and existing Grafana dashboards.

Moreover, the new Azure Managed Grafana service has built-in support for Azure Data Explorer, a real-time data analytics and data exploration service for large volumes of streaming data. With this service, customers can view the telemetry data of connected devices right from the Grafana dashboard.

Microsoft has introduced new “out-of-the-box” Grafana dashboards that make it easier for customers to visualize data from Azure Monitor. These dashboards come with several built-in features such as Azure Monitor insights, Azure alerts, and much more. This feature should eliminate the need to create data visualizations from scratch.

Lastly, Microsoft highlights that the new service also integrates with Azure Active Directory. It allows organizations to easily manage user permissions and access control via Azure Active Directory identities. This integration should make it easier for IT Admins to secure their Azure Managed Grafana deployments.

Microsoft is offering a free 30-day trial of its Azure Managed Grafana service, and you can find more details on the official website.

Departing Space Force chief architect likens Pentagon’s tech acquisition to a BSoD

The content below is taken from the original ( Departing Space Force chief architect likens Pentagon’s tech acquisition to a BSoD), to continue reading please visit the site. Remember to respect the Author & Copyright.

US military must ‘ride the wave of commercial innovation … or risk drowning under its own weight’

The outgoing first chief architect officer of the US Air and Space Force urged the Pentagon to lay off wasting time building everything itself, and use commercial kit if available and appropriate to upgrade its technological capabilities quickly.…

Microsoft Authenticator now lets you generate strong passwords

The content below is taken from the original ( Microsoft Authenticator now lets you generate strong passwords), to continue reading please visit the site. Remember to respect the Author & Copyright.

New: ExtPassword! v1.00

The content below is taken from the original ( New: ExtPassword! v1.00), to continue reading please visit the site. Remember to respect the Author & Copyright.

ExtPassword! is a tool for Windows that allows you to recover passwords stored on an external drive plugged into your computer.
ExtPassword! can decrypt and extract multiple types of passwords and essential information, including passwords of common web browsers, passwords of common email software, dialup/VPN passwords, wireless network keys, Windows network credentials, the Windows product key, and Windows security questions.
This tool might be useful if you have a disk with a Windows operating system that can no longer boot, but most files on the hard drive are still accessible and you need to extract your passwords from it.

Microsoft expands cybersecurity skills training to 23 new countries

The content below is taken from the original ( Microsoft expands cybersecurity skills training to 23 new countries), to continue reading please visit the site. Remember to respect the Author & Copyright.

How to Install Windows Server 2022 Step by Step

The content below is taken from the original ( How to Install Windows Server 2022 Step by Step), to continue reading please visit the site. Remember to respect the Author & Copyright.

What Is Windows Server 2022?

Windows Server 2022 is Microsoft’s latest version of Windows Server in the Long-Term Servicing Channel (LTSC). Microsoft releases new versions on this channel roughly every three years.

The most recent version before Windows Server 2022 was Windows Server 2019. These releases receive ten full years of technical support from Microsoft via Mainstream support (through 10/13/2026) and Extended support (through 10/14/2031).

Over the past few years, the Windows Server team also shipped Windows Server Core releases with new features through the Semi-Annual Channel (SAC). The last SAC release was Windows Server, version 20H2.

These releases are supported for 18 months, so after August 9, 2022, Microsoft will no longer offer any support for the Semi-Annual Channel of Windows Server.

What are the new features in Windows Server 2022?

Windows Server 2022 is built on the strong foundation of Windows Server 2019 and brings several innovations around three pillars: security, Azure hybrid integration and management, and application platform enhancements. Let’s go through some of the more substantial areas of improvement and innovation.

Secured-core server

A Secured-core server uses firmware, hardware, and driver capabilities to enable advanced security features for Windows Server. The overall design goal is to provide additional security protections that are useful against sophisticated and coordinated attacks.

Transport: HTTPS and TLS 1.3 enabled by default

Secure connections are at the heart of today’s systems on your network and the Internet. Transport Layer Security (TLS) 1.3 is the latest and most secure version of the Internet’s most deployed security protocol.

With HTTPS and TLS 1.3 enabled by default, protecting the data of clients connecting to the server is more streamlined and inherently automatic. To learn more about verifying your applications and services are ready for TLS 1.3, please visit Microsoft’s Security Blog.
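
As a quick way to see what this means in practice, the short standard-library sketch below (the hostname is a placeholder for a server you operate) connects over TLS and prints the version that was actually negotiated; seeing 'TLSv1.3' confirms the whole path supports it.

```python
import socket
import ssl

HOST = "example.com"  # placeholder: use a server you operate

# Open a TLS connection and report the negotiated protocol version.
ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        print(tls.version())  # prints 'TLSv1.3' when both ends support it
```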

Azure Arc enabled Windows Servers

Azure Arc enabled servers with Windows Server 2022 bring on-premises and multi-cloud Windows servers to Azure. The management experience is designed to be consistent whether you’re managing Azure virtual machines or hybrid Windows Server 2022 in your datacenters.

Application platform

There are many platform improvements for Windows Containers. The most impactful enhancements include application compatibility and the Windows Container experience with Kubernetes.

A welcome optimization effort reduced the container image footprint by up to 40%, affording IT pros up to 30% faster startup times and better overall performance.

You can now run applications that depend on Azure Active Directory with group Managed Service Accounts (gMSA) without domain-joining the container host. In addition, Windows Containers now support the Microsoft Distributed Transaction Coordinator (MSDTC) and Microsoft Message Queuing (MSMQ).

Kubernetes also receives some welcome enhancements, including support for host-process containers for node configuration, IPv6, and consistent network policy implementation with Calico.

Microsoft Edge

For the first time in a long time, Internet Explorer is being replaced with Microsoft Edge as the default browser in Windows Server! However, the Internet Explorer application is still included for legacy compatibility.

Prerequisites

Hardware requirements

Because of the highly diverse scope of potential Windows Server deployments, these guidelines should be considered when planning your installations and scenarios for Windows Server 2022. They are most pertinent when installing on a physical server, and they generally apply to both the Server Core and Server with Desktop Experience installation options. A rough scripted pre-flight check follows the list.

  1. Processor
    • 1.4 GHz 64-bit processor
    • Compatible with x64 instruction set
    • Supports NX and DEP
    • Supports CMPXCHG16b, LAHF/SAHF, and PrefetchW
    • Supports Second Level Address Translation (EPT or NPT)
  2. Memory (RAM)
    • 512 MB (2 GB for Server with Desktop Experience)
    • ECC (Error Correcting Code) or similar technology for physical deployments
  3. Storage Controller and Disk Space
    • PCI Express architecture specification storage controller
    • 32 GB disk space (minimum for Server Core and IIS Role installed)
  4. Network requirements (adapter)
    • Ethernet adapter capable of at least 1 Gbps throughput
    • PCI Express architecture
  5. Other requirements
    • DVD Drive (if you intend to install Windows using DVD media)
    • UEFI 2.3.1c-based system and firmware to support Secure Boot
    • Trusted Platform Module (TPM)
    • Graphics device and monitor capable of Super VGA (1024×768) or higher resolution
    • Keyboard, Mouse, Internet Access
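
As promised above, here is a rough pre-flight sketch you could run from an existing Windows install on the target hardware; it is illustrative only, and the thresholds mirror the list above.

```python
import platform
import shutil

# Rough pre-flight sketch (illustrative only); thresholds mirror the
# minimums listed above.
MIN_DISK_GB = 32  # minimum for Server Core with the IIS role installed

total, used, free = shutil.disk_usage("C:\\")
print("64-bit machine:", platform.machine().endswith("64"))
print(f"Free disk on C: {free // 2**30} GB (need >= {MIN_DISK_GB} GB)")
# RAM, TPM, and firmware checks require WMI or vendor tooling and are omitted.
```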

Installation options

There are two installation options for Windows Server 2022 – Server Core installation (recommended by Microsoft) and Server with Desktop Experience. Let’s go through the basics and pros/cons of each.

Server with Desktop Experience vs. Server Core

For the past 30 years or so, versions of Windows Server were installed with a GUI (Graphical User Interface), now called the Desktop Experience. Starting with Windows Server 2008, Microsoft added a new ‘Server Core’ option that removes the GUI/desktop environment from the installation, saving disk space, reducing memory usage, and shrinking the security attack surface, among other enhancements.

The design goals were to give you a leaner Windows Server footprint and have you manage it remotely. Efficiency and security are the primary pros.

The main con of the Server Core option is manageability, but only at first: you can’t use Remote Desktop Protocol (RDP) to log in to the server, install a server role or feature, or run Windows Update from the Control Panel.

However, as you migrate your ‘server management’ methodology to what Microsoft recommends, you’ll probably find it actually works out better in terms of efficiency and ease of use. You can use Windows Admin Center to install roles and features, check how much disk space is free on your C: drive, and even check for updates, install them, and schedule the reboot!

You can check out our separate guide on how to install Windows Server 2022 Core.

How to get installation media

There are several methods to download Windows Server and obtain the ISO file/installation media for Windows Server 2022. Common options include the Microsoft Evaluation Center (for a 180-day evaluation copy), Visual Studio subscription downloads, and the Volume Licensing Service Center for organizations with volume licensing agreements.

Installing Windows Server 2022

OK, enough of my yakking… let’s boogie!

I wrote a series of articles in the summer of 2021 around upgrading my Hyper-V Active Directory lab to Windows Server 2022. For this article, I have created a new Hyper-V virtual machine and will use it to install Windows Server 2022.

After configuring the VM with my installation ISO, I started the VM and pressed a key to boot from the ISO.

Windows Server 2022 Setup – Initial Step

Here, I click Next. And then Install now. (Notice the title bar – instead of saying Windows Server 2016 or Windows Server 2019, it now says ‘Microsoft Server Operating System Setup’).

Click ‘Install now’ to begin

Here, you can optionally enter your product key. This could be a retail key, a Volume License Key (VLK) from your organization, or an evaluation key. Note – You don’t have to enter one now… just click ‘I don’t have a product key’ if you want to handle this post-setup.

Activate Windows Server now or after the installation process is done

Here, depending on which installation ISO you obtained, you’ll choose your product edition and the installation type. I will choose ‘Windows Server 2022 Datacenter (Desktop Experience)’.

Choose what edition and installation option you want for the OS here

Check the box to accept the license terms and click Next.

You should read ALL the license terms before clicking Next – 😉

Next, for the installation type, we’ll go with ‘Custom‘ as we are doing a clean installation with no existing operating system.

Choose between the Upgrade or Custom installation options

You’ll have a single, unallocated partition when doing a clean install, so click Next.

Choosing where to install Windows Server 2022

Now, Setup will copy all the installation files, unpack the image, detect devices, and bring you to your first post-setup task.

Microsoft Server Operating System Setup is proceeding…

Post-setup tasks

The first step is to create the local Administrator password. Out of the box, this will require a complex password, so choose one that’s difficult to crack.

Creating a complex password for the local ‘Administrator’ account

Then, press Ctrl-Alt-Del, or your virtual equivalent, and log in.

The Windows Server 2022 Lock Screen
The first screen after logging into Windows Server 2022

You will notice that Server Manager launches automatically and reminds you to use Windows Admin Center to manage the server.

See? Microsoft recommends you use the Server Core option so you’re not logging into your servers needlessly. If you won’t do that, though, they will at least ask you to use Windows Admin Center to manage the server remotely. Yes, they are persistent…

Quick Tip – If you don’t want Server Manager to launch upon login, click the Manage menu in the upper right corner, then click Server Manager Properties. Next, check the box ‘Do not start Server Manager automatically at logon‘.

How to prevent Server Manager from loading every time you log on to the server

How to configure your network

There are quite a few post-setup tasks to perform, but because our environments are so diverse, it’s best to stick to the core tasks that are the most crucial to get your server on your network as soon as possible.

First, the network: It is highly likely that you will be setting a static IP address for your server. To perform this setup, in the Server Manager, click the ‘Configure this local server‘ link at the top.

The ‘Local Server’ dashboard in Server Manager

Then, click the hyperlink next to your Ethernet adapter (You may very well have more than one).

Network Connections (Control Panel)

Right-click on the adapter and click Properties.

Now, click on ‘Internet Protocol Version 4 (TCP/IPv4)’ and click Properties.

Defining a static IP address for your server’s (first) Ethernet adapter

Go ahead and click ‘Use the following IP address‘ and enter your pertinent information. You and/or your network team or virtualization team should have the appropriate information to enter here.

Click OK and be sure to test and validate network and Internet access (if that is your design intent) before proceeding.
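
If you prefer to script this step rather than click through the dialogs, a minimal sketch using the built-in netsh tool follows; the adapter name and all addresses are placeholders for your own addressing plan.

```python
import subprocess

# Placeholder values: substitute your adapter name and addressing plan.
ADAPTER = "Ethernet"
IP, MASK, GATEWAY = "192.168.1.50", "255.255.255.0", "192.168.1.1"
DNS = "192.168.1.10"

# netsh ships with Windows; these two commands mirror the GUI steps above.
subprocess.run(
    ["netsh", "interface", "ip", "set", "address",
     f"name={ADAPTER}", "static", IP, MASK, GATEWAY],
    check=True,
)
subprocess.run(
    ["netsh", "interface", "ip", "set", "dns",
     f"name={ADAPTER}", "static", DNS],
    check=True,
)
```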

How to install the latest Windows Server updates

Before putting your new server into testing and/or production, you’ll want to run Windows Update to get the latest security fixes, bug fixes, and any new features.

To do this, back in Server Manager, you can click the hyperlinks in the upper-right corner labeled ‘Last installed updates‘ and ‘Windows Update‘. This will open the Settings -> Windows Update menu.

Start -> Settings -> Update & Security -> Windows Update

Click ‘Check for updates’ and you’ll get the latest cumulative updates for Windows Server, .NET Framework, etc.

After Windows Update prompts you to reboot, go ahead and reboot. Then, for good measure, open Windows Update again and click ‘Check for updates’, just in case anything was missed during the first run.

Windows Update downloading and installing the latest patches and updates…

Conclusion

You should now be ready to start installing Windows Server 2022 in a test environment and begin validating how your applications and services work on it.

If you need more information about Windows Server 2022, Microsoft provides a lot of helpful and solid resources about the latest version of its server OS. I recommend you start with the Windows Server documentation website.

Pixelating Text Not a Good Idea

The content below is taken from the original ( Pixelating Text Not a Good Idea), to continue reading please visit the site. Remember to respect the Author & Copyright.

People have gotten much savvier about computer security in the last decade or so. Most people know that sending a document with sensitive information in it is a no-no, so many people try to redact documents with varying levels of success. A common strategy is to replace text with a black box, but you sometimes see sophisticated users pixelate part of an image or document they want to keep private. If you do this for text, be careful. It is possible to unredact pixelated images through software.

It appears that the algorithm is pretty straightforward. It simply guesses letters, pixelates them, and matches the result. You do have to estimate the size of the pixelation, but that’s usually not very hard to do. The code is built using TypeScript and while the process does require a little manual preparation, there’s nothing that seems very difficult or that couldn’t be automated if you were sufficiently motivated.
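
To see why the attack is plausible, here is a tiny independent Python sketch of the same guess-and-match idea (the linked tool is TypeScript; this toy uses PIL’s default font as a stand-in for the real document font):

```python
from PIL import Image, ImageDraw, ImageFont

BLOCK, SIZE = 8, (80, 16)

def pixelate(img, block=BLOCK):
    # Downscale then upscale to reproduce a block-mosaic redaction.
    small = img.resize((img.width // block, img.height // block), Image.BILINEAR)
    return small.resize(img.size, Image.NEAREST)

def render(text):
    # Render candidate text roughly the way the original document might have.
    img = Image.new("L", SIZE, 255)
    ImageDraw.Draw(img).text((2, 2), text, fill=0, font=ImageFont.load_default())
    return img

def distance(a, b):
    return sum(abs(p - q) for p, q in zip(a.getdata(), b.getdata()))

# Stand-in for a leaked pixelated image; in a real attack this is the input.
target = pixelate(render("hunter2"))

# Guess, pixelate, match: keep the candidate whose mosaic is closest.
candidates = ["letmein", "hunter2", "passw0rd"]
print(min(candidates, key=lambda c: distance(pixelate(render(c)), target)))
```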

You don’t see it as often as you used to, but there has been a slew of legal and government scandals where someone redacted a document by putting a black box over a PDF, so the text was hidden when printed but still present in the document. Older word processors often didn’t really delete text, either, if you knew how to look at the files. The Facebook valuation comes to mind. Not to mention that the National Legal and Policy Center was stung by poor redaction techniques.

Generally available: Direct enterprise agreement on Azure Cost Management and Billing

The content below is taken from the original ( Generally available: Direct enterprise agreement on Azure Cost Management and Billing), to continue reading please visit the site. Remember to respect the Author & Copyright.

Manage your enrollment hierarchy, view account usage, and monitor costs directly from the Azure Cost Management and Billing menu in the Azure Portal (for direct enterprise agreement customers on the commercial cloud).
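
For direct EA customers who want the same numbers programmatically, a hedged sketch against the Cost Management query REST API follows. The enrollment ID is a placeholder, the request body shows one common query shape rather than the only one, and the api-version is one published version of this endpoint.

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholder enrollment ID; the billing-account scope is how direct EA
# customers address their enrollment in the Cost Management APIs.
SCOPE = "providers/Microsoft.Billing/billingAccounts/<enrollment-id>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
resp = requests.post(
    f"https://management.azure.com/{SCOPE}/providers/Microsoft.CostManagement/query",
    params={"api-version": "2021-10-01"},
    headers={"Authorization": f"Bearer {token.token}"},
    json={
        "type": "ActualCost",
        "timeframe": "MonthToDate",
        "dataset": {
            "granularity": "Daily",
            "aggregation": {"totalCost": {"name": "Cost", "function": "Sum"}},
        },
    },
)
print(resp.json())
```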