
New pfSense docs: Configuring pfSense Software for Online Gaming

The content below is taken from the original ( New pfSense docs: Configuring pfSense Software for Online Gaming), to continue reading please visit the site. Remember to respect the Author & Copyright.

As of July, there are now pfSense gaming reference configurations for Xbox, PlayStation, Nintendo Switch/Wii, Steam, and Steam Deck, including the NAT and UPnP settings you will need to enable for optimal NAT types.

Learn more:

https://docs.netgate.com/pfsense/en/latest/recipes/games.html

submitted by /u/crashj to r/PFSENSE

Posted on in category News

Sonos Move wigging out while we were out of town. Any ideas?

The content below is taken from the original ( Sonos Move wigging out while we were out of town. Any ideas?), to continue reading please visit the site. Remember to respect the Author & Copyright.

submitted by /u/DeLa_Sun to r/sonos

Posted on in category News

[Script sharing] Find all Office 365 Inbox rules that forwards emails to external users

The content below is taken from the original ( [Script sharing] Find all Office 365 Inbox rules that forwards emails to external users), to continue reading please visit the site. Remember to respect the Author & Copyright.

submitted by /u/Kathiey to r/usefulscripts

Posted on in category News

How to explain what an API is – and why they matter

The content below is taken from the original ( How to explain what an API is – and why they matter), to continue reading please visit the site. Remember to respect the Author & Copyright.

Some of us have used them for decades, some are seeing them for the first time on marketing slides

Systems Approach Explaining what an API is can be surprisingly difficult…

Posted on in category News

New: CredHistView v1.00

The content below is taken from the original ( New: CredHistView v1.00), to continue reading please visit the site. Remember to respect the Author & Copyright.

Every time you change the login password on your system, Windows stores the hashes of the previous password in the CREDHIST file (located in %appdata%\Microsoft\Protect\CREDHIST). This tool allows you to decrypt the CREDHIST file and view the SHA1 and NTLM hashes of all previous passwords you used on your system. In order to decrypt the file, you have to provide your latest login password. You can use this tool to decrypt the CREDHIST file on your currently running system, as well as to decrypt a CREDHIST file stored on an external hard drive.

Posted on in category News

Microsoft Rolls Out Dynamic Administrative Units Support for Azure AD

The content below is taken from the original ( Microsoft Rolls Out Dynamic Administrative Units Support for Azure AD), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft has announced the public preview of dynamic administrative units with Azure Active Directory (Azure AD). The new feature lets organizations configure rules for adding or deleting users and devices in administrative units (AUs).

Azure AD administrative units launched in public preview back in 2020. The feature lets enterprise admins logically divide Azure AD into multiple administrative units. Specifically, an administrative unit is a container that can be used to delegate administrative permissions to a subset of users.


Previously, IT Admins were able to manage the membership of administrative units in their organization manually. The new dynamic administrative units feature now enables IT Admins to specify a rule to automatically perform the addition or deletion of users and devices. However, this capability is currently not available for groups.

The firm also adds that all members of dynamic administrative units are required to have Azure AD Premium P1 licenses. This means that if a company has 1,000 end-users across all dynamic administrative units, it would need to purchase at least 1,000 Azure AD Premium P1 licenses.

“Using administrative units requires an Azure AD Premium P1 license for each administrative unit administrator, and an Azure AD Free license for each administrative unit member. If you are using dynamic membership rules for administrative units, each administrative unit member requires an Azure AD Premium P1 license,” Microsoft noted on a support page.

How to create dynamic membership rules in Azure AD

According to Microsoft, IT Admins can create rules for dynamic administrative units via Azure portal by following these steps:

  1. Select an administrative unit and click on the Properties tab.
  2. Set the Membership Type to Dynamic User or Dynamic Device and click the Add dynamic query option.
  3. Now, use the rule builder to create the dynamic membership rule and click the Save button.
  4. Finally, click the Save button on the Properties page to save the membership changes to the administrative unit.
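
For reference, dynamic administrative units use the same membership rule syntax as dynamic groups. A minimal sketch of what you might type into the rule builder (the department value is just an example):

user.department -eq "Sales"

Rules can combine several properties with the -and and -or operators, just as with dynamic group membership.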

Currently, the dynamic administrative units feature only supports one object type (either users or devices) in the same dynamic administrative unit. Microsoft adds that support for both users and devices is coming in future releases. You can head to the support documentation to learn more about dynamic administrative units.

Posted on in category News

How to Download a File using PowerShell

The content below is taken from the original ( How to Download a File using PowerShell), to continue reading please visit the site. Remember to respect the Author & Copyright.

PowerShell can download files from the Internet and your local network to your computer. Learn how to use PowerShell’s Invoke-WebRequest and Start-BitsTransfer cmdlets to download files here.

Welcome to another post on how PowerShell can assist you in your daily job duties and responsibilities. Being able to download files from the Internet and your local network with PowerShell is something I hadn’t really thought a lot about. But, just thinking about the power and scalability of PowerShell intrigues me to no end.

There are so many possibilities around scripts, downloading multiple files at the same time, auto extracting ZIP files, the list goes on and on. If you ever wanted to download the various Windows patching files from the Windows Update Catalog, you could script it if you have the exact URL.

While a bit tedious at first, you could definitely get your groove on after a little bit of tweaking and learning. But let’s first discuss prerequisites.

Prerequisites

They aren’t stringent. You just need PowerShell 5.1 or newer to use the commands in this post. Windows 10 and Windows 11 already include at least version 5.1. Windows Server 2012/R2 comes with version 4.0.

You can also simply grab the latest and greatest by downloading PowerShell 7.2.x from this link. And, come to think of it, I’ll use this URL later in the article and show you how to download this file… once you have an appropriate version installed. 🙂

Use PowerShell to download a file from a local network source

Let me start by letting you know I’m utilizing my (Hyper-V) Windows Server 2022 Active Directory lab, again. I’ll be running these commands on my Windows 11 client machine.

First, let’s use the Copy-Item cmdlet to download a file from a local fileserver on my LAN. This command at a minimum just needs a source and destination. I have an ISO in my Downloads folder I need to put up on my G: drive. I’ll create two variables for the source folder and the destination folder.

$source = "c:\users\mreinders\downloads\"
$destination = "\\ws16-fs01-core\shares\folder_01\Extra\"

Then, I’ll run the command and include the -Recurse switch to copy the folder AND the ISO file inside it.

Copy-Item -path $source -destination $destination -Recurse
Using the Copy-Item cmdlet to copy an ISO to a fileserver

As you can see, the ISO file was copied to the G: drive.
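
If you’d like to verify the copy from PowerShell instead of File Explorer, a quick sanity check using the same variable:

Get-ChildItem -Path $destination -Recurse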

Use PowerShell to download a file from the Internet

Next, let’s work on downloading files from the Internet. We can start with the Invoke-WebRequest cmdlet.

With the Invoke-WebRequest cmdlet 

As I said earlier, I can show you how to download the MSI file for the latest (as of this writing) PowerShell 7.2.2 (x64) version using Invoke-WebRequest. Again, let’s set up some variables first. We can use the general concept of a source variable and destination variable.

$url = "https://github.com/PowerShell/PowerShell/releases/download/v7.2.2/PowerShell-7.2.2-win-x64.msi"
$dest = "c:\users\mreinders\downloads\Latest_Powershell.MSI"

Invoke-WebRequest -Uri $url -OutFile $dest
Using Invoke-WebRequest to download the latest PowerShell MSI installer from GitHub

This 102 MB file took about 4 or 5 minutes to download, which is quite a bit longer than I would expect. That’s due to the inherent nature of this specific cmdlet. The file is buffered in memory first, then written to disk.

I downloaded the PowerShell MSI file to my Downloads folder

We can get around this inefficiency by using the Background Intelligent Transfer Service (BITS) in Windows. I’ll show you further below how to utilize all your bandwidth.

Cases when downloads require authentication

You will certainly come across files that require authentication before downloading. If this is the case, you can use the -Credential switch on Invoke-WebRequest to handle these downloads.

Let’s say there is a beta or private preview of an upcoming PowerShell version (7.3?) that requires authentication. You can utilize these commands (or create a PowerShell script) to download this hypothetical file.

# Variables
$url = "<a href="https://github.com/PowerShell/PowerShell/releases/download/v7.2.2/PowerShell-7.2.2-win-x64.msi">https://github.com/PowerShell/PowerShell/Preview/download/v7.3.0-Preview3/PowerShell-7.3.0-Preview3-win-x64.msi</a>"
$dest = "c:\users\mreinders\downloads\PowerShell-7.3.0-Preview3.MSI"

# Username and password
$username = 'mreinders'
$password = 'PleaseLetMeIn'

# Convert to a SecureString
$secPassword = ConvertTo-SecureString $password -AsPlainText -Force

# Create a Credential Object
$credObject = New-Object System.Management.Automation.PSCredential ($username, $secPassword)

# Download file
Invoke-WebRequest -Uri $url -OutFile $dest -Credential $credObject
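
A quick note on the example above: hard-coding a plaintext password is fine for a lab but not much else. As an alternative sketch, Get-Credential builds the same PSCredential object by prompting you interactively:

# Prompt for the username and password instead of storing them in the script
$credObject = Get-Credential

Invoke-WebRequest -Uri $url -OutFile $dest -Credential $credObject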

Downloading and extracting .zip files automatically

Let’s see another example of how PowerShell can assist you with automation. We can use some more variables and a COM object to download a .ZIP file and then extract its contents to a location we specify. Let’s do this!

There’s a sample .ZIP file stored up on GitHub. We’ll store that in our $url variable. We’ll create another variable for our temporary ZIP file. Then, we’ll store the path to where the ZIP file will be extracted in a third variable.

$url = "https://ift.tt/S9zC5Dd"
$zipfile = "c:\users\mreinders\downloads\" + $(Split-Path -Path $Url -Leaf)
$extractpath = "c:\users\mreinders\downloads\Unzip"

Invoke-WebRequest -Uri $url -OutFile $zipfile
Using Invoke-WebRequest to download a ZIP file in preparation for extracting

Now, let’s use the COM object to extract the ZIP file to our destination folder.

# Create the COM Object instance
$objShell = New-Object -ComObject Shell.Application

# Extract the files from the ZIP file
$extractedFiles = $objShell.NameSpace($zipfile).Items()

# Copy the extracted files to the destination folder (the folder must already exist)
$objShell.NameSpace($extractpath).CopyHere($extractedFiles)
Using a COM object to extract the contents of the ZIP file

With the Start-BitsTransfer cmdlet

Now, let’s see if we can speed up file transfers with PowerShell. For that, we’ll utilize the aforementioned Background Intelligent Transfer Service. This is especially helpful as the BITS service lets you resume downloads after network or Internet interruptions.

We can use similar variables and see how long it takes to download our 102 MB PowerShell MSI installer:

$url = "https://github.com/PowerShell/PowerShell/releases/download/v7.2.2/PowerShell-7.2.2-win-x64.msi"
$destination = "c:\users\mreinders\downloads\"

Start-BitsTransfer -Source $url -Destination $destination
We downloaded the MSI file in a flash using Background Intelligent Transfer Service (BITS)!

Ok, that went MUCH faster and finished in about 4 seconds. 🙂 The power of BITS!
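
If you want to time the two approaches yourself, Measure-Command wraps either download; for example:

Measure-Command { Start-BitsTransfer -Source $url -Destination $destination }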

Downloading multiple files with Start-BitsTransfer

To close out this post, let me show you how you can download multiple files with the Start-BitsTransfer cmdlet.

Many websites host sample data for training and educational programs. I found one that serves a simple list of files – http://speed.transip.nl.

We’ll parse and store the files in a variable, then start simultaneous downloads of the files asynchronously. Finally, we run the Complete-BitsTransfer command to convert all the TMP files downloaded to their actual filenames.

$url = "http://speed.transip.nl"
$content = Invoke-WebRequest -URI "http://speed.transip.nl"

$randomBinFiles = $content.links | where {$_.innerHTML -like 'random*'} | select href
# Build the full download link for each file entry
$randomBinFiles.foreach( { $_.href = $url + "/" + $_.href })

# Download the files in the background (the links already include the base URL)
$randomBinFiles.foreach({
    Start-BitsTransfer -Source $_.href -Asynchronous
})
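
# (Optional) Inspect the state of the asynchronous jobs before completing them;
# JobState shows Transferring while in progress and Transferred when done
Get-BitsTransfer | Format-Table DisplayName, JobState, BytesTransferred, BytesTotal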

# Close the transfers and convert from TMP to real file names
Get-BitsTransfer | Complete-BitsTransfer

Conclusion

Well, as long as you have an exact source URL, downloading files with PowerShell is pretty easy. I can see where it would be very handy, especially on GitHub if you don’t have access to Visual Studio to merge or download something to your machine. If you’d like to see any additional examples, please leave a comment below!

Posted on in category News

Use Azure ExpressRoute Private Peering & Azure Virtual WAN to Connect Privately to Microsoft 365

The content below is taken from the original ( Use Azure ExpressRoute Private Peering & Azure Virtual WAN to Connect Privately to Microsoft 365), to continue reading please visit the site. Remember to respect the Author & Copyright.

Many Office 365 customers want to use Azure ExpressRoute to connect their on-premises network to the Microsoft cloud with a private connection. As you may know, though, Microsoft does not recommend using Azure ExpressRoute with Microsoft Peering to connect to Office 365.

There are several reasons for that; let me point out a few of them:

You can get an overview of the different ExpressRoute circuits in the chart below, where “Microsoft Edge” describes the edge routers on the Microsoft side of the ExpressRoute circuit:

The different Azure ExpressRoute circuits

Why you may want to use Azure ExpressRoute to connect to Microsoft 365

There may be various customer scenarios where you need to use Azure ExpressRoute with Microsoft Peering enabled to connect to Microsoft 365 services. Here are two examples:

There is still the option to request Subscription Whitelisting to connect to Microsoft 365 via Azure ExpressRoute, but doing so does not remove the limitations and complexities we’ve highlighted earlier.

However, there’s actually an alternative that enables customers to use Azure ExpressRoute with private peering, all while keeping costs down and enabling redundancy. To accomplish that, we’ll need to use a default behavior of the Microsoft global network in combination with some Microsoft services.

Microsoft services traffic is always transported on the Microsoft global network, as explained in the company’s documentation:

Whether connecting from London to Tokyo, or from Washington DC to Los Angeles, network performance is quantified and impacted by things such as latency, jitter, packet loss, and throughput. At Microsoft, we prefer and use direct interconnects as opposed to transit-links, this keeps response traffic symmetric and helps keep hops, peering parties and paths as short and simple as possible.

So, does that mean all traffic when using Microsoft services? Yes, any traffic between data centers, within Microsoft Azure or between Microsoft services such as Virtual Machines, Microsoft 365, Xbox, SQL DBs, Storage, and virtual networks are routed within our global network and never over the public Internet, to ensure optimal performance and integrity.

The technologies required in the solution we mentioned earlier include:

I’ll be explaining this solution in greater detail in the next segment.

Solution architecture

The architecture in this solution is quite simple: You need to deploy an Azure Virtual WAN Hub with Azure Firewall to make it secure and to use it as an Internet access point.

Deploying an Azure Virtual WAN Hub with Azure Firewall

Then, you’ll need to deploy an Azure Virtual WAN ExpressRoute gateway into the virtual WAN connection, connect your ExpressRoute Local circuit to the gateway, and secure Internet traffic for that ExpressRoute connection. Doing so will announce a default route (0.0.0.0/0) to your on-premises infrastructure.

Deploying an Azure Virtual WAN ExpressRoute gateway into the virtual WAN connection

On your on-premises infrastructure, you can now set a static route to point to the gateway. You can also leverage newer software-defined WAN (SD-WAN) or firewall devices to use service-based routing and only send traffic for Microsoft 365 services to our new Azure secured Virtual WAN hub.

Installing Azure Firewall in a Virtual WAN hub

The diagram below shows what this architecture looks like:

The solution architecture

We still have to deal with the fact that an ExpressRoute circuit is not georedundant as it is only deployed in one Edge co-location. To establish the necessary redundancy, you’ll need to build additional circuits.

Implementing redundancy and global deployment

To implement a highly-available architecture and improve latency for your users, you should distribute additional hubs. I would suggest creating ExpressRoute circuits in different local Azure regions such as Germany Frankfurt and West Europe Amsterdam. Microsoft has a dedicated page where you can find all possible Azure locations, and the company also has detailed documentation explaining how to implement redundancy for Azure ExpressRoute.

You have two options from that point on: The first one is to create two separate circuits connected to two separate Azure Virtual WAN hubs, as shown below.

Two separate circuits connected to two separate Virtual WAN hubs

Another option is to interconnect both Virtual WAN hubs, as shown in the schema below:

We can also implement redundancy by interconnecting both Virtual WAN hubs

In the case of inter-hub connectivity, you need to disable branch-to-branch connectivity within the virtual WAN Hub properties. Branch-to-branch is currently not supported when using ExpressRoute Local, so you need to disable it on the Virtual WAN Hub level.

We're disabling Branch-to-branch connectivity as it's not supported in the architecture we're using

With that architecture, you will get a private, redundant, and high-performance connection to Microsoft 365 services.

I also want to make you aware that Microsoft announced additional security capabilities integrating network virtual appliances into virtual WANs. You can watch the announcement from Microsoft for that solution on YouTube.

Cost Calculation

In this section, I will provide a short cost calculation for this solution. Please be aware that there are two parts of the ExpressRoute Local Service to take into account:

I can only provide you with the Microsoft part of the calculation, as the data center and network service provider costs can vary a lot.

The redundant solution includes the following components:

Service type Description Estimated monthly cost Estimated upfront cost
Virtual WAN West Europe, Secured Virtual WAN Hub with Azure Firewall; 730 Deployment hours, 10 GB of data processed; Connections $1.806,74 $0,00
Virtual WAN Germany West Central, Secured Virtual WAN Hub with Azure Firewall; 730 Deployment hours, 10 GB of data processed; Connections $1.806,74 $0,00
Azure ExpressRoute ExpressRoute, Zone 1, Local; 1 Gbps Circuit x 1 circuit $1.200,00 $0,00
Azure ExpressRoute ExpressRoute, Zone 1, Local; 1 Gbps Circuit x 1 circuit $1.200,00 $0,00
Support Included $0,00 $0,00
Total $6.013,48 $0,00

You can check out my cost calculation on Microsoft’s Azure website, feel free to use it as an example for your own calculations.

Conclusion

As you can see, it is still possible to use Azure ExpressRoute to connect privately to Microsoft 365 and other Microsoft Cloud services, but it comes at a price. If you require additional security and don’t want a solution that allows routing through the Internet or other carrier networks, you could leverage that solution.

Posted on in category News

IT provider Stone Group ramps up fight against electronic waste

The content below is taken from the original ( IT provider Stone Group ramps up fight against electronic waste), to continue reading please visit the site. Remember to respect the Author & Copyright.


Stone Group has revealed that over half a million items of unwanted tech hardware have been saved from landfill due to its app.

The circular IT provider says its Stone 360 app has been downloaded by 11,000 businesses and helps organisations arrange the responsible disposal of unwanted IT assets at the touch of a screen.

Any used hardware, including monitors, laptops, desktops, printers, and servers, that cannot be refurbished is fully broken down into its core components and recycled.

From April, Stone’s IT asset disposal (ITAD) facility will operate 24×7 to keep up with demand for its recycling services.

Stone Group will also soon be launching the second iteration of the Stone 360 app and said the latest version will help organisations “meet important regulations on electronic waste disposal”.

It will help users classify any items that contain harmful substances and identify those that can be successfully refurbished.

On release, the new version of the Stone 360 app will be available for free download on both iOS and Android devices.

Craig Campion, director of ITAD sales at Stone Group said: “We all need to do more to protect our planet, but unfortunately more and more electronic waste is being created every day and recycling levels are just not keeping pace.

“As a provider of IT to the public and private sector, we are committed to playing a significant role in helping organisations dispose of their end-of-life IT in the right way. The Stone 360 app has been revolutionary in helping our customers increase their recycling efforts by enabling quick and easy collections and responsible disposal of unwanted items.

“We’ve recently seen our multi-award-winning app reach over 3,000 businesses with a workforce of over four million, the majority of whom will use some form of IT hardware. We are aiming to at least double the reach of the Stone 360 app this year and we anticipate that the addition of our new functionality to help organisations comply with Government legislation on IT disposal will drive this.”

 

Posted on in category News

Vaccinating a nation: Vaccination app delivery in 30 days with Cloud Spanner

The content below is taken from the original ( Vaccinating a nation: Vaccination app delivery in 30 days with Cloud Spanner), to continue reading please visit the site. Remember to respect the Author & Copyright.

As the most specialized provider of cloud computing solutions in Poland, Chmura Krajowa (OChK) works to accelerate the digital transformation of Polish businesses and public institutions. In November 2020, the Polish government handed us a formidable challenge: starting from scratch and within 30 days, design and deploy an application to help vaccinate every citizen in Poland against COVID-19. Using Google Cloud products like Cloud Spanner to power the application, we met our goal, and the citizens of Poland are now better protected from the coronavirus pandemic. 

Defining the challenge

We were under considerable pressure to deliver an application that worked as expected and ran without errors or downtime, all with citizens, the government, and the media watching. 

Because the business requirements of the vaccination programme kept evolving in response to the changing situation, the system had to be modified with particular agility. The time pressure was exceptionally high – changes and new functionalities were implemented within hours.

The solution required three systems in one platform:

  • One for 100,000 medical workers at 9,000 vaccination sites to run their own site-specific calendars and manage vaccination schedules. This was a complex undertaking, as vaccinations often required multiple doses, and people could choose different vaccines depending on their age and the legislation in force at the given moment.

  • One for call centers where live operators could schedule appointments for callers. At the peak, about 2,000 operators worked 24/7 in shifts to field calls from across Poland.

  • One for 36 million eligible users to go online via the web, a mobile device or an SMS gateway and Interactive voice response (IVR) to schedule their own appointments using a country-wide authentication scheme and to access follow-up care and resources. 

The scalability requirements were unpredictable for the third system, which served 36 million eligible users. The government could plan for the number and behavior of trained medical workers and call center operators, but not for the number and behavior of citizens scheduling their own appointments at the peak of a worldwide pandemic. This became more challenging as the vaccine roll-out progressed and more people became eligible, including those with greater familiarity with the internet and technology. On days when eligibility widened, huge numbers of citizens wanted to be among the first to get their vaccinations from a limited set of available calendar slots.

Partnering with Google Cloud

To succeed within such a narrow timeframe, we needed to collaborate with a capable and experienced cloud provider. Given our experience working with Google on earlier projects for the Ministry of Health, Google Cloud was the clear and easy choice as it allowed us to focus on the project details, while being able to entrust virtually all infrastructure and scalability needs to Google.

Using the Google-native Go programming language, we employed a wide range of Google Cloud solutions in building the application, including Google Kubernetes Engine (GKE) for backend services, like scheduling vaccination site staff, as well as external services and web requests.

Spanner played a key role in the application’s architecture, meeting the project’s most critical needs:

  • High availability to avoid downtime due to maintenance.

  • Strong consistency due to the transactional nature of the reservation system so that everyone has the same view of the data at a given point-in-time.

  • Horizontal scalability to meet the demands of a program designed to be used by millions of citizens.

Of these three needs, scalability was the most important. We felt secure knowing that during peak traffic, we would not have to worry about the database scalability. 

Architecting the solution

The Spanner database schema consists of 30 tables, some of which are particularly crucial, like the table used by every vaccination site to define its calendar and specify appointment slots. The Ministry of Health workers at the vaccination sites are responsible for generating slots for a specified time span, using the back office system we provided them. The amount of data in the table quickly grew to hundreds of millions of rows, at rates of thousands of rows per second.

Integrating the suite of Google Cloud data solutions

Other Google products in the application stack include Memorystore for Redis for caching, storing user sessions and rate limiting. While it’s unfeasible to cache the available slots because they change so quickly, it’s easier to cache empty search results for given criteria, so if a user is looking for a certain location where there are no slots at a given time, this information can be temporarily cached to speed up the response. 

The stack also uses Pub/Sub for asynchronous messaging and Firestore for maintaining application configuration. For reporting, we run a Dataflow job that mirrors the Spanner database into BigQuery, and then another job mirroring data into the Ministry of Health’s internal data warehouse.

For day-to-day internal reporting, we use Data Studio connected directly to BigQuery. The Data Studio reports are also used by external parties responsible for creating vaccination strategies to help manage the availability of vaccines and allocate resources.

Meeting the challenge 

We definitely see this project as a success. We deployed and managed it in record time without any major errors or outages, our client is satisfied, and most Polish citizens have now been vaccinated using our system either directly or indirectly. Projects don’t get any more exciting or challenging, and this one directly benefited the public health of our nation, which is extremely important to us. With Google Cloud products like Spanner, GKE, Pub/Sub, Dataflow, and BigQuery underpinning this critical application, we were able to deliver on our promise.


Learn more about Chmura Krajowa (OChK) and Cloud Spanner.

Related Article

COLOPL, Minna Bank and 7-Eleven Japan use Cloud Spanner to solve digital transformation challenges

COLOPL, Minna Bank and 7-Eleven Japan use Cloud Spanner to solve their scalability, performance and digital transformation challenges.

Read Article

Posted on in category News

AWS Partner Network (APN) – 10 Years and Going Strong

The content below is taken from the original ( AWS Partner Network (APN) – 10 Years and Going Strong), to continue reading please visit the site. Remember to respect the Author & Copyright.

Ten years ago we launched AWS Partner Network (APN) in beta form for our partners and our customers. In his post for the beta launch, my then-colleague Jinesh Varia noted that:

Partners are an integral part of the AWS ecosystem as they enable customers and help them scale their business globally. Some of our greatest wins, particularly with enterprises, have been influenced by our Partners.

A decade later, as our customers work toward digital transformation, their needs are becoming more complex. As part of their transformation they are looking for innovation and differentiating solutions, and they routinely ask us to refer them to partners with the right skills and the specialized capabilities that will help them make the best use of AWS services.

The partners, in turn, are stepping up to the challenge and driving innovation on behalf of their customers in ways that transform multiple industries. This includes migration of workloads, modernization of existing code & architectures, and the development of cloud-native applications.

Thank You, Partners
AWS Partners all around the world are doing amazing work! Integrators like Presidio in the US, NEC in Japan, Versent in Australia, T-Systems International in Germany, and Compasso UOL in Latin America are delivering some exemplary transformations on AWS. On the product side, companies like Megazone Cloud (Asia/Pacific) are partnering with global ISVs such as Databricks, Datadog, and New Relic to help them go to market. Many other ISV Partners are working to reinvent their offerings in order to take advantage of specific AWS services and features. The list of such partners is long, and includes Infor, VTEX, and Iron Mountain, to name a few.

In 2021, AWS and our partners worked together to address hundreds of thousands of customer opportunities. Partners like Snowflake, logz.io, and Confluent have told us that AWS Partner program such as ISV Accelerate and AWS Global Startup Program are having a measurable impact on their businesses.

These are just a few examples (we have many more success stories), but the overall trend should be pretty clear — transformation is essential, and AWS Partners are ready, willing, and able to make it happen.

As part of our celebration of this important anniversary, the APN Blog will be sharing a series of success stories that focus on partner-driven customer transformation!

A Decade of Partner-Driven Innovation and Evolution
We launched APN in 2012 with a few hundred partners. Today, AWS customers can choose offerings from more than 100,000 partners in more than 150 countries.

A lot of this growth can be traced back to our first Leadership Principle, Customer Obsession. Most of our services and major features have their origins in customer requests and APN is no different: we build programs that are designed to meet specific, expressed needs of our customers. Today, we continue to seek and listen to partner feedback, use that feedback to innovate and to experiment, and to get it to market as quickly as possible.

Let’s take a quick trip through history and review some of the most interesting APN milestones of the last decade:

In 2012, we first announced the partner type (Consulting and Technology) model when APN came to life. With each partner type, partners could qualify for one of the three tiers (Select, Advanced, and Premier) and achieve benefits based on their tier.

In 2013, AWS Partners told us they wanted to stand out in the industry. To allow partners to differentiate their offerings to customers and show expertise in building solutions, we introduced the first two of what is now a very long list of competencies.

In 2014, we launched the AWS Managed Service Provider Program to help customers find partners who can help with migration to the AWS cloud, along with the AWS SaaS Factory program to support partners looking to build and accelerate delivery of SaaS (Software as a Service) solutions on behalf of their customers. We also launched the APN Blog channel to bring partner success stories with AWS and customers to life. Today, the APN Blog is one of the most popular blogs at AWS.

Next in 2016, customers started to ask us where to go when looking for a partner that can help design, migrate, manage, and optimize their workloads on AWS, or for partner-built tools that can help them achieve their goals. To help them more easily find the right partner and solution for their specific business needs, we launched the AWS Partner Solutions Finder, a new website where customers could search for, discover, and connect with AWS Partners.

In 2017, to allow partners to showcase their earned AWS designations to customers, we introduced the Badge Manager. The dynamic tool allows partners to build customized AWS Partner branded badges to highlight their success with AWS to customers.

In 2018, we launched several new programs and features to better support our partners gain AWS expertise and promote their offerings to customers including AWS Device Qualification program, AWS Well-Architected Partner Program, and several competencies.

In 2019, for mid-to-late stage startups seeking support with product development, go-to-market and co-sell, we launched the AWS Global Startup program. We also launched the AWS Service Ready Program to help customers find validated partner products that work with AWS services.

Next, in 2020, to help organizations co-sell, drive new business, and accelerate sales cycles, we launched the AWS ISV Accelerate program.

In 2021 our partners told us that they needed more (and faster) ways to work with AWS so that they could meet the ever-growing needs of their customers. We launched AWS Partner Paths in order to accelerate partner engagement with AWS.

Partner Paths replace technology and consulting partner type models—evolving to an offering type model. We now offer five Partner Paths—Software Path, Hardware Path, Training Path, Distribution Path, and Services Path—which represents consulting, professional, managed, or value-add resale services. This new framework provides a curated journey through partner resources, benefits, and programs.

Looking Ahead
As I mentioned earlier, Customer Obsession is central to everything that we do at AWS. We see partners as our customers, and we continue to obsess over ways to make it easier and more efficient for them to work with us. For example, we continue to focus on partner specialization and have developed a deep understanding of the ways that our customers find it to be of value.

Our goal is to empower partners with tools that make it easy for them to navigate through our selection of enablement resources, benefits, and programs and find those that help them to showcase their customer expertise and to get-to-market with AWS faster than ever. The new AWS Partner Central (login required) and AWS Partner Marketing Central experiences that we launched earlier this year are part of this focus.

To wrap up, I would like to once again thank our partner community for all of their support and feedback. We will continue to listen, learn, innovate, and work together with you to invent the future!

Jeff;

Posted on in category News

Microsoft Simplifies IT Monitoring with New Azure Managed Grafana Service

The content below is taken from the original ( Microsoft Simplifies IT Monitoring with New Azure Managed Grafana Service), to continue reading please visit the site. Remember to respect the Author & Copyright.

Last year, Microsoft unveiled its plans to create a fully-managed version of Grafana that runs natively on its Azure cloud platform. Now, the Redmond giant has announced that the new Azure Managed Grafana service is now available in public preview.

Grafana is basically an open-source platform that enables organizations to visualize multiple types of reliability data in a single dashboard. It provides graphs, charts, and alerts that simplify the task of detecting technical issues in business environments. Previously, enterprise customers used the self-managed open-source product to deploy Grafana on Azure.

The new Azure Managed Grafana enables organizations to access the platform without managing the underlying infrastructure. It helps IT Admins detect technical issues across on-premises and Azure environments, as well as other cloud platforms.

“Grafana helps you bring together metrics, logs and traces into a single user interface. With its extensive support for data sources and graphing capabilities, you can view and analyze your application and infrastructure telemetry data in real-time,” Microsoft explained in a support document.


Azure Monitor gets new Grafana integrations

In addition to the new service, Microsoft has announced some new Grafana integrations with Azure Monitor. It is now possible to quickly pin Azure Monitor visualizations from Azure Portal to new and existing Grafana dashboards.

Moreover, the new Azure Grafana service has built-in support for Azure Data Explorer, a real-time data analytics and data exploration service for large volumes of streaming data. With this service, customers can view the telemetry data of connected devices right from the Grafana dashboard.

Microsoft has introduced new “out-of-the-box” Grafana dashboards that make it easier for customers to visualize data from Azure Monitor. These dashboards come with several built-in features such as Azure Monitor insights, Azure alerts, and much more. This feature should eliminate the need to create data visualizations from scratch.

Lastly, Microsoft highlights that the new service also integrates with Azure Active Directory. It allows organizations to easily manage user permissions and access control via Azure Active Directory identities. This integration should make it easier for IT Admins to secure their Azure Managed Grafana deployments.

Microsoft is offering a free 30-day trial of its Azure Managed Grafana service, and you can find more details on the official website.

Posted on in category News

Departing Space Force chief architect likens Pentagon’s tech acquisition to a BSoD

The content below is taken from the original ( Departing Space Force chief architect likens Pentagon’s tech acquisition to a BSoD), to continue reading please visit the site. Remember to respect the Author & Copyright.

US military must ‘ride the wave of commercial innovation … or risk drowning under its own weight’

The outgoing first chief architect officer of the US Air and Space Force urged the Pentagon to lay off wasting time building everything itself, and use commercial kit if available and appropriate to upgrade its technological capabilities quickly.…

Posted on in category News

Microsoft Authenticator now lets you generate strong passwords

The content below is taken from the original ( Microsoft Authenticator now lets you generate strong passwords), to continue reading please visit the site. Remember to respect the Author & Copyright.

Posted on in category News

New: ExtPassword! v1.00

The content below is taken from the original ( New: ExtPassword! v1.00), to continue reading please visit the site. Remember to respect the Author & Copyright.

ExtPassword! is a tool for Windows that allows you to recover passwords stored on an external drive plugged into your computer.
ExtPassword! can decrypt and extract multiple types of passwords and essential information, including passwords of common web browsers, passwords of common email software, dialup/VPN passwords, wireless network keys, Windows network credentials, the Windows product key, and Windows security questions.
This tool might be useful if you have a disk with a Windows operating system that cannot boot anymore, but most files on the hard drive are still accessible and you need to extract your passwords from it.

Posted on in category News

Microsoft expands cybersecurity skills training to 23 new countries

The content below is taken from the original ( Microsoft expands cybersecurity skills training to 23 new countries), to continue reading please visit the site. Remember to respect the Author & Copyright.

Posted on in category News

How to Install Windows Server 2022 Step by Step

The content below is taken from the original ( How to Install Windows Server 2022 Step by Step), to continue reading please visit the site. Remember to respect the Author & Copyright.

What Is Windows Server 2022?

Windows Server 2022 is Microsoft’s latest version of Windows Server in the Long-Term Servicing Channel (LTSC). Microsoft releases new versions on this channel about every three years.

The most recent version before Windows Server 2022 was Windows Server 2019. These releases receive ten full years of technical support from Microsoft via Mainstream support (through 10/13/2026) and Extended support (through 10/14/2031).

Over the past few years, the Windows Server team also put out Server Core releases with new features through the Semi-Annual Channel (SAC). The last such release was Windows Server, version 20H2.

These releases are supported for 18 months. So, after August 9th of this year (2022), Microsoft will no longer offer any support for the Semi-Annual Channel of Windows Server.

What are the new features in Windows Server 2022?

Windows Server 2022 is built on the strong foundation of Windows Server 2019 and brings several innovations around three pillars: security, Azure hybrid integration and management, and application platform enhancements. Let’s go through some of the more substantial areas of improvement and innovation.

Secured-core server

A Secured-core server uses firmware, hardware, and driver capabilities to enable advanced security features for Windows Server. The overall design goal is to provide additional security protections that are useful against sophisticated and coordinated attacks.

Transport: HTTPS and TLS 1.3 enabled by default

Secure connections are at the heart of today’s systems on your network and the Internet. Transport Layer Security (TLS) 1.3 is the latest and most secure version of the Internet’s most deployed security protocol.

With HTTPS and TLS 1.3 enabled by default, protecting the data of clients connecting to the server is more streamlined and inherently automatic. To learn more about verifying your applications and services are ready for TLS 1.3, please visit Microsoft’s Security Blog.

Azure Arc enabled Windows Servers

Azure Arc enabled servers with Windows Server 2022 bring on-premises and multi-cloud Windows servers to Azure. The management experience is designed to be consistent whether you’re managing Azure virtual machines or hybrid Windows Server 2022 in your datacenters.

Application platform

There are many platform improvements for Windows Containers. The most impactful enhancements include application compatibility and the Windows Container experience with Kubernetes.

A welcome optimization effort now affords IT pros a 30% faster startup time and better performance, thanks to Microsoft engineers reducing the image footprint by up to 40%.

You can now run applications that depend on Azure Active Directory with group Managed Service Accounts (gMSA) without domain-joining the container host. In addition, Windows Containers now support Microsoft Distributed Transaction Coordinator (MSDTC) and Microsoft Message Queuing (MSMQ).

Kubernetes also receives some welcome enhancements, including support for host-process containers for node configuration, IPv6, and consistent network policy implementation with Calico.

Microsoft Edge

For the first time in a long time, Internet Explorer is being replaced with Microsoft Edge as the default browser in Windows Server! However, the Internet Explorer application is still included for legacy compatibility.

Prerequisites

Hardware requirements

Because of the highly diverse scope of potential deployments of Windows Server, these guidelines should be considered when planning for your installations and scenarios for Windows Server 2022. These are most pertinent for installing on a physical host or a physical server. These generally include both the Server Core and Server with Desktop Experience installation options.

  1. Processor
    • 1.4 GHz 64-bit processor
    • Compatible with x64 instruction set
    • Supports NX and DEP
    • Supports CMPXCHG16b, LAHF/SAHF, and PrefetchW
    • Supports Second Level Address Translation (EPT or NPT)
  2. Memory (RAM)
    • 512 MB (2 GB for Server with Desktop Experience)
    • ECC (Error Correcting Code) or similar technology for physical deployments
  3. Storage Controller and Disk Space
    • PCI Express architecture specification storage controller
    • 32 GB disk space (minimum for Server Core and IIS Role installed)
  4. Network requirements (adapter)
    • Ethernet adapter capable of at least 1 Gbps throughput
    • PCI Express architecture
  5. Other requirements
    • DVD Drive (if you intend to install Windows using DVD media)
    • UEFI 2.3.1c-based system and firmware to support Secure Boot
    • Trusted Platform Module (TPM)
    • Graphics device and monitor capable of Super VGA (1024×768) or higher resolution
    • Keyboard, Mouse, Internet Access

Installation options

There are two installation options for Windows Server 2022 – Server Core installation (recommended by Microsoft) and Server with Desktop Experience. Let’s go through the basics and pros/cons of each.

Server with Desktop Experience vs. Server Core

For the past 30 years or so, versions of Windows Server were installed with a GUI (Graphical User Interface), now called the Desktop Experience. Starting with Windows Server 2008, Microsoft added a new ‘Server Core’ option that removes the GUI/Desktop environment from the installation and saves disk space, memory usage, security attack footprint, among other enhancements.

Their design goals were to give you a leaner Windows Server footprint, and have you manage it remotely. Efficiency and Security are the primary pros.

The main con of the Server Core option is the ease of manageability, but only at first. You can’t use Remote Desktop Protocol (RDP) to log in to a graphical desktop, install a server role or feature with a GUI, or run Windows Update from the Control Panel.

However, as you migrate your ‘server management’ methodology to how Microsoft recommends it, you’ll probably find it actually works out better in terms of efficiency and ease of use. You can use Windows Admin Center to install roles and features, check how much disk space is free on your C: drive, and even check for Updates, install them, and schedule the reboot!
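
As a small illustration of that remote-management workflow, here is a minimal sketch using the ServerManager cmdlets; the role and server name are placeholders you’d replace with your own:

# Install the Web Server (IIS) role on a remote Server Core machine
Install-WindowsFeature -Name Web-Server -IncludeManagementTools -ComputerName WS22-CORE-01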

You can check out our separate guide on how to install Windows Server 2022 Core.

How to get installation media

There are several methods to download Windows Server and obtain the ISO file/installation media for Windows Server 2022:

Installing Windows Server 2022

OK, enough of my yakking… let’s boogie!

I wrote a series of articles in the summer of 2021 around upgrading my Hyper-V lab of Active Directory to Windows Server 2022. For this article, I have created a new Hyper-V virtual machine and will use this Windows Server VM to install Windows Server 2022.

After configuring the VM with my installation ISO, I started the VM and pressed a key to boot from the ISO.

Windows Server 2022 Setup – Initial Step

Here, I click Next. And then Install now. (Notice the title bar – instead of saying Windows Server 2016 or Windows Server 2019, it now says ‘Microsoft Server Operating System Setup’).

Click ‘Install now’ to begin

Here, you can optionally enter your product key. This could be a retail key, a Volume License Key (VLK) from your organization, or an evaluation key. Note – You don’t have to enter one now… just click ‘I don’t have a product key’ if you want to handle this post-setup.

Activate Windows Server now or after the installation process is done

Here, depending on what installation ISO you obtained, you’ll choose your product edition and the installation type. I will choose ‘Windows Server 2022 Datacenter (Desktop Experience)’.

Choose what edition and installation option you want for the OS here

Check the box to accept the license terms and click Next.

You should read ALL the license terms before clicking Next – 😉

Next, for the type of installation, we’ll go with ‘Custom‘ as we are doing a clean installation with no existing operating system.

Choose between the Upgrade or Custom installation options

You’ll have a single, unallocated partition when doing a clean install, so click Next.

Choosing where to install Windows Server 2022

Now, Setup will copy all the installation files, unpack the image, detect devices, and bring you to your first post-setup task.

Microsoft Server Operating System Setup is proceeding…

Post-setup tasks

The first step is to create the local Administrator password. Out of the box, Windows requires a complex password, so make sure it’s difficult to crack.

Creating a complex password for the local ‘Administrator’ account

Then, press Ctrl-Alt-Del, or your virtual equivalent, and log in.

The Windows Server 2022 Lock Screen
The first screen after logging into Windows Server 2022

You will notice that Server Manager launches automatically and reminds you to use Windows Admin Center to manage the server.

See? Microsoft recommends you use the Server Core option so you’re not logging into your servers needlessly. If you won’t do that, though, they will at least ask you to use Windows Admin Center to manage the server remotely. Yes, they are persistent…

Quick Tip – If you don’t want Server Manager to launch upon login, click the Manage menu in the upper right corner, then click Server Manager Properties. Next, check the box ‘Do not start Server Manager automatically at logon‘.

How to prevent Server Manager from loading every time you log on to the server

How to configure your network

There are quite a few post-setup tasks to perform, but because our environments are so diverse, it’s best to stick to the core tasks that are the most crucial to get your server on your network as soon as possible.

First, the network: It is highly likely that you will be setting a static IP address for your server. To perform this setup, in the Server Manager, click the ‘Configure this local server‘ link at the top.

The ‘Local Server’ dashboard in Server Manager

Then, click the hyperlink next to your Ethernet adapter (You may very well have more than one).

Network Connections (Control Panel)

Right-click on the adapter and click Properties.

Now, click on ‘Internet Protocol Version 4 (TCP/IPv4)’ and click Properties.

Defining a static IP address for your server’s (first) Ethernet adapter

Go ahead and click ‘Use the following IP address‘ and enter your pertinent information. You and/or your network team or virtualization team should have the appropriate information to enter here.

Click OK and be sure to test and validate network and Internet access (if that is your design intent) before proceeding.
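
If you prefer PowerShell over the Control Panel, the built-in NetTCPIP and DnsClient modules can do the same job. A minimal sketch; the adapter alias and addresses are examples you’d swap for your own values:

# Assign a static IPv4 address and default gateway to the adapter named "Ethernet"
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.1.50 -PrefixLength 24 -DefaultGateway 192.168.1.1

# Point DNS at your domain controller or preferred resolver
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 192.168.1.10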

How to install the latest Windows Server updates

Before putting your new server into testing and/or production, you’ll want to run Windows Update to get the latest security fixes, bug fixes, and any new features.

To do this, back in Server Manager, you can click the hyperlinks in the upper right corner labeled ‘Last installed updates‘ and ‘Windows Update‘. This will open the Settings -> Windows Update menu.

Start -> Settings -> Update & Security -> Windows Update

Click ‘Check for updates’ and you’ll get the latest cumulative updates for Windows Server, .NET Framework, etc.

After Windows Update prompts you to reboot, go ahead and reboot. Then, for good measure, open Windows Update again and click Check for Updates, just in case any were missed during the first run.

Windows Update in the process of downloading and installing the latest patches and updates…
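
If you’d rather script patching than click through Settings, one option is the community PSWindowsUpdate module from the PowerShell Gallery. A hedged sketch, assuming the server can reach the gallery:

# Install the community module, then scan for and install updates
Install-Module -Name PSWindowsUpdate -Force
Get-WindowsUpdate
Install-WindowsUpdate -AcceptAll -AutoReboot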

Conclusion

You should now be educated and ready to start installing Windows Server 2022 in a test environment to begin validating your applications and services and see how they work on Windows Server 2022.

If you need more information about Windows Server 2022, Microsoft provides a lot of helpful and solid resources about the latest version of its server OS. I recommend you start with the Windows Server documentation website.

Posted on in category News

Pixelating Text Not a Good Idea

The content below is taken from the original ( Pixelating Text Not a Good Idea), to continue reading please visit the site. Remember to respect the Author & Copyright.

People have gotten much savvier about computer security in the last decade or so. Most people know that sending a document with sensitive information in it is a no-no, so many people try to redact documents with varying levels of success. A common strategy is to replace text with a black box, but you sometimes see sophisticated users pixelate part of an image or document they want to keep private. If you do this for text, be careful. It is possible to unredact pixelated images through software.

It appears that the algorithm is pretty straightforward. It simply guesses letters, pixelates them, and matches the result. You do have to estimate the size of the pixelation, but that’s usually not very hard to do. The code is built using TypeScript and while the process does require a little manual preparation, there’s nothing that seems very difficult or that couldn’t be automated if you were sufficiently motivated.

You don’t see it as often as you used to, but there has been a slew of legal and government scandals where someone redacted a document by putting a black box over text in a PDF, so it was hidden when printed but still present in the document. Older word processors often didn’t really delete text, either, if you knew how to look at the files. The Facebook valuation comes to mind. Not to mention that the National Legal and Policy Center was stung by poor redaction techniques.


Generally available: Direct enterprise agreement on Azure Cost Management and Billing

The content below is taken from the original ( Generally available: Direct enterprise agreement on Azure Cost Management and Billing), to continue reading please visit the site. Remember to respect the Author & Copyright.

Manage your enrollment hierarchy, view account usage, and monitor costs directly from the Azure Cost Management and Billing menu on the Azure Portal (for direct enterprise agreement customers on commercial cloud).


AzurePrice.net updates and a small gift

The content below is taken from the original ( AzurePrice.net updates and a small gift), to continue reading please visit the site. Remember to respect the Author & Copyright.

AzurePrice.net updates and a small gift

Hi, I’m Victor from https://azureprice.net. We’ve added some new features to the tool, and I want to quickly highlight them in this post because these things are great. There is a small gift at the end of this post that I hope will make you glad.

Interesting Fact: Based on answers in our questionnaire, azureprice.net helped 31.6% of users save from $100 to $1,000 per year, and 21.1% responded from $1,000 to $10,000 per year (wow!).

Alternative and Similar VMs

Using a light flavor of machine learning, we try to find the most similar VMs based on performance, CPUs, RAM, NUMA nodes, etc. It works great for mid-size and large VMs, and savings can easily be around 15-30%.

https://preview.redd.it/w1sa7uj9lng81.png?width=985&format=png&auto=webp&s=f087e9e1e54bfbd60ac8513c70af04870dec82fa

VM name explanation

Are you lost when you see a VM name like Standard_E32-16ads_v5? We added a quick explainer. Just hover your mouse over the VM name, and you will see something like this:

https://preview.redd.it/nj0nbbwalng81.png?width=630&format=png&auto=webp&s=8f8f8ef8bbb374334cffee63ec735fd1286580fa

Full VM overview

Want to see all the information about a particular VM in one place, including specifications, pricing, alternative recommendations, etc.? Here you go:

https://i.redd.it/479o6fublng81.gif

As I promised, here is a small gift for the lovely Reddit community, where we get valuable feedback:

code:FEEDBACKER10

submitted by /u/Gaploid to r/AZURE
[link] [comments]


Accelerate vCenter Migration using AWS Migration Service Agentless Migration

The content below is taken from the original ( Accelerate vCenter Migration using AWS Migration Service Agentless Migration), to continue reading please visit the site. Remember to respect the Author & Copyright.

Organizations often decide to move their applications from on-premises environments to the cloud with little to no architecture change. This migration strategy is advantageous for large-scale applications that must satisfy specific business goals, such as launching a product on an accelerated timeline or exiting an on-premises data center. A rehost migration strategy lets customers achieve cloud benefits, such as reduced cost, increased flexibility, scalability, agility, and high availability, while simultaneously reducing migration risk under a tight timeline.

AWS Application Migration Service (MGN) is the primary migration service recommended for rehost (lift-and-shift) migrations to AWS Cloud. AWS MGN supports both agent-based and agentless snapshot approaches to replicate servers from on-premises to AWS. In this post, we will explain the differences between the two methods and provide guidance for when to choose each one. Furthermore, we will walk through an example that demonstrates how to migrate a source environment hosted on vCenter to AWS using the Agentless snapshot based replication that has been recently added to AWS MGN.

Let’s start by discussing the agent-based replication. First, it supports block-level replication from virtually any source environment: any supported operating system (OS) running on physical servers, on-premises virtual servers, or virtual machines (VMs) on other cloud providers such as Azure or GCP. Second, the agent-based replication supports Continuous Data Protection (CDP). CDP keeps the source environment in sync with the replication server in near real-time after the initial replication has finished. This provides a short cutover window and puts the Recovery Point Objective (RPO) provided by AWS MGN in the sub-second range for most cases.

To receive these benefits, we recommend that customers use the agent-based replication when possible. However, organizational and security policies, or limited server access, may prevent installation of the AWS Replication Agent on every server. Additionally, although automation orchestrations are built on top of AWS MGN to streamline agent installation and target environment setup, learning these solutions and integrating them with the organization’s platform might introduce additional tasks that customers want to avoid.

If any of those scenarios applies, then the AWS MGN Agentless approach may be another solution for the migration. For the Agentless approach, you must consider the following:

Now that both migration methods have been discussed, let’s walk through an example of how to use the agentless replication to replicate a vCenter environment to AWS.

Solution overview

The following diagram depicts the AWS MGN agentless replication architecture.

To demonstrate this setup, I use an ESXi source environment that has a vCenter appliance v6.7 running on an m5.metal EC2 instance in the eu-west-1 Region. I created 4 VMs (Ubuntu 18.04, Centos8, Windows 2016, and Windows 2019). After making sure that the connectivity requirements for this replication are met (more about that later), I install the AWS MGN Agentless client, which will start discovering my VMs and replicate them to my destination Region on AWS. Next, I will walk you through the details.

AWS MGN Architecture showing different on-prem and AWS elements and the interactions between them

Figure 1: Agentless Architecture

1 – Setting up the destination environment on AWS

Before I’m able to install the MGN vCenter Appliance into my source environment I need to complete the following initial setup in the AWS Region where I will replicate the vCenter environment to.

  1) Create an AWS Identity and Access Management (IAM) user for the AWS MGN vCenter client and assign the required permissions. After creating the user, note the access key and the secret access key. For more details, check Generating vCenter IAM Credentials.
  2) Create a Virtual Private Cloud (VPC) with two subnets. We will use the first subnet for the AWS MGN staging area. The second subnet will be the destination subnet to which we will replicate the source environment servers. For more details on preparing the MGN networking setup, check Networking Setting Preparations.
  3) Initialize AWS MGN: this process is required when you use AWS MGN for the first time. During initialization you will be directed to create a Replication Settings template. This process also creates the IAM roles needed for the service to work. For more details, check Initialize Application Migration Service.

2 – Setting up the source environment on vCenter

I chose to download and install the MGN vCenter appliance on a CentOS 8 VM in my source environment. Before I start the installation, I make sure the following networking requirements are satisfied on the VM. For more details on setting up networking for vCenter, refer to this link.

Using telnet to verify connectivity to vCenter endpoint

Figure 2: Telnet vcsa

Using telnet to verify connectivity to MGN endpoint in eu-west-1 region

Figure 3: Telnet MGN
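If telnet isn’t available on the VM, the same reachability checks can be scripted with bash’s built-in /dev/tcp device. A small sketch (the vCenter hostname is a placeholder for my lab; the MGN endpoint matches my eu-west-1 Region):

# Port 443 must be reachable on both the vCenter server and the regional MGN endpoint
for host in vcsa.lab.local mgn.eu-west-1.amazonaws.com; do
    if timeout 5 bash -c "cat < /dev/null > /dev/tcp/${host}/443"; then
        echo "${host}:443 reachable"
    else
        echo "${host}:443 blocked"
    fi
done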

Once the networking configuration has been verified, the next step is to download and install the AWS MGN vCenter Appliance into the CentOS VM in your source environment.

  1) The MGN vCenter appliance installer requires Python 3, so before I start the download, I connect to the CentOS VM and install Python 3:
    sudo yum install python3 -y

  2) The installation also requires the VMware Virtual Disk Development Kit (VDDK) v6.7 EP1 to replicate disk changes to the destination environment. You can download it here; it requires a VMware Customer Connect account.
  3) Now I’m ready to download the MGN vCenter appliance. The download URL varies by Region. My lab environment uses eu-west-1, so my download URL looks like the following:
    application-migration-service-eu-west-1.s3.eu-west-1.amazonaws.com/latest/vcenter-client/linux/aws-vcenter-client-installer-init.py

Downloading MGN vCenter Appliance using the curl command

Figure 4: Download Installer

  4) I navigate to the directory where I downloaded the file, and then I run the install command:
    sudo python3 aws-vcenter-client-installer-init.py --endpoint https://mgn.eu-west-1.amazonaws.com 

The installer will prompt you to enter the following details

The installer will now install the MGN vCenter client and register with AWS MGN in your destination environment. Once this is done, all of the VMs in your vCenter will be added to the AWS MGN dashboard in the DISCOVERED state, as we will detail in the next section.

Installing MGN vCenter appliance using python3

Figure 5: MGN Agent Installation

3 – Replicate source environment and cutover

Now that I’ve installed the MGN vCenter appliance, I go to my AWS account in the same Region that I specified above and open the AWS MGN console to start replicating the VMs in my source environment. From here, I select Source servers from the menu. The Discovered source servers filter provides a list of servers discovered by the AWS MGN client that haven’t yet begun replicating.

MGN console showing zero active source servers

MGN console showing how to select "Discovered source servers" from the drop down

Figure 6: MGN Console

After selecting the discovered source servers, I can see 4 VMs from my source environment. The CentOS VM that I used as the AWS MGN vCenter Appliance will neither be listed here nor replicated. Also note that the actual vCenter appliance from my source environment will show in the MGN console as a VM that we should not select for replication.

MGN console showing 4 VMs from source environment including vCenter Appliance under "Discovered source servers".

Figure 7: MGN Console discovered 4 vms

From here, select the servers that you’d like to replicate. For example, to replicate the VM that runs Ubuntu, select the checkbox for the VM, go to the Replication dropdown, and choose Start data replication.

MGN console showing how to start replication for a selected source VM

Figure 8: MGN – Start replicating

This will start the snapshot replication from the vCenter source environment to my destination region on AWS. After some time, it will show as ‘Healthy’ in the Data replication status. This can be seen by switching back to Active source servers in the filtering menu. Find more details about launching Testing and Cutover instances in the AWS MGN documentation.

MGN console showing 1 VM in "Active source servers" list with the status of "Ready for testing" for "Migration lifecycle" and the status of "Healthy" for "Data replication status"

Figure 9: MGN – ready for testing
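If you prefer checking status from the command line, the AWS CLI exposes the same information. A sketch, assuming a recent AWS CLI with MGN support and credentials for the target account:

# List source servers and their replication state in eu-west-1
aws mgn describe-source-servers --region eu-west-1 \
    --query "items[].{id:sourceServerID,state:dataReplicationInfo.dataReplicationState}" \
    --output table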

Then, I repeat the same steps to start data replication for the other two servers in my list. After some time, all three servers show a Migration lifecycle status of Ready for testing.

In the console, three servers have the status of "Ready for testing" and their Data replication status is "Healthy"

Figure 10: 3 servers ready for testing

Conclusion

In this post we discussed the two approaches to migration that AWS MGN supports. The agent-based replication is a block-level replication strategy that uses CDP to provide near real-time replication and a short cutover window, and it is the preferred option when possible. However, if your source environment consists primarily of vCenter VMs and you can’t fulfill the requirements for installing the AWS MGN agent on every source server, then we recommend using the snapshot-based replication. In the demo above, we walked through the steps needed to install the AWS MGN vCenter appliance in the source environment, and then showed how to perform an agentless snapshot replication to AWS.

About the authors

Ebrahim Khiyami

Ebrahim (EB) is a Senior Migration Specialist Solutions Architect. He supports his customers in their journey to migrate, modernize and transform their workloads from on-premises, or other clouds, to AWS. Outside of work, he’s often found playing, watching, discussing or coaching soccer with his boys.

Habeeb Al Aidroos

Habeeb Al Aidroos is a Solutions Architect specializing in enterprise migration. He works closely with enterprise customers to understand their unique migration challenges and help them achieve their migration goals. When he is not working with customers, he loves to spend time with his family.


Linux Fu: Bash Strings

The content below is taken from the original ( Linux Fu: Bash Strings), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you are a traditional programmer, using bash for scripting may seem limiting sometimes, but for certain tasks, bash can be very productive. It turns out, some of the limits of bash are really limits of older shells and people code to that to be compatible. Still other perceived issues are because some of the advanced functions in bash are arcane or confusing.

Strings are a good example. You don’t think of bash as a string manipulation language, but it has many powerful ways to handle strings. In fact, it may have too many ways, since the functionality winds up in more than one place. Of course, you can also call out to programs, and sometimes it is just easier to make a call to an awk or Python script to do the heavy lifting.

But let’s stick with bash-isms for handling strings. Obviously, you can put a string in an environment variable and pull it back out. I am going to assume you know how string interpolation and quoting works. In other words, this should make sense:

echo "Your path is $PATH and the current directory is ${PWD}"

The Long and the Short

Suppose you want to know the length of a string. That’s a pretty basic string operation. In bash, you can write ${#var} to find the length of $var:


#!/bin/bash
echo -n "Project Name? "
read PNAME
if (( ${#PNAME} > 16 ))
then
   echo Error: Project name longer than 16 characters
else
   echo ${PNAME} it is!
fi

The “((” forms an arithmetic context which is why you can get away with an unquoted greater-than sign here. If you don’t mind using expr — which is an external program — there are at least two more ways to get there:


echo ${#STR}
expr length "${STR}"
expr match "${STR}" '.*'

Of course, if you allow yourself to call outside of bash, you could use awk or anything else to do this, too, but we’ll stick with expr as it is relatively lightweight.

Swiss Army Knife

In fact, expr can do a lot of string manipulations in addition to length and match. You can pull a substring from a string using substr. It is often handy to use index to find a particular character in the string first. The expr program uses 1 as the first character of the string. So, for example:


#!/bin/bash
echo -n "Full path? "
read FFN
LAST_SLASH=0
SLASH=$( expr index "$FFN" / ) # find first slash
while (( $SLASH != 0 ))
do
   let LAST_SLASH=$LAST_SLASH+$SLASH  # point at next slash
   SLASH=$(expr index "${FFN:$LAST_SLASH}" / )  # look for another
done
# now LAST_SLASH points to last slash
echo -n "Directory: "
expr substr "$FFN" 1 $LAST_SLASH
echo -or-
echo ${FFN:0:$LAST_SLASH}
# Yes, I know about dirname but this is an example

Enter a full path (like /foo/bar/hackaday) and the script will find the last slash and print the name up to and including the last slash using two different methods. This script makes use of expr but also uses the syntax for bash‘s built in substring extraction which starts at index zero. For example, if the variable FOO contains “Hackaday”:
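
echo ${FOO:0:4} # prints Hack (offset 0, length 4)
echo ${FOO:4}   # prints aday (offset 4 through the end)
echo ${FOO:4:3} # prints ada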

The first number is an offset and the second is a length if it is positive. You can also make either of the numbers negative, although you need a space after the colon if the offset is negative. The last character of the string is at index -1, for example. A negative length is shorthand for an absolute position from the end of the string. So:
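
echo ${FOO: -3}   # prints day (note the space before the minus)
echo ${FOO:2:-2}  # prints ckad
echo ${FOO: -5:3} # prints kad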

Of course, either or both numbers could be variables, as you can see in the example.

Less is More

Sometimes you don’t want to find something, you just want to get rid of it. bash has lots of ways to remove substrings using fixed strings or glob-based pattern matching. There are four variations. One pair of deletions remove the longest and shortest possible substrings from the front of the string and the other pair does the same thing from the back of the string. Consider this:


TSTR=my.first.file.txt
echo ${TSTR%.*} # prints my.first.file
echo ${TSTR%%.*}  # prints my
echo ${TSTR#*fi}  # prints rst.file.txt
echo ${TSTR##*fi} # prints le.txt

Transformation

Of course, sometimes you don’t want to delete so much as replace some string with another. You can use a single slash to replace the first instance of a search string or two slashes to replace globally. If you omit the replacement string, you get yet another way to delete parts of strings. One other trick is to add a # or % to anchor the match to the start or end of the string, just like with a deletion.


TSTR=my.first.file.txt
echo ${TSTR/fi/Fi}   # my.First.file.txt
echo ${TSTR//fi/Fi}  # my.First.File.txt
echo ${TSTR/#*./PREFIX-} # PREFIX-txt  (note: always longest match)
echo ${TSTR/%.*/.backup}  # my.backup (note: always longest match)

Miscellaneous

Some of the more common ways to manipulate strings in bash have to do with dealing with parameters. Suppose you have a script that expects a variable called OTERM to be set but you want to be sure:


REALTERM=${OTERM:-vt100}

Now REALTERM will have the value of OTERM or the string “vt100” if there was nothing in OTERM. Sometimes you want to set OTERM itself so while you could assign to OTERM instead of REALTERM, there is an easier way. Use := instead of the :- sequence. If you do that, you don’t necessarily need an assignment at all, although you can use one if you like:


echo ${OTERM:=vt100}  # now OTERM is vt100 if it was empty before

You can also reverse the sense so that you replace the value only if the main value is not empty, although that’s not as generally useful:


echo ${DEBUG:+"Debug mode is ON"}  # reverse of :- ; no assignment

A more drastic measure lets you print an error message to stderr and abort a non-interactive shell:


REALTERM=${OTERM:?"Error. Please set OTERM before calling this script"}

Just in Case

Converting things to upper or lower case is fairly simple. You can provide a glob pattern that matches a single character. If you omit it, it is the same as ?, which matches any character. You can elect to change all the matching characters or just attempt to match the first character. Here are the obligatory examples:


NAME="joe Hackaday"

echo ${NAME^} # prints Joe Hackaday (first match of any character)
echo ${NAME^^} # prints JOE HACKADAY (all of any character)
echo ${NAME^^[a]} # prints joe HAckAdAy (all a characters)
echo ${NAME,,} # prints joe hackaday (all characters)
echo ${NAME,} # prints joe Hackaday (first character matched and didn't convert)
NAME="Joe Hackaday"
echo ${NAME,,[A-H]} # prints Joe hackaday (apply pattern to all characters and convert A-H to lowercase)

Recent versions of bash can also convert upper and lower case using ${VAR@U} and ${VAR@L} along with just the first character using @u and @l, but your mileage may vary.
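For example, on bash 5.1 or later (earlier versions will complain about a bad substitution):


NAME="joe hackaday"
echo "${NAME@U}" # prints JOE HACKADAY
echo "${NAME@u}" # prints Joe hackaday (first character only)
echo "${NAME@L}" # prints joe hackaday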

Pass the Test

You probably realize that when you do a standard test, that actually calls a program:


if [ $f -eq 0 ]
then ...

If you do an ls on /usr/bin, you’ll see an executable actually named “[” used as a shorthand for the test program. However, bash has its own test in the form of two brackets:


if [[ $f == 0 ]]
then ...

That test built-in can handle regular expressions using =~ so that’s another option for matching strings:


if [[ "$NAME" =~ [hH]a.k ]] ...

Choose Wisely

Of course, if you are doing a slew of text processing, maybe you don’t need to be using bash. Even if you are, don’t forget you can always leverage other programs like tr, awk, sed, and many others to do things like this. Sure, performance probably won’t be as good, but if you are worried about performance, why are you writing a script?

Unless you just swear off scripting altogether, it is nice to have some of these tricks in your back pocket. Use them wisely.


Update: EncryptedRegView v1.05

The content below is taken from the original ( Update: EncryptedRegView v1.05), to continue reading please visit the site. Remember to respect the Author & Copyright.

Fixed the external drive feature to work properly if you sign in with a Microsoft account.
Be aware that in order to decrypt DPAPI-encrypted information created while you were signed in with a Microsoft account (on Windows 10 or Windows 11), you have to provide the random DPAPI password generated for your Microsoft account instead of the actual login password. You can find this random DPAPI password with the MadPassExt tool.
Fixed bug: EncryptedRegView failed to properly handle large Registry values (more than 16,344 bytes) in external Registry files.



Office 365 Identity Management with DirSync without Exchange Server On-Premises

The content below is taken from the original (Office 365 Identity Management with DirSync without Exchange Server On-Premises), to continue reading please visit the site. Remember to respect the Author & Copyright.

This post describes how users, groups and contacts are provisioned in Office 365 from the on-premises Active Directory. By using DirSync, these objects are created in and synchronized to Office 365. Without an Exchange server and the Exchange management tools in place, it is not always obvious how these objects should be created.

The following sections describe the procedures you can follow without Exchange or the Exchange management tools in place.

IMPORTANT NOTE
The sections below only specify the basic actions you need to perform in Active Directory to have the object appear in the right place in Office 365 (user, security group, mailbox, distribution group, contact). Note that almost all properties of these objects need to be set in Active Directory. If you want to hide a distribution group from the address book or you want to configure moderation for a distribution group, you have to know the property in Active Directory that’s responsible for the setting, set the value and perform directory synchronization. You will also need to upgrade the Active Directory schema with Exchange Server 2010 schema updates. You cannot use the Exchange Server 2010 System Manager without having at least one Exchange Server 2010 role installed on-premises.

Create a user account

Create a regular user account in Active Directory. This user account will be replicated by DirSync and it will appear in the Users list in the portal (http://bit.ly/2fwE0tK).

Important: set the user logon name to a value with a suffix that matches the suffix used for logging on to Office 365. For instance, if you log on with first.last@xylos.com in Office 365, set the UPN to that value:
clip_image002

User accounts without a mailbox (or any other license) can be used in Office 365 to grant permissions such as Billing Administrator or Global Administrator. A user account like this is typically used to create a DirSync service account.

Create a user account for a user that needs a mailbox

Create a user account as above. Set the user’s primary e-mail address in the email attribute or you will get an onmicrosoft.com address only:

clip_image004

When this user is synchronized and an Exchange Online license is added in the portal, a mailbox will be created that has the E-mail address in the E-mail field as primary SMTP address. Automatically, a secondary SMTP address is created with prefix@tenantid.onmicrosoft.com:

clip_image006

What if the user needs extra SMTP addresses?

  • You cannot set extra SMTP addresses in Exchange Control Panel (or Remote PowerShell) because the object is synchronized with DirSync.
  • In the on-premises Active Directory you need to populate the proxyAddresses attribute of the user object. You can set the values in this field with ADSIEdit or Active Directory Users and Computers (Windows Server 2008 ADUC and higher with Advanced Features turned on)
  • In the proxyAddresses field, make sure that you also list the primary SMTP address with SMTP: (in uppercase) in front of the address. Secondary addresses need smtp: (in lowercase) in front of the address; see the PowerShell sketch after this list.
    clip_image008
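
If you prefer scripting this over ADSIEdit, the ActiveDirectory PowerShell module can stamp the attribute directly. A minimal sketch; the user identity and the addresses are placeholders for your own values:

# Placeholder identity and addresses; requires the ActiveDirectory RSAT module
Set-ADUser -Identity "first.last" -Replace @{
    proxyAddresses = @(
        "SMTP:first.last@xylos.com",  # primary address: uppercase SMTP:
        "smtp:f.last@xylos.com"       # secondary address: lowercase smtp:
    )
}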

Note: instead of editing the proxyAddresses field directly, you can use a free (but at this point in time beta) product: http://bit.ly/2fPkqf6. The tool adds the following tabs to Active Directory Users and Computers:

  • O365 Exchange General: set display name, Email address, additional Email addresses and even a Target Email Address (for mail redirection)
  • O365 Custom Attributes: set custom attributes in AD for replication to Office 365
  • O365 Delivery Restrictions: accept messages from, reject messages from
  • O365 Photo: this photo will appear on the user’s profile and will be used by Lync Online as well
  • O365 Delegates: to set the publicDelegates property

When a user is created in AD, you can use the additional tabs this tool provides to set all needed properties at once.

To summarize the actions for a mailbox:

  • Create a user in ADUC with the user logon name (UPN) and e-mail address to the primary e-mail address of the user (UPN and e-mail address do not have to match but it’s the most common case)
  • Make sure the user has a display name (done automatically for users if you specify first, last and full name in the AD wizard)
  • Set proxyAddresses manually or with the MessageOps add-in to specify additional e-mail addresses (with smtp: in the front) and make sure you also specify the primary e-mail address with SMTP: in the front.
  • Let DirSync create and sync the user to Office 365
  • Assign an Exchange Online license to the user. A mailbox will be created with the correct e-mail addresses.

Create a security group

Create a security group in Active Directory. The group will be synchronized by DirSync and appear in the Security Groups in the portal. The group will not appear in the Distribution Groups in Exchange Online (obviously).

Create a distribution group

Create a distribution group in Active Directory. In the properties of the group set the primary e-mail address in the E-mail address field:

clip_image010

In addition to the e-mail address, the group object also needs a display name (displayname attribute). If the distribution group in AD has an e-mail address and a display name, the group will appear in the Distribution Groups list in Exchange Online after synchronization.

Note that specifying members and alternate e-mail addresses has to be done in the local Active Directory as well. If you have installed the MessageOps add-in, you can easily set those properties.
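
The stamping of the two required properties can also be scripted. A sketch using the ActiveDirectory PowerShell module; the group name and address are placeholders:

# Placeholder group and address; requires the ActiveDirectory RSAT module
Set-ADGroup -Identity "Sales DL" -Replace @{
    mail        = "sales@xylos.com"
    displayName = "Sales"
}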

Create a mail-enabled distribution group

You can add a display name and e-mail address to a standard security group to mail-enable the group. After stamping those two properties, the group will appear in the list of Distribution Groups in Exchange Online. When you list groups with the Get-Group cmdlet, you will see the following:

clip_image012

You can stamp the properties manually or use the MessageOps add-in to set these properties easily.

Create a contact

Create a contact object in Active Directory. In the properties of the created object, fill in the E-mail field in the General tab.

Conclusion

Although DirSync makes it easy to create directory objects in Office 365, without an Exchange Server and the Exchange management tools it is not always obvious how to set the needed properties in order to correctly synch these objects. If you find it too much of a hassle to set the required properties on your local Active Directory objects, there are basically two things you can do:

  • Turn off Directory Synchronization and start mastering directory objects in the cloud
  • Install at least one Exchange Server 2010 SP1 server so that you can use the Exchange management tools