Hi, I’m Victor from https://azureprice.net. We’ve added some new features to the tool, and I want to highlight them quickly in this post because they’re great. There is a small gift at the end of this email that I hope will make you glad.
Interesting Fact: Based on answers in our questionnaire, azureprice.net helped 31.6% of users save from $100 to $1,000 per year, and 21.1% responded from $1,000 to $10,000 per year (wow!).
Alternative and Similar VMs
Using some light flavor of machine learning, we try to find the most similar VMs based on performance, CPUs, RAM, NUMA nodes, etc. It works very well for mid-size and large VMs, and the savings can easily be around 15-30%.
Are you lost when you see a VM name like Standard_E32-16ads_v5? We added a quick explainer. Just hover your mouse over the VM name, and you will see something like this:
Organizations often decide to move their applications from on-premises environments to the cloud with little to no architecture changes. This migration strategy is advantageous for large-scale applications that need to satisfy specific business goals, such as launching a product on an accelerated timeline or exiting an on-premises data center. A rehost migration strategy lets customers achieve cloud benefits such as reduced cost, increased flexibility, scalability, agility, and high availability, while also reducing the migration risk that comes with a tight timeline.
AWS Application Migration Service (MGN) is the primary migration service recommended for rehost (lift-and-shift) migrations to AWS Cloud. AWS MGN supports both agent-based and agentless snapshot approaches to replicate servers from on-premises to AWS. In this post, we will explain the differences between the two methods and provide guidance for when to choose each one. Furthermore, we will walk through an example that demonstrates how to migrate a source environment hosted on vCenter to AWS using the Agentless snapshot based replication that has been recently added to AWS MGN.
Let’s start by discussing the agent-based replication. First, it supports block-level replication from virtually any source environment. The source environment for the replication can be any supported Operating System (OS) on physical servers, virtual servers that are on-premises, or virtual machines (VMs) on other cloud providers such as Azure or GCP. Second, the agent-based replication supports Continuous Data Protection (CDP). CDP keeps the source environment in sync with the replication server in near real-time after the initial replication has finished. This provides a short cutover window and puts the Recovery Point Objective (RPO) provided by AWS MGN in the sub-second range for most cases.
To receive these benefits, we recommend that customers use the agent-based replication when possible. However, organizational and security policies, or limited server access, may prevent installation of the AWS replication agent on every server. Additionally, although automation orchestrations are built on top of AWS MGN to streamline agent installation and target environment setup, learning to use these solutions and integrating them with the organization’s platform might introduce additional tasks that customers want to avoid.
If any of those scenarios applies, then the AWS MGN Agentless approach may be another solution for the migration. For the Agentless approach, you must consider the following:
AWS MGN Agentless approach currently only supports vCenter as a source environment.
AWS MGN Agentless uses a process called “snapshot shipping” rather than block-level replication. This is a long-running process responsible for taking periodic VMware snapshots of the discovered source VMs and sending them to AWS. The first snapshot includes the entire disk contents, and subsequent snapshots only sync the disk changes. After the process completes, it creates a group of EBS volumes in your target AWS account that you can later use to launch your Test or Cutover instances.
The AWS MGN vCenter client must be installed on a dedicated VM running in your vCenter environment.
Now that both migration methods have been discussed, let’s walk through an example of how to use the agentless replication to replicate a vCenter environment to AWS.
Solution overview
The following diagram depicts the AWS MGN agentless replication architecture.
To demonstrate this setup, I use an ESXi source environment with a vCenter appliance v6.7 running on an m5.metal EC2 instance in the eu-west-1 Region. I created 4 VMs (Ubuntu 18.04, CentOS8, Windows 2016, and Windows 2019). After making sure that the connectivity requirements for this replication are met (more about that later), I install the AWS MGN Agentless client, which will start discovering my VMs and replicating them to my destination Region on AWS. Next, I will walk you through the details.
Figure 1: Agentless Architecture
1 – Setting up the destination environment on AWS
Before I’m able to install the MGN vCenter Appliance into my source environment, I need to complete the following initial setup in the AWS Region to which I will replicate the vCenter environment.
2) Create a Virtual Private Cloud (VPC) with two subnets. We will use the first subnet for the AWS MGN staging area. The second subnet will be the destination subnet to which we will replicate the source environment servers. For more details on preparing the MGN networking setup, check Networking Setting Preparations.
3) Initialize AWS MGN: This process is required when you use AWS MGN for the first time. During initialization you will be directed to create a Replication Settings template. This process also creates the IAM Roles needed for the service to work. For more details check Initialize Application Migration Service.
2 – Setting up the source environment on vCenter
I chose to download and install the MGN vCenter Appliance on CentOS8 VM in my source environment. Before I start the installation, I make sure the following networking requirements are satisfied on the VM. For more details on setting up networking for vCenter refer to this link.
Egress TCP 443 from CentOS8 VM to the vCenter API. To check this connectivity in my lab, I use Telnet (or any other connectivity test tool) to connect from CentOS VM to the vCenter endpoint, and I confirm that it’s connected.
Figure 2: Telnet vcsa
Egress TCP 443 from CentOS8 to the AWS MGN API endpoint, which is mgn.eu-west-1.amazonaws.com in my case. Make sure that you replace the Region in the endpoint with your actual destination Region if you use a Region that is different from eu-west-1.
Figure 3: Telnet MGN
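For reference, the two checks look something like this when run from the CentOS8 VM (the vCenter hostname here is a placeholder for my lab; substitute your own vCenter address and destination Region):
telnet vcsa.lab.example 443
telnet mgn.eu-west-1.amazonaws.com 443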
Once the networking configuration has been verified, the next step is to download and install the AWS MGN vCenter Appliance into the CentOS VM in your source environment.
1) The MGN vCenter Appliance installer requires Python3, so before I start the download, I connect to the CentOS VM and install Python3:
sudo yum install python3 -y
2) The installation also requires you to install the VMware Virtual Disk Development Kit (VDDK) v6.7 EP1 to replicate disk changes to the destination environment. You can download it here. It requires a VMware Customer Connect account.
3) Now I’m ready to download the MGN vCenter Appliance. The URL to download will vary based on your region. For my lab environment, I use eu-west-1, so my download URL will look like the following:
The installer will prompt you to enter the following details:
AWS Access Key ID: noted in step 1.1.
AWS Secret Access Key: noted in step 1.1.
AWS Region Name: destination region on AWS.
vCenter IP or hostname: The IP or hostname where the server appliance is running.
vCenter port: This is usually 443. If you have vCenter listening on a different port, specify that here. If not, just hit enter.
vCenter username: The username to log in to vCenter. Check this link for details on permissions that this vCenter user needs.
vCenter password: The password associated with that vCenter username.
For the next question on the vCenter root CA certificate, I pressed Enter to disable SSL certificate validation.
Path to VDDK tarball: This is where I provided the location of the VDDK tar file that I downloaded in step 2.2.
For the next two questions on resource tags, I use default and press Enter.
The installer will now install the MGN vCenter client and register it with AWS MGN in your destination environment. Once this is done, all of the VMs in your vCenter will be added to the AWS MGN dashboard in the DISCOVERED state, as we will detail in the next section.
Figure 5: MGN Agent Installation
3 – Replicate source environment and cutover
Now that I’ve installed the MGN vCenter appliance, I go to my AWS account in the same Region that I specified above and open the AWS MGN console to start replicating the 4 VMs in my source environment. From the MGN console, I select Source servers from the menu. The Discovered source servers filter provides a list of servers discovered by the AWS MGN client that haven’t yet begun replicating.
Figure 6: MGN Console
After selecting the discovered source servers, I can see 4 VMs from my source environment. The CentOS VM that I used as the AWS MGN vCenter Appliance will neither be listed here nor replicated. Also note that the actual vCenter appliance from my source environment will show in the MGN console as a VM that we should not select for replication.
Figure 7: MGN Console discovered 4 vms
From here, select the servers that you’d like to replicate. For example, to replicate the VM that runs Ubuntu, select the checkbox for the VM, go to the Replication dropdown, and choose Start data replication.
Figure 8: MGN – Start replicating
This will start the snapshot replication from the vCenter source environment to my destination region on AWS. After some time, it will show as ‘Healthy’ in the Data replication status. This can be seen by switching back to Active source servers in the filtering menu. Find more details about launching Testing and Cutover instances in the AWS MGN documentation.
Figure 9: MGN – ready for testing
Then, I repeated the same steps to start data replication for the other two servers in my list. After some time, all three servers were showing Migration lifecycle status of Ready for testing.
Figure 10: 3 servers ready for testing
Conclusion
In this post we discussed the two different approaches for migrations that the AWS MGN supports. The agent-based replication is a block-level replication strategy that uses a CDP mode to provide near real-time replication and a short cutover window. It’s always preferred to use agent-based replication. However, if your source environment consists primarily of vCenter, and you can’t fulfill the requirements for installing the AWS MGN agent on every source server, then we recommend using the Snapshot based replication. In the demo above, we walked you through the steps needed to install the AWS MGN vCenter appliance in the source environment, and then showed you how to perform an agentless snapshot replication to AWS.
The content below is taken from the original ( Linux Fu: Bash Strings), to continue reading please visit the site. Remember to respect the Author & Copyright.
If you are a traditional programmer, using bash for scripting may seem limiting sometimes, but for certain tasks, bash can be very productive. It turns out, some of the limits of bash are really limits of older shells and people code to that to be compatible. Still other perceived issues are because some of the advanced functions in bash are arcane or confusing.
Strings are a good example. You don’t think of bash as a string manipulation language, but it has many powerful ways to handle strings. In fact, it may have too many ways, since the functionality winds up in more than one place. Of course, you can also call out to programs, and sometimes it is just easier to make a call to an awk or Python script to do the heavy lifting.
But let’s stick with bash-isms for handling strings. Obviously, you can put a string in an environment variable and pull it back out. I am going to assume you know how string interpolation and quoting works. In other words, this should make sense:
echo "Your path is $PATH and the current directory is ${PWD}"
The Long and the Short
Suppose you want to know the length of a string. That’s a pretty basic string operation. In bash, you can write ${#var} to find the length of $var:
#!/bin/bash
echo -n "Project Name? "
read PNAME
if (( ${#PNAME} > 16 ))
then
echo Error: Project name longer than 16 characters
else
echo ${PNAME} it is!
fi
The “((” forms an arithmetic context which is why you can get away with an unquoted greater-than sign here. If you don’t mind using expr — which is an external program — there are at least two more ways to get there:
echo ${#STR}
expr length "${STR}"
expr match "${STR}" '.*'
Of course, if you allow yourself to call outside of bash, you could use awk or anything else to do this, too, but we’ll stick with expr as it is relatively lightweight.
Swiss Army Knife
In fact, expr can do a lot of string manipulations in addition to length and match. You can pull a substring from a string using substr. It is often handy to use index to find a particular character in the string first. The expr program uses 1 as the first character of the string. So, for example:
#!/bin/bash
echo -n "Full path? "
read FFN
LAST_SLASH=0
SLASH=$( expr index "$FFN" / ) # find first slash
while (( $SLASH != 0 ))
do
let LAST_SLASH=$LAST_SLASH+$SLASH # point at next slash
SLASH=$(expr index "${FFN:$LAST_SLASH}" / ) # look for another
done
# now LAST_SLASH points to last slash
echo -n "Directory: "
expr substr "$FFN" 1 $LAST_SLASH
echo -or-
echo ${FFN:0:$LAST_SLASH}
# Yes, I know about dirname but this is an example
Enter a full path (like /foo/bar/hackaday) and the script will find the last slash and print the name up to and including the last slash using two different methods. This script makes use of expr but also uses the syntax for bash‘s built in substring extraction which starts at index zero. For example, if the variable FOO contains “Hackaday”:
${FOO} -> Hackaday
${FOO:1} -> ackaday
${FOO:5:3} -> day
The first number is an offset and the second is a length if it is positive. You can also make either of the numbers negative, although you need a space after the colon if the offset is negative. The last character of the string is at index -1, for example. A negative length is shorthand for an absolute position from the end of the string. So:
${FOO: -3} -> day
${FOO:1:-4} -> ack
${FOO: -8:-4} -> Hack
Of course, either or both numbers could be variables, as you can see in the example.
Less is More
Sometimes you don’t want to find something, you just want to get rid of it. bash has lots of ways to remove substrings using fixed strings or glob-based pattern matching. There are four variations. One pair of deletions remove the longest and shortest possible substrings from the front of the string and the other pair does the same thing from the back of the string. Consider this:
TSTR=my.first.file.txt
echo ${TSTR%.*} # prints my.first.file
echo ${TSTR%%.*} # prints my
echo ${TSTR#*fi} # prints rst.file.txt
echo ${TSTR##*fi} # prints le.txt
Transformation
Of course, sometimes you don’t want to delete, as much as you want to replace some string with another string. You can use a single slash to replace the first instance of a search string or two slashes to replace globally. You can also fail to provide a replacement string and you’ll get another way to delete parts of strings. One other trick is to add a # or % to anchor the match to the start or end of the string, just like with a deletion.
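For example, reusing the file name from the previous section (these are plain bash pattern substitutions and should behave the same in any reasonably recent bash):
TSTR=my.first.file.txt
echo ${TSTR/file/data}  # prints my.first.data.txt (first match replaced)
echo ${TSTR//i/I}       # prints my.fIrst.fIle.txt (every match replaced)
echo ${TSTR/#my/our}    # prints our.first.file.txt (match anchored to the start)
echo ${TSTR/%txt/bak}   # prints my.first.file.bak (match anchored to the end)
echo ${TSTR/.first/}    # prints my.file.txt (empty replacement deletes the match)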
Some of the more common ways to manipulate strings in bash have to do with dealing with parameters. Suppose you have a script that expects a variable called OTERM to be set but you want to be sure:
REALTERM=${OTERM:-vt100}
Now REALTERM will have the value of OTERM or the string “vt100” if there was nothing in OTERM. Sometimes you want to set OTERM itself so while you could assign to OTERM instead of REALTERM, there is an easier way. Use := instead of the :- sequence. If you do that, you don’t necessarily need an assignment at all, although you can use one if you like:
echo ${OTERM:=vt100} # now OTERM is vt100 if it was empty before
You can also reverse the sense so that you replace the value only if the main value is not empty, although that’s not as generally useful:
echo ${DEBUG:+"Debug mode is ON"} # reverse -; no assignment
A more drastic measure lets you print an error message to stderr and abort a non-interactive shell:
REALTERM=${OTERM:?"Error. Please set OTERM before calling this script"}
Just in Case
Converting things to upper or lower case is fairly simple. You can provide a glob pattern that matches a single character. If you omit it, it is the same as ?, which matches any character. You can elect to change all the matching characters or just attempt to match the first character. Here are the obligatory examples:
NAME="joe Hackaday"
echo ${NAME^} # prints Joe Hackaday (first match of any character)
echo ${NAME^^} # prints JOE HACKADAY (all of any character)
echo ${NAME^^[a]} # prints joe HAckAdAy (all a characters)
echo ${NAME,,} # prints joe hackaday (all characters)
echo ${NAME,} # prints joe Hackaday (first character matched and didn't convert)
NAME="Joe Hackaday"
echo ${NAME,,[A-H]} # prints Joe hackaday (apply pattern to all characters and convert A-H to lowercase)
Recent versions of bash can also convert upper and lower case using ${VAR@U} and ${VAR@L} along with just the first character using @u and @l, but your mileage may vary.
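Here is a quick sketch of those operators, assuming a new enough bash (they appeared around bash 5.1; older shells will just complain about a bad substitution):
NAME="joe Hackaday"
echo ${NAME@U} # prints JOE HACKADAY
echo ${NAME@u} # prints Joe Hackaday
echo ${NAME@L} # prints joe hackaday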
Pass the Test
You probably realize that when you do a standard test, that actually calls a program:
if [ $f -eq 0 ]
then ...
If you do an ls on /usr/bin, you’ll see an executable actually named “[” used as a shorthand for the test program. However, bash has its own test in the form of two brackets:
if [[ $f == 0 ]]
then ...
That test built-in can handle regular expressions using =~ so that’s another option for matching strings:
if [[ "$NAME" =~ [hH]a.k ]] ...
Choose Wisely
Of course, if you are doing a slew of text processing, maybe you don’t need to be using bash. Even if you are, don’t forget you can always leverage other programs like tr, awk, sed, and many others to do things like this. Sure, performance won’t be as good — probably — but if you are worried about performance why are you writing a script?
Unless you just swear off scripting altogether, it is nice to have some of these tricks in your back pocket. Use them wisely.
The content below is taken from the original ( Update: EncryptedRegView v1.05), to continue reading please visit the site. Remember to respect the Author & Copyright.
Fixed the external drive feature to work properly if you sign in with a Microsoft account.
Be aware that in order to decrypt DPAPI-encrypted information created while you were signed in with a Microsoft account (on Windows 10 or Windows 11), you have to provide the random DPAPI password generated for your Microsoft account instead of the actual login password. You can find this random DPAPI password with the MadPassExt tool.
Fixed bug: EncryptedRegView failed to handle properly large Registry values with more than 16344 bytes on external Registry files.
We recently announced a new goal of equipping more than 40 million people with Google Cloud skills. To help achieve this goal, we’re hosting Cloud Learn from Dec. 8-9 (for those in Europe, the Middle East, or Africa, the event will be from Dec. 9-10, and for those in Japan, you can access the event here), a no-cost digital training event for developers, IT professionals, and data practitioners at all career levels. The interactive event will have live technical demos, Q&As, career development workshops, and more, covering everything from Google Cloud fundamentals to certification prep.
Here’s a more in-depth look at what to expect from Cloud Learn:
Hear from Google Cloud executives and customers
Thomas Kurian, Google Cloud’s CEO, and I will kick off the first day by discussing how you can uplevel your career. The second day will begin with technical leaders from Twitter, Lloyds Banking Group, and Ingka Group Digital speaking with John Jester, our vice president of customer experience, about the impact of Google Cloud training and certifications they’ve seen in their organizations.
Afterwards, you can choose from role-based tracks and join the training sessions most relevant to you.
Training for developers
For developers, Kubernetes expert Kaslin Fields will be guiding you through the following trainings during the first day: Introduction to Building with Kubernetes, Create and Configure Google Kubernetes Engine (GKE) Clusters, Deploy and Scale in Kubernetes, and Securing GKE for Your Google Cloud Platform Access.
Google customer engineers Murriel Perez McCabe and Jay Smith will discuss how to prepare for the Google Cloud Professional Cloud Developer and Professional Cloud DevOps Engineer certifications on the second day. Jay will also walk you through a live demo of how to build a serverless app that creates PDF files with Cloud Run.
Carter Morgan, a Google Cloud developer advocate, will end the second day with a session on actionable strategies for managing imposter syndrome in tech.
Learning opportunities for IT professionals
IT professionals will have the opportunity on day one to learn from Jasen Baker, a technical trainer, how to get started with Google Cloud. Jasen will walk you through how to run compute, store and secure your data, as well as deploy and monitor applications.
On the second day, you can hear from Google Cloud Certified Fellow Konrad Clapa and Cori Peele, a Google Cloud customer engineer, about how to prepare for Google Cloud’s Associate Cloud Engineer and Professional Cloud Architect certifications.
Google Cloud experts will also take you through a live demo of how to create virtual machines that run different operating systems using the Google Cloud Console and the gcloud command line. Day two will conclude with a discussion from leadership consultant, Selena Rezvani, on how to negotiate for yourself at work, and speak up for what you want and need.
Training sessions for data practitioners
Lak Lakshmanan, Google Cloud’s analytics and AI solutions director, and product manager Leigha Jarett will show you how to use BigQuery, Cloud SQL, and Spark to dive into recommendation and prediction systems on the first day. They’ll also teach you how to use real time dashboards and derive insights using machine learning.
Author Dan Sullivan and Google Cloud learning portfolio manager Doug Kelly will begin the second day with a discussion on how to earn Google Cloud’s Professional Data Engineer and Professional Machine Learning Engineer certifications. You’ll also learn how Google Cloud Video Intelligence makes videos searchable and discoverable by extracting metadata with an easy to use REST API through a live demo on day two.
Cross cultural business speaker Jessica Chen will end the last day with actionable communication tips and techniques to lead in a virtual and hybrid world.
Register here to save your virtual seat at Cloud Learn.
The newly released virtual machine selector lets you quickly find the Azure VMs and disk storage options that meet your requirements. Localized in 26 languages, the tool guides your selection based on workload categories, operating systems, and Azure regions of your choice. The virtual machine selector is integrated with the pricing calculator.
The content below is taken from the original ( New: Product Key Scanner v1.00), to continue reading please visit the site. Remember to respect the Author & Copyright.
Product Key Scanner is a tool that scans the Registry of the Windows operating system and finds the product keys of Windows and other Microsoft products. You can scan the Registry of your current running system, as well as the Registry from an external hard drive plugged into your computer.
When scanning the product keys of your current running system, you can also search for product keys stored in the BIOS and search for product keys by using WMI.
“They don’t make them like they used to.” It might be a cliché, it might not even be entirely true, but there’s something special about owning a piece of hardware that was built to a much higher standard than most of its contemporaries, whether it’s that bulletproof Benz from 1992 or that odd fridge from 1987 that just seems to last forever. For laptop aficionados, the Thinkpad series from IBM and Lenovo is the ne plus ultra: beloved for their sturdy construction and rich feature set, they have been used anywhere from the United Nations to the International Space Station. The T60 and T61 (introduced in 2006) are especially famous, being the last generation sporting IBM logos and such classic features as 4:3 displays and infrared ports.
The thing is, even the best hardware eventually becomes obsolete when it can no longer run modern software: with a 2.0 GHz Core Duo and 3 GB of RAM you can still browse the web and do word processing today, but you can forget about 4K video or a 64-bit OS. Luckily, there’s hope for those who are just not ready to part with their trusty Thinkpads: [Xue Yao] has designed a replacement motherboard that fits the T60/T61 range, bringing them firmly into the present day. The T700 motherboard is currently in its prototype phase, with series production expected to start in early 2022, funded through a crowdfunding campaign.
Designing a motherboard for a modern CPU is no mean feat, and making it fit an existing laptop, with all the odd shapes and less-than-standard connections, is even more impressive. The T700 has an Intel Core i7 CPU with four cores running at 2.8 GHz, while two RAM slots allow for up to 64 GB of DDR4-3200 memory. There are modern USB-A and USB-C ports, as well as a 6 Gbps SATA interface and two M.2 slots for your SSDs.
As for the display, the T700 motherboard will happily connect to the original screens built into the T60/T61, or to any of a range of aftermarket LED based replacements. A Thunderbolt connector is available, but only operates in USB-C mode due to firmware issues; according to the project page, full support for Thunderbolt 4 is expected once the open-source coreboot firmware has been ported to the T700 platform.
We love projects like this that extend the useful life of classic computers to keep them running way past their expected service life. But impressive though this is, it’s not the first time someone has made a replacement motherboard for the Thinkpad line; we covered a project from the nb51 forum back in 2018, which formed the basis for today’s project. We’ve seen lots of other useful Thinkpad hacks over the years, from replacing the display to revitalizing the batteries. Thanks to [René] for the tip.
Quadcopter type drones can be flown indoors, but unless you have a lot of space, it usually just ends in a crash. The prospect of being hit in the face by the propellor blades, spinning at 10k RPM doesn’t bear thinking about, and then there’s the noise. So, as a solution for indoor photography, or operating in public spaces, they are not viable. Japanese mobile operator DOCOMO has a new take on an old idea; the blimp. But, surely even a helium filled vehicle needs blades to steer around the room, we hear you cry? Not so, if you use a pair of specialised ultrasonic transducer arrays to move the air instead! (Video, embedded below)
Details are scarce, but DOCOMO have fitted a helium balloon with modules on either side that can produce a steerable thrust, allowing the vehicle to effect all the expected aerial manoeuvres with ease and grace. The module at the bottom contains the control electronics, an upwards facing RGB LED for some extra bling, and of course a video camera to capture those all-important video shots.
We’d love to find a source for those ultrasonic transducer devices, and can only guess at the physical arrangement that allows for air to pass in one direction only, to effect a net thrust. We can find a few research papers hinting at the ability to use ultrasound to propel through air, like this one (bah! IEEExplore Paywall!) but to our knowledge, this technology is not quite in the hands of hackers just yet.
Microsoft 365 administrators need to manage users and their licenses efficiently to reduce license costs. Also, it is necessary to understand users’ requirements before assigning the license and identify unused licenses to optimize license management.
If you are a small organization, you can use the Microsoft 365 admin center to assign and monitor licenses. But the admin center is not practical for large organizations. In that case, you can use PowerShell cmdlets to manage licenses. However, if you are new to PowerShell, it will be challenging to assign or remove licenses in bulk and generate license reports.
To overcome the difficulties, we have created an All-in-One PowerShell script for M365 license management. Yes! A single script can perform more than 10 Office 365 license management and reporting activities.
Allows you to perform 6 license management actions, including adding or removing licenses in bulk.
License Name is shown with its friendly name like ‘Office 365 Enterprise E3’ rather than ‘ENTERPRISEPACK’.
The script can be executed with an MFA-enabled account too.
Exports the report result to CSV.
Exports license assignment and removal log file.
The script is scheduler-friendly. i.e., you can pass the credentials as a parameter instead of saving them inside the script.
Office 365 License Reporting and Management using PowerShell Script:
As earlier said, you can use this script to generate various license reports and to perform license management actions. We have listed a few significant actions here.
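For scheduled or unattended runs, the script can be invoked with the action and the credentials passed as parameters, roughly like this (the admin UPN and password are placeholders; substitute your own account):
.\O365LicenseReportingAndManagement.ps1 -Action 1 -UserName admin@contoso.com -Password XXX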
You can use the above format to automate the report generation. If the admin account has MFA, you need to disable MFA based on the Conditional Access policy to make it work.
Method 3: To perform multiple actions without executing the script several times, you can use the -MultipleActionsMode param.
It will show the main menu until you terminate the script by providing input as 0.
Unlock the Full Potential of this Script
The script supports the following in-built params to ease your Office 365 license management and reporting needs.
1. Action – To directly specify a reporting or management action instead of selecting it from the main menu.
2. LicenseName – To get users with a specific license plan.
3. UserName and Password – To schedule the PowerShell script without interactive login.
4. MultipleActionsMode – To show the main menu again after completing an action. It will help you perform multiple actions continuously without executing the script again and again.
Export all Licensed Users in Office 365:
To get a list of licensed users in your organization, run the script as follows or select the required action from the main menu.
.\O365LicenseReportingAndManagement.ps1 -Action 1
Using this report, you can find licensed users and their assigned licenses, license friendly names, account status, etc.
Sample Output:
Note: You can refer to our earlier blog to get a detailed license report along with the assigned services and their status.
Get Unlicensed Users in Office 365 using PowerShell:
To view all the unlicensed users in your organization, run the below code directly or choose the required option from the menu.
.\O365LicenseReportingAndManagement.ps1 -Action 2
By referring to this report, admins can identify users who don’t have any license plan and assign them a license, if required.
Sample Output:
Export List of Users with a Specific License Type:
To get Office 365 users with a specific license plan, run the script as follows.
.\O365LicenseReportingAndManagement.ps1 -Action 3
It will ask for a license plan. After entering the license plan, the script will list the licensed users matching that license.
For example, to get a list of users with the E3 license, enter “Contoso:EnterprisePack” when the script prompts for the license name.
You can also pass the License Plan as a parameter as shown below.
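For instance, something along these lines exports the E3 report in one shot (the “Contoso” tenant prefix is a placeholder for your own tenant name):
.\O365LicenseReportingAndManagement.ps1 -Action 3 -LicenseName "Contoso:ENTERPRISEPACK"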
The sample output lists all the users with E3 license.
Get Disabled Users Still Licensed in Office 365:
Generally, former employees’ accounts are disabled after they leave the office. In some situations, you may want to recover the Office 365 license from the departed users so that you can assign them to some other users. To find licensed disabled users, run the script as follows.
.\O365LicenseReportingAndManagement.ps1 -Action 4
The exported report contains UPN, Display Name, License Plan, License Plan Friendly Name, Department, and Job Title.
Sample Output:
Office 365 License Usage Report:
Office 365 license usage report lists all the subscriptions available in your organization, along with the active license count and assigned license count. By referring to this report, you can calculate the unassigned license count.
To generate a license usage report, execute the script and select the needed action from the menu. Else, directly run the below code.
.\O365LicenseReportingAndManagement.ps1 -Action 5
Sample Output:
Bulk Assign Office 365 License using PowerShell:
Users must have Office 365 license to use any Microsoft 365 services. Admins can assign the license(s) in bulk by using our PowerShell script. We have covered the most requested use cases below.
Assign a License to Users from CSV:
To assign Office 365 license to multiple users using a CSV file, run the PowerShell script as follows.
The script will ask for the License Name and CSV file path. We have given an example for the input details below.
Input File Format:
The input CSV/txt file must follow the format below: UPNs of users separated by new lines, without a header.
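For example, a minimal input file could look like this (the addresses are placeholders):
john@contoso.com
priya@contoso.com
chen@contoso.com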
Output Log File- Sample
After the script execution, you can refer to the ‘Office365_License_Assignment_Log’ file to know about the license assignment result.
Assign Multiple Licenses to a List of Users:
To assign multiple licenses to Microsoft 365 users, execute the script as shown below.
.\O365LicenseReportingAndManagement.ps1 -Action 7
It will ask for the CSV file location and the licenses to be assigned. You can enter the license names in the following format- contoso:EnterprisePack,contoso:Flow_Free
For eg,
Set Usage Location in Office 365
Before a license can be assigned to users, they must have a ‘Usage Location’. Otherwise, you will receive a ‘License cannot be assigned to a user without a usage location specified’ error. To set the usage location for Office 365 users, we have provided the ‘LicenseUsageLocation’ param.
While running the license assignment use cases, you can specify the ‘LicenseUsageLocation’ param to set usage location to users whose usage location value is empty.
For example,
.\O365LicenseReportingAndManagement.ps1 -Action 7 -LicenseUsageLocation US
Or
.\O365LicenseReportingAndManagement.ps1 -LicenseUsageLocation US
Unassign Licenses from Office 365 Users using PowerShell:
Identifying and reclaiming unused licenses help to optimize the license usage and reduce the license cost. We have covered the most requested license removal techniques below.
Remove All Licenses from a User:
When a user no longer needs licenses or leaves the organization, you can remove all the assigned licenses from that user. By using the below format, you can remove all the licenses from a user account.
.\O365LicenseReportingAndManagement.ps1 -Action 8
After running the above format, the script will ask to enter the user’s identity to unassign the licenses. You can provide UserPrincipalName as an identity.
Remove All Office 365 Licenses for a List of Users in CSV:
When you want to regain the license(s) from former employees and inactive users, you can unassign the licenses in bulk by executing our script in the below format.
.\O365LicenseReportingAndManagement.ps1 -Action 9
The script will prompt for an input CSV. After entering the file path, the script removes the licenses from the user accounts mentioned in the input CSV. After the execution, you can refer to the "Office365_License_Removal_Log" file to check the license removal status.
Remove Specific License from All Users:
You can choose this use case in the following scenarios.
When you want to move from one license plan to another license plan. For e.g., E3 to E5.
After running the above format, the script will ask for the license plan to be removed and then proceeds with license removal. In the end, you can refer to the license removal audit log file for the status.
Remove Licenses from Disabled Users:
Most organizations disable the departed users’ accounts instead of deleting them. To control cost and gain unused licenses, you can remove licenses from disabled users.
To remove all the licenses from all the disabled users, run the script as follows,
.\O365LicenseReportingAndManagement.ps1 -Action 1 -UserName admin@contoso.com -Password XX
If the admin account has MFA, you need to disable MFA based on the Conditional Access policy to make it work.
How to Get Office 365 License Reports in a Simple Way?
If you are tired of running PowerShell cmdlets or scripts, you can try the AdminDroid Office 365 Reporting tool. The tool provides 20+ license reports free of cost to manage your organization’s license needs efficiently.
Additionally, AdminDroid provides 100+ reports and a handful of dashboards completely for free. It includes reports on Users, Licenses, Groups, Group Members, Devices, Login Activities, Password Changes, License Changes, and more. The free version allows you to perform customization, scheduling, and exporting too. Download the free Office 365 reporting tool by AdminDroid and see how it helps you.
Each report provides AI-powered graphical analysis to gain insights and understand the data in a visually appealing manner.
Besides, the AdminDroid Microsoft 365 reporting tool provides 1500+ reports to get detailed information on various Office 365 services like Azure AD, Exchange Online, SharePoint Online, Microsoft Teams, OneDrive for Business, Stream, OneNote, Yammer, etc.
I hope this blog will help you in managing Office 365 licenses and generating license reports. You can share your license management techniques with other admins and us through the comment section.
Not since the Audio-Technica Sound Burger, or Crosley’s semi-recent imitation, have we seen such a portable unit. But that’s not even the most notable part — this thing runs inversely to normal record players. Translation: the record stands still while the player spins, and it sends the audio over Bluetooth to headphones or a speaker.
Inside this portable player is an Arduino Nano driving a 5 VDC motor with a worm gear box. There really isn’t too much more to this build — mostly power, a needle cartridge, and a Bluetooth audio transmitter. There’s a TTP223 touch module on the lid that allows [JGJMatt] to turn it off with the wave of a hand.
[JGJMatt] says this is a prototype/work-in-progress, and welcomes input from the community. Right now the drive system is good and the Bluetooth is stable and able, but the tone arm has some room for improvement — in tests, it only played a small section of the record and skidded and skittered across the innermost and outermost parts. Now, [JGJMatt] is trying two-part arm approach where the first bit extends and locks into position, and then a second arm extending from there and moves around freely.
Commercial record players can do more than just play records. If you’ve got an old one that isn’t even good enough for a thrift store copy of a Starship record, you could turn it into a pottery wheel or a guitar tremolo.
In this blog post I highlight how you can accelerate your application modernization journey by taking advantage of Anthos on bare metal’s support for running on OpenStack. I also introduce the recently published “Deploy Anthos on bare metal on OpenStack” and “Configure the OpenStack Cloud Provider for Kubernetes” guides, and point to different starting points in these guides depending on your expertise and OpenStack environment, so you can make the most of them.
What is OpenStack?
OpenStack is an open source platform that enables you to manage large pools of compute, storage and networking resources. It provides you a uniform set of APIs and an interactive dashboard to manage and control your resources. If you have your own bare metal servers, you can install OpenStack in them and expose your hardware resources to others (other teams or outsiders) to provision VMs, Load Balancers and other compute services. You can think of OpenStack as the equivalent of the software that runs on top of the Google data centers enabling our users to provision compute resources and services via the Google Cloud Console.
Are you using OpenStack?
Many enterprises who were early to invest on acquiring their own hardware and networking equipment needed a uniform platform to manage their infrastructure. Whilst the computing resources were available, there had to be an easy way to manage and expose them to higher-level teams in an easily consumable way. OpenStack was one of the very few platforms that was available for enterprises as early as 2010 to help manage their infrastructure. All the complexity of scheduling, networking and storage in the bare metal environment were taken care of by OpenStack. Thus, multiple of our customers have been long term users of OpenStack to manage their infrastructure. If your enterprise is one that has already invested in your own bare metal and storage servers and networking equipment, then it is highly likely that you might be using OpenStack.
What is Anthos on Bare Metal?
Anthos is Google’s solution to help you manage your Kubernetes clusters running anywhere: Google Cloud, on-premises, and other cloud providers. Anthos enables you to modernize your applications faster and establish consistency across all your environments. Anthos on bare metal is the flavor of Anthos that lets you run Anthos on physical servers managed and maintained by you. By doing this you can leverage your existing investments in hardware, platform and networking infrastructure to host Kubernetes clusters and take advantage of the benefits of Anthos (centralized management, increased flexibility, and developer agility), even for your most demanding applications. An important thing to note is that when you hear “Anthos clusters” we are referring to Kubernetes clusters backed/managed by Anthos.
Can Anthos on Bare Metal make your OpenStack workloads cloud native?
The short answer is—Yes! It can.
We have learnt from our customers that their investments in their own data centers and large fleets of bare metal servers are quite important to them. It is important due to various reasons like data residency requirements, higher level of control over resources, not an easy expense to offset and existing talent who are well versed with the current environment. One or more of these reasons or even others may apply to you as well.
However, with the motion towards containerization and cloud native application development, we want to help you take advantage of the benefits of this application modernization drive whilst continuing to run applications in your OpenStack deployments. We built Anthos to just do that. Anthos on bare metal brings as much of Google Cloud as possible closer to your OpenStack deployment. It lets you run your workload in your well tuned OpenStack infrastructure whilst enabling you to continuously modernize your application stack with key features from Google Cloud – like service mesh, central configuration management, monitoring & alerting and more. Whilst OpenStack helps you manage your underlying compute resources, Anthos on bare metal will enable the management of Kubernetes clusters running in your existing infrastructure. To help you get started with this journey using Anthos on bare metal on OpenStack we have recently published two guides.
Install Anthos on Bare Metal on OpenStack
The recently published “Deploy Anthos on bare metal on OpenStack” guide takes you through the steps of how to install Anthos on bare metal in an existing OpenStack environment. The guide assumes that you have an OpenStack environment that is similar to the one shown in the following diagram. It guides you to install one hybrid cluster on 2 OpenStack virtual machines.
Notice that an OpenStack tenant network has been created which connects to a provider network of this OpenStack deployment. This tenant network will serve as the layer 2 network connection between the virtual machines that we will use to install Anthos on bare metal. This setup has 3 virtual machines:
An admin workstation: this is the virtual machine from which we will carry out the installation process. As part of the installation of Anthos on bare metal, a bootstrap cluster is created. This cluster will be created in the admin workstation and be deleted once the installation is complete.
A control plane node: this is the virtual machine that will run the control plane components of the Anthos cluster.
A worker node: this is the virtual machine that will run the application workloads.
The control plane node and the worker node, both together make up the Anthos on bare metal cluster.
We have also provisioned an OpenStack Octavia Load Balancer on the same tenant network to which the virtual machines are attached. The load balancer is also connected to the provider network via a router. This load balancer setup is important for the follow-up to this installation guide, where we show how to “Configure the OpenStack Cloud Provider for Kubernetes”. By configuring the OpenStack cloud provider for Kubernetes in our Anthos cluster, we can expose our Kubernetes services outside the tenant network. With the OpenStack cloud provider configured, whenever you create a Kubernetes service of type LoadBalancer, the Octavia load balancer is used to assign an IP address to this service. This in turn makes the service reachable from the external provider network.
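As a rough illustration, once the cloud provider is configured, exposing a workload works the same way it would on any other Kubernetes cluster; “hello-app” below is just a placeholder Deployment name:
kubectl expose deployment hello-app --port=80 --target-port=8080 --type=LoadBalancer
kubectl get service hello-app
The EXTERNAL-IP that eventually shows up for the service is allocated by the Octavia load balancer, which makes the workload reachable from the provider network.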
To make following these two guides easy, we have provided all the building blocks required in our public anthos-samples repository. Depending on how much of the setup you already have, you can start at any of the following four stages and continue till the end:
You have no OpenStack environment to experiment with: start by following the “Deploy OpenStack Ussuri on GCE VM” guide to get an OpenStack environment running on a Google Compute Engine VM.
You have your own OpenStack deployment but don’t have an environment configured as shown in the diagram earlier: start by following the “Provision the OpenStack VMs and network setup using Terraform” guide to configure a setup exactly as shown in the diagram. This guide automates the setting up process using Terraform. Thus, you can easily clean your environment once you are done.
You have your own OpenStack deployment and have a setup similar to the diagram already configured: you can directly start following the “Deploy Anthos on bare metal on OpenStack” guide.
You have your own OpenStack deployment, have a setup matching the diagram and have Anthos on bare metal installed: follow the “Configure the OpenStack Cloud Provider for Kubernetes“ guide to expose Kubernetes services outside the tenant network using the OpenStack Octavia Load Balancer.
I recommend that if you have to start from the beginning and go through all four stages, the “Getting started” guide in the GitHub repository is the best place to begin. The steps in the repository do not assume you have an existing OpenStack environment and provide you with all the necessary details to be able to access your services running inside the Anthos cluster — inside the OpenStack virtual machine — inside OpenStack — inside the Google Compute Engine VM — inside Google Cloud. That’s a lot of layers! 🙂
Mary Jo Foley (00:00):
Hi, you’re listening to Petri.com’s MJF Chat show. I am Mary Jo Foley, AKA your Petri.Com community magnate. And I am here to interview tech industry experts about various topics that you, our readers and listeners want to know about. Today’s chat is going to be all about the many Windows Deployment options that are out there. And my special guest, Donna Ryan, who is a Microsoft Mobility, MVP knows a lot about this topic. Hi Donna, and thank you so much for doing this chat and very nice to meet you virtually for the first time.
Donna Ryan (00:42):
It’s lovely to meet you as well, Mary Jo. It’s a privilege and an honor to be on the podcast and just glad to be here.
Mary Jo Foley (00:50):
Ah, thank you so much. So when we came up with the idea of what you were going to talk about, it turned out that this was a real hot button topic, and one laden with acronyms, which I’m going to try to remember to explain and spell out, but I’m sure you’ll help me do that as well. There are a lot of topics we could cover here, everything from Intune to Endpoint Manager, to Autopilot, Azure Virtual Desktop, Windows 365. And on top of that, we got a lot of questions for you on Twitter. So I want to start out with a couple of my own questions.
Donna Ryan (01:24):
Okay.
Mary Jo Foley (01:25):
I am curious, like how do you keep this all in your head? I was thinking at the highest level, do you have some kind of a framework, or a hierarchy, or do you have like a secret periodic table of deployment options on your wall somewhere? Like how do you keep track of all this?
Donna Ryan (01:41):
It’s just experience and I guess, prioritization of what I’m going to remember. I mean, I’m a huge, huge deployment nerd. And so for me, I can remember those things, you know, fairly well.
Mary Jo Foley (01:55):
Nice, nice. There’s so many acronyms in this space, when I was coming up with things and looking at reader questions, I’m like, wow, there are acronyms I’ve never even seen here. And I’ve seen a lot of them over my time. So let’s see, I wanted to talk briefly about Autopilot with you. Because when we were talking about doing this chat, you had mentioned Microsoft had just recently introduced some breaking changes to Autopilot around user assignment. And some people were a little upset about that. So could you dig in there? What’s going on with that?
Donna Ryan (02:26):
So Microsoft had made a change on how certain scenarios and an option that works within Autopilot to address a potential concern with devices being reused and reprovisioned. What they did is they stopped the ability to have the end user’s name already displayed on the OOBE prompt, which some folks in organizations really like having that name right there. And the other component that changed is if the device had been enrolled via pre-provisioning formerly known as White Glove or Self Deploying mode, if the device needed to be redeployed, it needed to be fully deleted out of Intune, which hadn’t been a requirement. Reason being with that reusing of a hardware is there is the potential for devices to not be properly offboarded and then shipped to a new organization, you know, with identifiable data present. And so, yeah, it didn’t break Autopilot completely. It’s removing some features and adding an additional step if you’re re-enrolling. But some people weren’t thrilled by that, but I suppose it’s also understandable.
Mary Jo Foley (03:53):
Right, right. What do you think? Do you think it’s justifiable or a good idea that they were doing this?
Donna Ryan (04:00):
I think it’s generally a good idea. I know over the years there’s been some on an occasion devices getting returned that had been enrolled that are still enrolled in other tenancies and it happens. And so if that helps to mitigate that, I suppose it’s not a bad idea. I know Microsoft is very well aware of the feedback and, you know, trust that they’re going to make some adjustments if need be.
Mary Jo Foley (04:32):
Right. Gotcha. Okay. I’m dying to ask you about Windows 365 and Azure Virtual Desktop, because I get a ton of questions about this. People are always trying to figure out, you know, which one of those two is the one for me. And I was curious if you have like a simple, relatively simple, rule of thumb when you’re telling people about their orgs or IT departments and which one of those two would be better suited to them?
Donna Ryan (05:01):
Absolutely. I guess the rule of thumb that I start out with when I’m having these conversations with my clients is do you already have an existing client virtualization solution and staff that manages it? If so, then AVD is a very easy adoption because it’s made, you know, client virtualization easier to deploy. If that answer is no and they don’t have any expertise, then I think Windows 365 makes more sense because you don’t have to really have any type of that underlying understanding of client virtualization fundamentals and those other components that go into it. There more technically, you know, like right now, if you need to have, you know, Nvidia GPU support, well, that’s not in Windows 365. But generally speaking, you know, if you have folks that already have that knowledge in-house and familiarity, you know, look at AVD, absolutely. But you know, if you just need, you know, quick, simple, easy with little training, Windows 365 is totally the way to go.
Mary Jo Foley (06:11):
Yeah. I had somebody else say to me, if you are comfortable with figuring out your Azure consumption needs and goals, then great, and you can go AVD. If you’re not, then you should stay away from AVD.
Donna Ryan (06:25):
I would generally agree with that, yeah.
Mary Jo Foley (06:26):
Okay, great. Let’s jump into some of the many listener and reader questions we got on Twitter. Scott on Twitter, here’s his question. It’s a little bit involved, but we can go through the acronyms as he brings them up, he said, I’m competent with DISM, Deployment, Image, Servicing and Management and WIMS, Windows Information, I don’t know.
Donna Ryan (06:56):
What is it? Oh, it’s Windows Image Media, I think is what that acronym stands for. The WIM file is what contains your installation of Windows that actually has, you know, the file table, the packages and all that stuff.
Mary Jo Foley (07:11):
Okay. So he knows DISM, he knows WIMS, and MDT, the Microsoft Deployment Toolkit for local imaging requirements, but he wants to take the next step to learn in cloud-based tools like Intune Autopilot and Endpoint Manager. So he said, what would you suggest I do to start? Cause I am a cloud newbie.
Donna Ryan (07:30):
Okay. Well being in the community and being on Twitter is a great first step because, you know, you’ve got plenty of the Microsoft staff PM’s are on there. Our community is very robust and very collaborative. More specifically for resources, there’s two that kind of come to mind off the top of my head at least for, you know, reading and going blog post wise, there’s a windows-noob.com, Niall Brady owns and maintains that. And actually when I started off in my career learning Configuration Manager, Windows-Noob was constantly pulled up. There are absolutely fantastic walk-throughs that cover every facet of that. There’s also a forum that’s on there as well for discussion. So, I mean, that would be a good one. And then Justin Chalfont’s site, I think it’s setupconfigmgr.com. Justin is the owner of Patch My PC and has done wonderful tutorials and videos on you know, everything MEM. So just at the top of my head, those would be two really good places outside of Twitter. When, you know, conferences start to open back up you know, absolutely hitting, you know, the MMS’s, or the user groups. Those are also great places cause we have, you know, industry experts there that are happy to answer questions and go off into those various rabbit holes.
Mary Jo Foley (09:10):
That’s great. Great resources. Okay. Mike Moss on Twitter says my team has not adopted SCCM, System Center Configuration Manager. He says they use Kace for PC, but we use Intune for mobile. So are there any helpers, you know of for swapping out Kace to the new MEM, Microsoft Endpoint Manager?
Donna Ryan (09:35):
So, I don’t generally work with Kace unless it’s helping folks migrate away from Kace to Configuration Manager. Which okay being biased, I’d say yes, always do that. But I did a little bit of poking around cause I saw the tweet. And I don’t see anything that, you know looks like there’s some type of, you know, tie in, into, you know, Intune from Kace I know with Configuration Manager, you can tie into third-party MDM solutions, that’s called coexistence, but I don’t think Intune plays well with any other type of on-prem management. And so my suggestion there would be, you know, maybe talk to, I don’t know who owns Kace now, I think Quest does. You know, talk to their rep to see if they’ve written anything or if they have any type of integration. Otherwise, you know, you can always move to Configuration Manager, but it doesn’t sound like your team’s ready to make that move. So about the best I can do on that one.
Mary Jo Foley (10:38):
Nope, that’s good. Marek, who goes by @technicalflow on Twitter, says what would be the best option for a small company hardware enrollment in the near future? Will WDS, Windows Deployment Services, plus MDT still be working in two to three years or will there be better and simpler options for an SMB to choose from?
Donna Ryan (11:02):
Well, that’s one of those crystal ball type questions. I don’t know any expected lifecycle. I would anticipate that WDS and MDT will still be around. Will they be actively developed? Probably not, but they’re likely to still be here for quite some time. And so there’s that. But are there better and simpler options to choose from? If you own, you know, the Intune licensing and the requisites, you know, we could look at Autopilot. If you need, you know, quick and simple, you know, you’ve got the Azure AD only joined style of Autopilot. You don’t necessarily have to be partnered with anybody to do your device registration. You’ve got PowerShell and modules that you could, you know, upload your own. There’s also some community tools built around that, OSD Cloud, Dave Segura’s latest offering to the community that allows you to kind of merge some of the best parts of imaging with the best parts of Autopilot. So, where the industry is going to be in two to three years? Yeah, if I knew that one yeah, I’d be a millionaire.
Mary Jo Foley (12:25):
Yeah. Those prediction ones are tough, right. Because there’s so many different things going on. Well, you know, how, what will a pandemic look like in two to three years? Will people be working from home and remotely as much as they are now? There’s just so many variables and it’s just pretty much impossible to know how much Windows, you know, kind of what’s even gonna happen with Windows in two to three years.
Donna Ryan (12:45):
Oh, absolutely. Yeah, looking at Windows 365, you know, those of us on the outside had no idea this thing existed, and that’s becoming, you know, an option for organizations that are getting hit by the chip shortage. Instead of having to go buy new machines at inflated prices, you can roll out Windows 365. So maybe the answer is in two to three years, we’re all just using iPads and phones to connect into our Windows instances and paying a monthly recurring fee. Who knows?
Mary Jo Foley (13:14):
Not iPads. I hope not iPads, I’m not an Apple fan.
Donna Ryan (13:17):
Nor am I.
Mary Jo Foley (13:17):
Okay. Another one Chris Gahlsdorf on Twitter said, do you think it’s worth migrating from MDT and config manager to Autopilot for hybrid environments? So pretty much in line with what we’re talking about here, using Configuration Manager, Intune cloud attach.
Donna Ryan (13:41):
So, there’s a couple of things to unpack in that line of questioning. So is it generally worth it? The way I look at it, it depends on what your needs are and what you’re trying to satisfy. You know, if you’re a hundred percent on premises and you have Configuration Manager, do you need Autopilot? No, because you have everything there. Are you looking at, you know, going to a hundred percent pure work from home solution? Then quite possibly. You know, Configuration Manager does have the ability to perform imaging tasks over the internet via Cloud Management Gateway. So that could fill that. It really, you know, comes down to, you know, will this tool do what you need it to do? Now, that being said, the second part of that, you know, is looking at using, you know, ConfigMan, and Intune, and cloud attach.
Donna Ryan (14:35):
I would absolutely encourage them to try Autopilot. Even if their answer is, you know Configuration Manager works better for us. It’s good to know how that tool works, where it’s, you know, its shortcomings are and where it really shines because there are scenarios where Autopilot is just fantastic. You know, in the hybrid environment, if we’re looking at Active Directory, you’re doing the hybrid Azure AD Join. Yeah, that’s not the easiest to maintain in that type of configuration because of the client VPNs that are required to talk to domain controllers. There’s lots of moving parts to that. If the goal is to eventually move to pure AAD, then yeah Autopilot for sure. Cause Autopilot and AAD Join is really cool. On the cloud attached side, you know, if you’ve got the licensing, which, you know, they own ConfigMan, they largely own Intune and vice versa.
Donna Ryan (15:35):
Is it worth it to do that? Yeah, absolutely. You already own the licensing. You’ve got the parts, and that value add that the team added with Tenant Attach really is pretty cool. It used to be that, you know, with Co-Management, it was ConfigMan does this, Intune does that. And that’s all Co-Management really gave us. And that was good enough. But yeah, with the rapid development that comes into Intune and the fact that, you know, it gets updated monthly versus, you know, three times a year with Configuration Manager, the team is able to push down certain new functions and capabilities via, you know, the tenant attach mechanism. You know, you get additional features like Endpoint Analytics and Proactive Remediation, and being able to leverage the power of the Configuration Manager agent to pull off actions in Intune. It really is, you know, taking two very good tools and putting them together to make an even better tool. So if you’ve got those, there is no real downside to, you know, enabling that.
Mary Jo Foley (16:40):
Cool. You know, I am remiss in not asking you this right at the start of the chat, but maybe, could you give us a quick couple sentence definition of Autopilot, because I don’t know that everybody knows what this is. And I’ve been hearing from my contacts that this is selling like gangbusters right now.
Donna Ryan (16:58):
So Autopilot is a way to provision devices over the internet. And what does that mean? It’s sometimes referred to, incorrectly, as like cloud imaging. The difference between Autopilot and imaging is with Autopilot you’re using the Windows image that’s already on the computer. What’s neat about that is you’re not schlepping around WIM files and drivers, because all of that is already on the computer. It allows you to deploy applications and your configurations to these devices, and it just works over the internet. So it is easier to provision. It lacks some of the tight controls over order of operations like you have with Configuration Manager. But you are absolutely right that it is selling like gangbusters. I mean, once that was announced, you know, we started getting requests on that, you know, almost from day one and that’s still a good chunk of what we’re doing on my team at CDW.
Mary Jo Foley (18:10):
Interesting. Interesting. All right, now here’s a fun question. I don’t know the answer to part two, but I’m curious about it. Christian Lehrer on Twitter said, please ask Donna about WIMwitch, and I hear this was a tool that you yourself created. And then he said also ask her about 3D printers and I’m curious.
Donna Ryan (18:35):
And so you’re right, WIMwitch, it’s my community tool. And what she does is perform offline image servicing on WIM files. So what you can do is, you know, apply updates to the WIM file offline. And then when you install the WIM file on the PC, Windows is already patched. But she does more than that too. She can handle language packs, and Features on Demand, registry keys, and she works with Autopilot for existing devices. I’ve got her working with Configuration Manager. I wrote a console extension. She does lots of stuff, but the cool thing is you can control all of this from a GUI, which historically, the community tool solutions that were out there were all command line, which work great. But, you know, for some of us that really are visual, doing some of these more advanced functions was a little bit challenging.
Donna Ryan (19:38):
So that was the goal that I sought to solve, that and to make, you know, Star Trek jokes in there. Cause the main action button there is called “Make It So”. I love Star Trek.
Mary Jo Foley (19:48):
Nice.
Donna Ryan (19:48):
But on the 3D printer side. So yeah, most of my Twitter feed, I think probably a good 25% is me tweeting about my printers. So I’ve got six of them. I can say at this point, yes, I love my 3D printers. I primarily have been printing ships from Star Trek.
Mary Jo Foley (20:09):
Oh wow.
Donna Ryan (20:09):
Which is fun. I’ve started hanging them from my ceiling in the basement. I’ve also got a TARDIS there too, cause you know, that’s a spaceship. I got some Star Wars stuff there too, but yeah, yeah 3D printers is, I’m a huge fan of 3D printing.
Mary Jo Foley (20:27):
Oh, that’s cool. Do you sell that too or no? Or you just do it for yourself?
Donna Ryan (20:31):
I primarily do it for myself. I lack a good skill set when it comes to CAD and design. And so most of these things that you can find online to print for free, have a license that either you can’t sell them or you give attribution. But I’m in it more just to make the machines run because they’re super fun to tinker with, you know.
Mary Jo Foley (20:57):
Nice. I was trying to figure out if there was some weird connection between 3D printing and like Endpoint Manager or something.
Donna Ryan (21:04):
Well, I did print off some clippies, but I think that’s probably about the extent of it.
Mary Jo Foley (21:11):
Okay. One last question from Twitter here. Alex Mags asks, what are the current options for rebuilding machines at remote sites without local file shares and distribution points? Peer-to-peer options?
Donna Ryan (21:27):
So that’s going to largely depend on what kind of toolset you’re going to use. Assuming that there’s Configuration Manager, if they’ve built out the Cloud Management Gateway, which can distribute content, he could assign the cloud DP to provide content to that remote site. Peer-to-peer absolutely works. I’ve done that numerous times. There’s BranchCache, which is fantastic. You know, if we’re not looking at Configuration Manager, then you know, the Cloud Management Gateway thing goes out the window. You know, you could do Autopilot because that’s not site-dependent. You could do, I guess, MDT standalone, that would work. Or if you wanted to go with, you know, like an OSD Cloud type of option, that would function as well. But yeah, and again, if they have ConfigMan, I’d encourage them to go look at, you know, like a cloud DP. They could also pair that up with, you know, oh is it, 2Pint Software has a community tool that allows BranchCache to work in Windows PE. So you could lessen the amount of content coming down over there, you know, WIM files are big, you know, they’re five gigs and change. So peer-to-peer, cloud solutions. Yes.
Mary Jo Foley (22:54):
Nice. All right. My last question for you is there seemed to be a lot of news at Ignite around Endpoint Manager, Intune, Config Manager. Were there any things that you saw or were kind of keeping tabs on from the recent Ignite conference that you want to kind of put on people’s radar, who were curious about the space?
Donna Ryan (23:21):
Yeah, I was kind of more paying attention to the Windows 365 offerings. We now have, you know, the pure Azure AD Join option, cause when that came out it was hybrid. And maybe personally I got excited about that, cause I’ve been keeping an Azure lab where the domain controllers need a site-to-site VPN, which isn’t free, so now I can break that dependency. But yeah, mostly I was focused on the Windows 365 stuff and on Intune. There is the announcement that Remote Control from Intune is coming, which is yay. It’s an additional cost. Boo. So there were some mixed emotions around that type of announcement, but yeah.
Mary Jo Foley (24:13):
Yeah, yep. There was a lot of Windows 365 excitement at Ignite. I saw a lot of people tweeting about that, so, yeah. All right. Well, I wanted to say thank you so much for doing this chat with me and helping answer all these good listener questions.
Donna Ryan (24:29):
Absolutely, it’s my pleasure.
Mary Jo Foley (24:31):
I wanted to also let people know where they can find WIMwitch. If they’re interested in checking out your tool, what’s the best way for them to do that?
Donna Ryan (24:40):
You can just Google it. You’ll probably end up finding a link. I’m part of a fantastic and dare I say, intelligent and good-looking group of IT professionals, consultants, and MVPs at MSEndpointMgr.com. You know, if you go browse over there, go to Tools, you can go to WIMwitch, you can also find it at the URL, msendpointmgr.com/wim-witch, all the instructions are there. I’ve got docs, blog posts. Worst case you know, just ping me on Twitter. Happy to talk about that topic ad nauseum.
Mary Jo Foley (25:19):
Great. Well, thanks again for doing this today.
Donna Ryan (25:22):
You’re welcome.
Mary Jo Foley (25:22):
Thanks. For everyone else who’s listening right now or reading the transcript of this chat, I’ll be posting more soon about who my next guest after Donna is going to be. And once you see that you can submit questions directly on Twitter using the #MJFChat. In the meantime, if you know of anyone else or even yourself who might make a good guest for one of these chats, please do not hesitate to drop me a note. Thank you very much.
Businesses in Europe can reduce energy use by nearly 80 per cent when they run their applications on the AWS Cloud instead of operating their own datacentres, research commissioned by AWS has found.
The research, carried out by 451 Research, found that migrating compute workloads to AWS across Europe could decrease greenhouse gas emissions equal to the footprint of millions of households.
It also claims that a 1-megawatt corporate datacentre switching its applications to the cloud could reduce emissions by over a thousand metric tons of carbon dioxide per year – the equivalent to removing over 500 cars from the roads.
“We were struck by how much opportunity there is for European businesses to increase energy efficiency and reduce emissions by looking at their IT infrastructure,” said Kelly Morgan, research director, datacentre infrastructure & services at 451 Research, part of S&P Global Market Intelligence.
“If you think of the electricity consumed and emissions produced by tens of thousands of companies across Europe operating their own datacentres, this is an area that appears to be overlooked.
“According to our analysis, moving workloads to the AWS Cloud could dramatically reduce the carbon footprint of most organisations’ IT operations.”
The study surveyed senior stakeholders at over 300 companies using their own datacentres across a broad range of industries and states that companies could further reduce carbon emissions from an average workload by up to 96 per cent once AWS meets its goal to be powered by 100 per cent renewable energy by the year 2025.
Cloud servers are roughly three times more energy efficient, and AWS datacentres are up to five times more energy efficient, than the computing resources of the average European company, 451 Research also claims.
“This report shows the great potential that cloud offers businesses in Europe to improve energy efficiency while cutting costs and carbon emissions at the same time,” said Chris Wellise, director of sustainability at AWS.
“AWS is continuously working on ways to increase the energy efficiency of facilities and equipment, as well as innovating the design and manufacture of servers, storage, and networking equipment to reduce resource use and limit waste.”
After the basic authentication deprecation announcement, Microsoft introduced the EXO V2 module to connect to Exchange Online PowerShell with modern authentication. Even though the EXO V2 module uses modern auth, it still needs WinRM basic auth to transport modern auth tokens. If basic auth is disabled on the local machine, the admin will get the following error.
New-ExoPSSession : Connecting to remote server outlook.office365.com failed with the following error message : The WinRM client cannot process the request. Basic authentication is currently disabled in the client configuration.
Now You Can Use the EXO V2 Module More Securely:
Recently, Microsoft introduced the EXO V2 Module Preview, which allows admins to connect to Exchange Online without enabling WinRM basic authentication.
How it works: When you use the preview module, Connect-ExchangeOnline invokes the REST API in the background, which doesn’t require WinRM basic auth.
Let’s see how to install EXO V2 Preview Module and disable WinRM basic authentication.
Install EXO V2 Preview Module:
To install the EXO V2 Preview module, run the following cmdlet,
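A minimal sketch, assuming installation from the PowerShell Gallery (the -AllowPrerelease switch needs a recent version of PowerShellGet, and the exact preview version available changes over time):
# Install the Exchange Online Management module, allowing prerelease (preview) builds
Install-Module -Name ExchangeOnlineManagement -AllowPrerelease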
To check whether basic authentication is enabled, run the following command at a command prompt.
winrm get winrm/config/client/auth
If Basic = true is set, run the following command to disable WinRM basic auth.
winrm set winrm/config/client/auth @{Basic="false"}
After executing the above command, the output will confirm that Basic is now set to false.
Note: Only 229 EXO cmdlets have been converted to use REST API in this version. If you disable WinRM basic authentication, you can access only 229 EXO cmdlets; other RPS cmdlets will not work without WinRM basic authentication.
To use all the cmdlets via a Remote PowerShell connection, you need to pass the UseRPSSession parameter while running Connect-ExchangeOnline.
Connect-ExchangeOnline -UseRPSSession
Overall, this is a good start, but many admins will be disappointed that not all EXO cmdlets have been converted to use the REST API yet. How do you feel about this update? Share your thoughts in the comment section.
I joined Cloudflare a few weeks ago, and as someone new to the company, there’s a ton of information to absorb. I have always learned best by doing, so I decided to use Cloudflare like a brand-new user. Cloudflare customers range from individuals with a simple website to companies in the Fortune 100. I’m currently exploring Cloudflare from the perspective of the individual, so I signed up for a free account and logged into the dashboard. Just like getting into a new car, I want to turn all the dials and push all the buttons. I looked for things that would be fun and easy to do and would deliver some immediate value. Now I want to share the best ones with you.
Here are my five ways to get started with Cloudflare. These should be easy for anyone, and they’re free. You’ll likely even save some money and improve your privacy and security in the process. Let’s go!
1. Transfer or register a domain with Cloudflare Registrar
If you’re like me, you’ve acquired a few (dozen) Internet domains for things like personalizing your email address, a web page for your nature photography hobby, or maybe a side business. You probably registered them at one or more of the popular domain name registrars, and you pay around $15 per year for each domain. I did an audit and found I was spending a shocking amount each year to maintain my domains, and they were spread across three different registrars.
Cloudflare makes it easy to transfer domains from other registrars and doesn’t charge a markup for domain registrar services. Let me say that again; there is zero price markup for domain registration with Cloudflare Registrar. You’ll pay exactly what Cloudflare pays. For example, a .com domain registered with Cloudflare currently costs half of what I was paying at other registrars.
Not only will you save on the domain registration, but Cloudflare doesn’t nickel-and-dime you like registrars who charge extra for WHOIS privacy and transfer lock and then sneakily bundle their website hosting services. It all adds up.
To get started registering or transferring a domain, log into the Cloudflare Dashboard, click “Add a Site,” and bring your domains to Cloudflare.
2. Configure DNS on Cloudflare DNS
DNS servers do the work of translating hostnames into IP addresses. To put a domain name to use on the Internet, you can create DNS records to point to your website and email provider. Every time someone wants to put a website or Internet application online, this process must happen so the rest of us can find it. Cloudflare’s DNS dashboard makes it simple to configure DNS records. For transfers, Cloudflare will even copy records from your existing DNS service to prevent any disruption.
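For those who prefer scripting to the dashboard, records can also be managed through Cloudflare’s public v4 API. Here is a hedged PowerShell sketch; the API token, zone ID, hostname, and IP address below are all placeholders:
# Create an A record in a zone via the Cloudflare v4 API (placeholder values throughout)
$apiToken = "<your-api-token>"
$zoneId   = "<your-zone-id>"
$body = @{ type = "A"; name = "photos.example.com"; content = "203.0.113.10"; proxied = $true } | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri "https://api.cloudflare.com/client/v4/zones/$zoneId/dns_records" -Headers @{ Authorization = "Bearer $apiToken" } -ContentType "application/json" -Body $body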
The Cloudflare DNS dashboard will also improve security on your domains with DNSSEC, protect your domains from email spoofing with DMARC, and enforce other DNS best practices.
I’ve now moved all my domains to Cloudflare DNS, which is a big win for me for security and simplicity. I can see them all in one place, and I’m more confident with the increased level of control and protection I have for my domains.
3. Set up a blog with Cloudflare Pages
Once I moved my domains, I was eager to set up a new website. I have been thinking lately it would be fun to have a place to post my photos where they can stand out and won’t get lost in the stream of social media. It’s been a while since I’ve built a website from scratch, but it’s fun getting back to basics. In the old days, to host a website you’d set up a dedicated web server or use a shared web host to serve your site. Today, many web hosts provide ready-to-go templates for websites and make hosting as easy as one click to set up a new site.
I wanted to learn by doing, so I took the do-it-yourself route. What I discovered in the process is an architecture called Jamstack. It’s a bit different from the traditional way of building and hosting websites. With Jamstack, your site doesn’t live at a traditional hosting provider, nor is it dynamically generated from CGI scripts and a database. Your content is now stored on a code repository like GitHub. The site is pre-generated as a static site and then deployed and delivered directly from Cloudflare’s network.
I used a Jamstack static site generator called Hugo to build my photo blog, pushed it to GitHub, and used Cloudflare Pages to generate the content and host my site. Now that it’s configured, there’s zero work necessary to maintain it. Jamstack, combined with Pages, alleviates the regular updates required to keep up with security patches, and there are no web servers or database services to break. Delivered from Cloudflare’s edge network, the site scales effortlessly, and it’s blazingly fast from a user perspective.
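If you’re curious what that workflow looks like in practice, here is a rough sketch of the commands involved; the site name and repository URL are placeholders, and the final hookup happens in the Cloudflare dashboard by pointing a new Pages project at the repository:
# Scaffold a Hugo site and put it under version control
hugo new site photo-blog
cd photo-blog
git init
git add .
git commit -m "Initial Hugo site"
# Push to a GitHub repository (placeholder URL); Cloudflare Pages then builds and deploys from it
git remote add origin https://github.com/<your-user>/photo-blog.git
git push -u origin main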
By the way, you don’t need to register a domain to deploy to Pages. Cloudflare will generate a pages.dev site that you can use.
For extra credit, have a look at the Cloudflare Workers serverless platform. Workers will allow you to write and deploy even more advanced custom code and run it across Cloudflare’s globally distributed network.
4. Protect your network with Cloudflare for Teams
At first, it wasn’t evident to me how I was going to use Cloudflare for Teams. I initially thought it was only for larger organizations. After all, I’m sitting here in my home office, and I’m just a team of one. Digging into the product more, it became clear that Teams is about privacy and security for groups of any size.
We’ve discussed the impressive Cloudflare DNS infrastructure, and you can take advantage of the Cloudflare DNS resolver for your devices at home by simply configuring them to point to Cloudflare 1.1.1.1 DNS servers. But for more granular control and detailed logging, you should try the DNS infrastructure built into the Cloudflare for Teams Gateway feature.
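On a Windows machine, for example, pointing an adapter at the 1.1.1.1 resolvers is a one-liner; the interface alias "Ethernet" is an assumption, so substitute your adapter’s name:
# Point the chosen network adapter at Cloudflare's public DNS resolvers
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses ("1.1.1.1", "1.0.0.1")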
When you point your home network to Cloudflare for Teams DNS servers, your dashboard will populate with logs of all DNS requests coming from your network. You can set up rules to block DNS requests for various categories, including known malware, phishing, adult sites, and other questionable content. You’ll see the logs instantly and can add or remove categories as needed. If you trigger one of the rules, Cloudflare will display a page that shows you’ve hit one of these blocked sites.
Malware can bypass DNS, so filtering DNS is no silver bullet. Think of DNS filtering as another layer of defense that may help you avoid nefarious sites in the first place. For example, known phishing sites sent as URLs via email won’t resolve and will be blocked before they affect you. Additionally, DNS logs should give you visibility into what’s happening on the network and that may lead you to implement even better security in other areas.
There’s so much more to Cloudflare for Teams than DNS filtering, but I wanted to give you just a little taste of what you can do with it quickly and for free.
5. Secure your traffic with the Cloudflare 1.1.1.1 app and WARP
Finally, let’s discuss the challenge of securing Internet communications on your mobile phones, tablets, and devices at home and while traveling. We know that the SSL/TLS encryption on secure websites provides a degree of protection, but the apps you use and sites you visit are still visible to your ISP and upstream network operators. Some providers sell this data or use it to target you with ads.
If you install the 1.1.1.1 app, Cloudflare will create an always-on, encrypted tunnel from your device to the nearest Cloudflare data center and secure your Internet traffic. We call this Cloudflare WARP. WARP not only encrypts your traffic but can even help accelerate it by routing intelligently across the Cloudflare network.
WARP is a compelling VPN replacement without the risks associated with some shady VPN providers who may also want to sell your data. Remember, Cloudflare will never sell your data!
The Cloudflare WARP client combined with Cloudflare for Teams gives you enhanced visibility into DNS queries and unlocks some advanced traffic management and filtering capabilities. And it’s all free for small teams.
Hopefully, my exploration of the Cloudflare product portfolio gives you some ideas of what you can do to make your life a little easier or your team more secure. I’m just scratching the surface, and I’m excited to keep learning what’s possible with Cloudflare. I’ll continue to share what I learn, and I encourage you to experiment with some of these capabilities yourself and let me know how it goes.
Thought I’d share this tip for those that aren’t aware. Found this feature in Windows 10 about a year ago and it’s been a true game changer – use it all day, every day. Enjoy!
Edit: Yes, as multiple people replied, this can be a security vulnerability depending on what you’re copying and pasting. Like everything in life, gauge the risk in your scenario and use or don’t use it accordingly.
Use Windows+V instead of CTRL+V to paste in Windows 10/11, it allows you to select from items you’ve recently copied instead of only the last one. Game changer!
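If you’d rather turn the feature on by script than through Settings, here’s a hedged sketch (the registry value name reflects my understanding of where Windows stores this toggle; you may need to sign out and back in for it to take effect):
# Enable Windows clipboard history for the current user
New-ItemProperty -Path "HKCU:\Software\Microsoft\Clipboard" -Name "EnableClipboardHistory" -Value 1 -PropertyType DWord -Force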
Microsoft’s famous developer video platform Channel 9 will be ending soon, and its spiritual successor will join the Microsoft Learn brand. https://ift.tt/3wqe0Ij According to a blog post on the Docs Team blog from corporate vp of developer relations Jeff Sandquist … Read more
The content below is taken from the original ( Service Directory cheat sheet), to continue reading please visit the site. Remember to respect the Author & Copyright.
Most enterprises have a large number of heterogeneous services deployed across different clouds and on-premises environments. It is complex to look up, publish, and connect these services, but it is necessary to do so for deployment velocity, security, and scalability. That’s where Service Directory comes in!
Service Directory is a fully managed platform for discovering, publishing, and connecting services, regardless of the environment. It provides real-time information about all your services in a single place, enabling you to perform service inventory management at scale, whether you have a few service endpoints or thousands.
Why Service Directory?
Imagine that you are building a simple API and that your code needs to call some other application. When endpoint information remains static, you can hard-code these locations into your code or store them in a small configuration file. However, with microservices and multi-cloud, this problem becomes much harder to handle as instances, services, and environments can all change.
Service Directory solves this! Each service instance is registered with Service Directory, where it is immediately reflected in Domain Name System (DNS) and can be queried by using HTTP/gRPC regardless of its implementation and environment. You can create a universal service name that works across environments, make services available over DNS, and apply access controls to services based on network, project, and IAM roles of service accounts.
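As a rough illustration, registering a service with the gcloud CLI looks something like the sketch below; the namespace, service, endpoint, region, address, and port are all placeholder values, and flag names may vary between gcloud releases:
# Create a namespace, a service, and an endpoint in Service Directory (placeholder values)
gcloud service-directory namespaces create demo-namespace --location=us-east1
gcloud service-directory services create demo-service --namespace=demo-namespace --location=us-east1
gcloud service-directory endpoints create demo-endpoint --service=demo-service --namespace=demo-namespace --location=us-east1 --address=10.0.0.1 --port=8080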
Service Directory solves the following problems:
Interoperability: Service Directory is a universal naming service that works across Google Cloud, multi-cloud, and on-premises. You can migrate services between these environments and still use the same service name to register and resolve endpoints.
Service management: Service Directory is a managed service. Your organization does not have to worry about the high availability, redundancy, scaling, or maintenance concerns of maintaining your own service registry.
Access control: With Service Directory, you can control who can register and resolve your services using IAM. Assign Service Directory roles to teams, service accounts, and organizations.
Limitations of pure DNS: DNS resolvers can be unreliable in terms of respecting TTLs and caching, cannot handle larger record sizes, and do not offer an easy way to serve metadata to users. In addition to DNS support, Service Directory offers HTTP and gRPC APIs to query and resolve services.
How Service Directory works with Load Balancer
Here’s how Service Directory works with Load Balancer:
1. In Service Directory, Load Balancer is registered as a provider of each service.
2. The client performs a service lookup via Service Directory.
3. Service Directory returns the Load Balancer address.
4. The client makes a call to the service via Load Balancer.
Using Cloud DNS with Service Directory
Cloud DNS is a fast, scalable, and reliable DNS service running on Google’s infrastructure. In addition to public DNS zones, Cloud DNS also provides a managed internal DNS solution for private networks on Google Cloud. Private DNS zones enable you to internally name your virtual machine (VM) instances, load balancers, or other resources. DNS queries for those private DNS zones are restricted to your private networks. Here is how you can use Service Directory zones to make service names available using DNS lookups.
1. The endpoints are registered directly with Service Directory using the Service Directory API. This can be done for both Google Cloud and non-Google Cloud services.
2. To enable DNS requests, create a Service Directory zone in Cloud DNS that is associated with a Service Directory namespace (see the sketch after these steps).
3. Internal clients can resolve the service via DNS, HTTP, or gRPC. External clients (clients not on the private network) must use HTTP or gRPC to resolve service names.
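As a hedged sketch of step 2 with the gcloud CLI (the zone name, DNS name, network, project, and namespace path are placeholders; check the current Cloud DNS documentation for the exact flag names):
# Create a private Cloud DNS zone backed by a Service Directory namespace (placeholder values)
gcloud dns managed-zones create sd-zone --dns-name="internal.example.com." --description="Service Directory zone" --visibility=private --networks=default --service-directory-namespace=https://servicedirectory.googleapis.com/v1/projects/my-project/locations/us-east1/namespaces/demo-namespace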
For a more in-depth look at Service Directory, check out the documentation.
The content below is taken from the original ( User guide becomes a free download), to continue reading please visit the site. Remember to respect the Author & Copyright.
As well as its availability as a printed tome, the user guide for the latest stable release of RISC OS 5 has now been released as a PDF that can… Read more »
The content below is taken from the original ( New: SoundVolumeCommandLine v1.00), to continue reading please visit the site. Remember to respect the Author & Copyright.
SoundVolumeCommandLine (svcl.exe) is a console application that allows you to do many actions related to sound volume from command-line, including – set sound volume of devices and applications, mute/unmute devices and applications, increase/decrease volume of devices and applications, set the volume level of specific channel, set the default render/capture device, get the current sound volume level of specific device, and more…
svcl.exe is the console version of the SoundVolumeView tool, and you can use all commands of SoundVolumeView in svcl.exe, with exactly the same syntax.
The content below is taken from the original ( RISC OS London Show Report 2021), to continue reading please visit the site. Remember to respect the Author & Copyright.
After an online version last year, the London Show was back in person (with masks) at the Feltham Hotel. Doors opened at 11am. It was great to be back in the usual venue after a 2 year break. You can see all the stands in our pictures and notes on the talks.
Here is what I saw on the floor.
Rougol, as well as running the show, also had a stand with some software running and a demo of the new Pi Zero 2, running RISC OS out of the box. They also had a big display showing their previous meetings. The November meeting is the one where they will rebook the room at the pub, so they’re hoping people will turn up in person.
RISC OS bits had their selection of cases, new EDOS releases, a lap dock running the Pi (providing a very nice portable system), and details on their new ITX cases. The FUtilities is not quite ready to ship.
Archive magazine had the latest edition available to collect for subscribers, and the editor was also discussing his future plans for the magazine. We were also treated to an impromptu demo of DARIC.
Soft Rock Software had their selection of games for RISC OS and were also showing off the new game from Rick Murray. They also had some cute mini RISC PC cases for the Pi.
Cameron Cawley was demonstrating lots of games on his Pinebook. He is working on another port for ScummVM and some other games.
Amcog Games had a very Halloween-themed stand, with both Andy and Sophia dressed up for the occasion. They had their new haunted house game available along with the Amcog Games back catalogue. Andy was also selling some charity classical music CDs to raise money for Asthma UK.
The Charity stand had a large selection of items to rifle through in search of a hidden gem.
The BBC preservation project were all set up to try and copy any software from your old BBC or RISC OS disks. They also had some nice RISC OS equipment which had been used by the BBC to produce television shows.
RPCEmu had a new release out (0.94) and were selling their charity CDs. They were also demoing some possible ideas for future releases, including being able to have multiple versions running and a dynamic recompiler for better performance.
Orpheus were there to discuss their services (as a customer I can say having fast, reliable Internet has been a life-saver in the lockdowns). Richard was also showing off the latest version of Iris and a new application called Watermark, which has every option you could ever want for adding watermarks to an image.
ROOL had their full range of hardware, software, books, t-shirts and even some free RISC OS Windows (not that Windows) stickers on their stand. They were chatting about their ideas for the next 15 years of RISC OS.
The MUG stand was canvassing ideas and support for its planned virtual Midsummer show on Zoom.
Chris Hall had a large range of his hardware and software projects on show. His 4te was showing how to easily switch between RISC OS and Linux. One of his RISC OS boxes had tracked his journey up from Bristol, which he was able to display in OSM.
Drag’n’Drop had the latest edition (Volume 11 now) of their magazine available. There was also a USB stick with all the previous editions for sale (a large repository of interesting material). The new application programming book was available to buy.
Organizer had version 2.29b available and was also canvassing ideas for new releases.
In July, we announced Inside Azure for IT, an online technical skilling resource designed for cloud professionals to transform their IT operations with Azure best practices and insights. The team here has been so inspired by how many of you have used Inside Azure for IT to connect and collaborate from virtually everywhere. Whether you engaged in the monthly, live ask-the-product-experts sessions or shared learnings from the new video series I host, the Azure team and I are thankful for your participation, feedback, and partnership.
We all know IT jobs are getting harder. You’ve had to make hybrid work possible—whether employees are local or spread across the world. This has meant embracing the cloud at an accelerated rate, and while doing this, you’ve had to work better with internal teams and developers to ensure security, availability, and consistent management.
I was recently talking with an IT director who felt, even though they’d done a great job of making it possible for employees to work from home or the office, their rush to do this left them with some outdated IT processes. We talked about some simple yet impactful steps they could take to improve their IT management and governance to make things more adaptive and flexible. Today, we’re rolling out our second episode of Inside Azure for IT where we’ll explore some of that advice to give you tips you can use today, regardless of industry or size of business.
Take advantage of technical and cloud-skilling resources
We’ve broken the episode into three “snackable” segments, so you can jump between topics based on what advice you need at the moment.
Segment one: Connect your workforce using Microsoft 365 and Azure
Tara Roth, Corporate Vice President of Customer Success Engineering, joins me to talk about securely enabling users to work virtually from anywhere and on any device. More than just ensuring new employees are set up for the first time, we discuss building resilience and agility with tools needed to help people stay running.
Segment two: Manage hybrid and distributed IT environments at scale
As you’ll observe throughout these conversations, small steps can lead to big results when it comes to empowering people, infrastructure processes, and investments. Applying these learnings can help you overcome challenges at every step of remote software development and IT management.
Stay current with Inside Azure for IT
There are many more technical and cloud-skilling resources available through Inside Azure for IT. Learn more about empowering an adaptive IT environment with best practices and resources designed to enable productivity, digital transformation, and innovation. Take advantage of technical training videos and learn about implementing these scenarios with Inside Azure for IT.