Filtering with PowerShell Where-Object: Easy Examples

The content below is taken from the original ( Filtering with PowerShell Where-Object: Easy Examples), to continue reading please visit the site. Remember to respect the Author & Copyright.

In this article, I’ll explain how to use the PowerShell Where-Object cmdlet to filter objects and data. I’ll provide a series of easy examples showing you how to filter files by name or date, how to filter processes by status or CPU usage, and more.

When using PowerShell, you will often receive an extremely large amount of data when querying your environment. For example, if you run the Get-AzureADUser cmdlet against an Azure Active Directory tenant with 100,000 users, you will get…well, 100,000 results. That may take some time to output to your console!

Normally you won’t need to get all that information. The Where-Object cmdlet is an extremely helpful tool that will allow you to filter your results to pinpoint exactly the information you’re looking for.

What is the PowerShell Where-Object command?

PowerShell Where-Object is by far the most often-used tool for filtering data, mostly due to its combination of power and simplicity. It selects objects from a collection based on their property values.

There are other cmdlets that allow you to filter data. The Select-Object cmdlet selects objects (!) or object properties. Select-String finds text in strings and files. They both are valuable and have their niche in your tool belt.

Here are some brief examples for you. Select-Object commands help in pinpointing specific pieces of information. This example returns objects that have the Name, ID, and working set (WS) properties of process objects.

Get-Process | Select-Object -Property ProcessName, Id, WS

This next example does a case-sensitive match (not the default) of the text sent down the pipeline to the Select-String cmdlet.

'Hello', 'HELLO' | Select-String -Pattern 'HELLO' -CaseSensitive -SimpleMatch

How to filter an array of objects with PowerShell Where-Object

The task at hand is filtering a large pool of data into something more manageable. Thankfully, there are two methods we can use to filter said data: script blocks, which have been around since the beginning, and comparison statements, which arrived in PowerShell 3.0 and are the more recent, ‘preferred’ method.

With the Where-Object cmdlet, you’re constructing a condition that returns True or False. Depending on the result, the object is either passed down the pipeline or filtered out.

Building filters with script blocks

Using script blocks in PowerShell goes back to the beginning. These components are used in countless places. A script block is simply a chunk of code wrapped in braces that you can pass around and execute in various places, including as a filter.

To use a script block as a filter, you use the FilterScript parameter. I’ll show you an example shortly. If the script block returns anything other than False (or $null), it is treated as True; otherwise, it is treated as False.
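
To see that truthiness rule in isolation, here is a minimal sketch that has nothing to do with services: the expression $_ % 2 evaluates to 1 (truthy) for odd numbers and 0 (falsy) for even ones, so only the odd numbers make it through.

# Anything other than False, 0, or $null counts as True for the filter
1..10 | Where-Object -FilterScript { $_ % 2 }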

Let’s show this via an example: You have been assigned a task from your manager to determine all the services on a computer that are set to Disabled.

We will first gather all the services with the Get-Service cmdlet. This pulls all the attributes of all the services on the computer in question. Using the PowerShell pipeline, we can then ‘pipe’ the gathered results to Where-Object and filter them with the FilterScript parameter. We can use the script block below to find all the services set to Disabled.

{$_.StartType -eq 'Disabled'}

First off, if we just use the Get-Service cmdlet, we get the full list of services. And there were quite a few more screens of services beyond the image below.

We used the Get-Service cmdlet to show all services running on Windows 11
There are a LOT of services on Windows 11…

Not exactly what we’re looking for. Once we have the script block, we pass it right on to the FilterScript parameter.

We can see this all come to fruition with this example. We are using the Get-Service cmdlet to gather all the disabled services on our computer.

Get-Service | Where-Object -FilterScript {$_.StartType -eq 'Disabled'}
We used the Get-Service cmdlet to gather all the disabled services on our computer.
That’s better. Our list of disabled services.

There we go. Now, we have the 15 services that are set to Disabled, satisfying our request.

Filtering objects with comparison operators

The issue with the prior method is that it makes the code more difficult to understand. It’s not the easiest syntax for beginners ramping up on PowerShell. Because of this ‘learning curve’ issue, the engineers behind PowerShell produced comparison statements.

These have more of a natural flow to them. We can produce more elegant, efficient, and easier-to-read code using our prior example.

Get-Service | Where-Object -Property StartType -EQ 'Disabled'

See? A little more elegant and easier to read. Using the Property parameter together with the -EQ operator parameter allows us to pass the value of Disabled directly. This eliminates the need for a script block completely!
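
The same comparison-statement pattern works with any property and operator. As a rough sketch, this lists processes using more than 100 MB of memory (the 100MB threshold is arbitrary, and the results depend on whatever happens to be running on your machine):

Get-Process | Where-Object -Property WorkingSet -GT 100MB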

Containment operators

Containment operators are useful when working with collections. These allow you to define a condition. There are several examples of containment operators we can use. Here are a few:

  • -contains – Filter a collection containing a property value.
  • -notcontains – Filter a collection that does not contain a property value.
  • -in – Value is in a collection, returns property value if a match is found.
  • -notin – Value is not in a collection.

For case sensitivity, you can add a ‘c’ at the beginning of the operator. For example, ‘-ccontains’ is the case-sensitive operator for filtering a collection containing a property value.
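
Here is a small sketch of a containment operator in action. The service names are just common Windows services chosen for illustration; swap in whatever exists in your environment:

# -in checks whether the value on the left appears in the collection on the right
Get-Service | Where-Object { $_.Name -in 'Spooler', 'W32Time', 'BITS' }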

Equality operators

There are a good number of equality operators. Here are a few:

  • -eq / -ceq – Value equal to specified value / case-sensitive option (see the example after this list).
  • -ne – Value not equal to specified value.
  • -gt – Value greater than specified value.
  • -ge – Value greater than or equal to specified value.
  • -lt – Value less than specified value.
  • -le – Value less than or equal to specified value.
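
To see the case-sensitive variant in action, here is a quick sketch using plain strings rather than services; only the exact-case match makes it through -ceq:

# -eq would return both strings; -ceq returns only the exact-case match
'Apple', 'apple' | Where-Object { $_ -ceq 'apple' }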

Matching operators

We also have matching operators to use. These allow you to match strings against wildcard or regex patterns, so that ‘Windows World Wide’ -like ‘*World*’ returns True.

Here are some examples of matching operators:

  • -like – String matches a wildcard-type pattern
  • -notlike – String does NOT match a wildcard pattern
  • -match – String matches regex pattern
  • -notmatch – String does NOT match regex pattern

You use these just like when using containment operators.
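
For example, here is a quick sketch using -like against process names; the ‘power*’ pattern is just an illustration and will typically match powershell or powershell_ise if they are running:

# -like uses wildcard patterns; -match would take a regular expression instead
Get-Process | Where-Object { $_.ProcessName -like 'power*' }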

Can you use multiple filter conditions with both methods?

Come to think of it, yes, you certainly can use both methods in your scripts. Even though comparison statements are more modern, there are times when more complex filtering requirements will dictate that you use script blocks. You’ll find the balance yourself as you learn and become more proficient with your scripts.
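
Keep in mind that a comparison statement can only test a single property, so as soon as you need two conditions you are back to a script block. A minimal sketch of that situation:

# Two conditions joined with -and require the script-block form
Get-Service | Where-Object { ($_.Status -eq 'Running') -and ($_.StartType -eq 'Automatic') }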

Filtering with PowerShell Where-Object: Easy Examples

Let’s go through some simple examples of using the Where-Object cmdlet to determine pieces of information. Eventually, we’ll be able to accomplish tasks with ease.

Filtering files by name

We can certainly filter a directory for files that match specific criteria. We can use the Get-ChildItem cmdlet to first gather the list of files in my Downloads folder. Then, we use the Where-Object cmdlet with the ‘BaseName’ property to find all files that have ‘Mail’ in their filenames.

We can also use wildcard-style patterns here (keep in mind that -match takes a regular expression, while -like is the operator for true wildcard matching). Let’s give it a whirl:

Get-ChildItem -path 'c:\users\sovit\downloads' | Where-Object {$_.BaseName -match 'Mail*'}
Using the PowerShell Where-Object cmdlet to filter files in a folder matching a specific filename wildcard.
Filtering files in a folder matching a specific filename wildcard.

Piece of cake. So, imagine a scenario where you have a folder with 25,000 files in it, and all the filenames are just strings of alphanumeric characters. Being able to quickly find the file(s) with an exact character match is ideal and a HUGE timesaver!

Filtering files by date

We can use the same commands, Get-ChildItem and Where-Object, to find files based on dates, too. Let’s say we want to find all files that were created or updated in the last week. Let’s do this!

Get-ChildItem | Where-Object {$_.LastWriteTime -gt (Get-Date).AddDays(-7)}
Using the PowerShell Where-Object cmdlet to filter files in a directory by last saved time - wonderful tool!
Filtering files in a directory by last saved time – wonderful tool!

We are using the LastWriteTime property, the Get-Date cmdlet, and its AddDays method to make this work. It works wonderfully.
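
If you want to honor the ‘created or updated’ wording literally, a slightly extended sketch checks both CreationTime and LastWriteTime (both are standard file properties):

# Keep files that were either created or modified within the last 7 days
Get-ChildItem | Where-Object {($_.CreationTime -gt (Get-Date).AddDays(-7)) -or ($_.LastWriteTime -gt (Get-Date).AddDays(-7))}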

Filtering processes by name

Because it is SO much fun working with Windows Services, let’s continue in this lovely realm. We are trying to determine the name of the ‘WWW’ service. We can use the ‘Property‘ parameter again.

Get-Service | Where-Object -Property Name -Contains 'W3SVC'
Using the PowerShell Where-Object cmdlet to locate the www publishing service
Locating the WWW Publishing Service

Filtering processes by status

There are several properties with each service, so we can also use a containment operator to gather a list of all services that are in a Running state.

Get-Service | Where-Object -Property Status -Contains 'Running'
Using the PowerShell Where-Object cmdlet to list all running services
Listing all Running Services

Filtering processes by name and status

Remember what I said about script blocks? Let’s use one here to filter services by status and StartType. We will get all the services that are running but also have a StartType property set to Manual. Here we go!

Get-Service | Where-Object {($_.Status -contains 'Running') -and ($_.StartType -in 'Manual')}
We used the Where-Object cmdlet with a script block to filter services by status and StartType
Filtering Services by Status and StartType

Pretty slick. And we’re just starting here…

Filtering processes by CPU usage

You can also use equality operators with Where-Object to compare values. Here, we’ll use an operator and the Get-Process command to filter all running processes on our computer based on CPU usage.

Let’s use a script block to find all processes that are using between 4 and 8 percent of the CPU.

get-process | Where-Object {($_.CPU -gt 4.0) -and ($_.CPU -lt 8)}
Displaying all processes using between 4 and 8% of CPU time

Here is an example that also helps us find all the local services that have ‘Windows’ in their DisplayName.

Get-Service | Where-Object {$_.DisplayName -match 'Windows'}
Showing all Services that have 'Windows' in their DisplayName parameter
Showing all Services that have ‘Windows’ in their DisplayName parameter

In addition, we can also use a wildcard character to find the Name of all services that start with ‘Win’.

Get-Service | Where-Object {($_.Name -like 'Win*')}
All the Services that have 'Win' starting their Name parameter
All the Services that have ‘Win’ starting their Name parameter

Finding PowerShell commands with a specific name

The Where-Object cmdlet also lets you use logical operators to link together multiple expressions. You can evaluate multiple conditions in one script block. Here are some examples.

  • -and – The script block evaluates True if the expressions are both logically evaluated as True
  • -or – The block evaluates to True when one of the expressions on either side is True
  • -xor – The script block evaluates to True when one of the expressions is True and the other is False.
  • -not or ‘!’ – Negates the script element following it.

Let me show you an example that illustrates this concept.

get-command | Where-Object {($_.Name -like '*import*') -and ($_.CommandType -eq 'cmdlet')}
Showing all the commands locally that have 'Import' in their name and are PowerShell cmdlets
Showing all the commands locally that have ‘Import’ in their name and are PowerShell cmdlets

Very useful!

Finding files of a specific type with a specific size

You’ve already seen a few examples of filtering parameters above. Get-ChildItem’s ‘-Filter’ parameter is another example of using filter parameters in your commands and scripts. It lets you home in on the precise data you’re looking for. Let me show you an example.

Get-ChildItem -Path c:\users\sovit\Downloads -Filter *.pdf | Where-Object {$_.Length -ge 150000}
Viewing all PDF files 150K or larger!

This command filters the files in the folder down to PDF files. It then pipes that to the Where-Object cmdlet, which further narrows the list down to PDF files that are 150K or larger. Very useful!
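
As a side note, PowerShell understands KB, MB, and GB suffixes, so the same filter can be written a little more readably; this is just a sketch with a relative path, so adjust the path and threshold to taste (150KB is 153,600 bytes rather than a flat 150,000):

Get-ChildItem -Path . -Filter *.pdf | Where-Object {$_.Length -ge 150KB}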

Conclusion

The ‘Where-Object’ cmdlet is very powerful in helping you quickly pinpoint exactly the data points you are looking for. Being able to check all Services that are set to Automatic yet are not Running can be extremely helpful during troubleshooting episodes. And using it to find errant, high-CPU processes in a programmatic way can also help you with scripting these types of needs.
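
That Automatic-but-not-Running check from the conclusion translates almost word for word into a one-liner; here is a minimal sketch:

# Services configured to start automatically that are not currently running
Get-Service | Where-Object {($_.StartType -eq 'Automatic') -and ($_.Status -ne 'Running')}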

If you have any comments or questions, please let me know down below. Thank you for reading!

Announcing Chocolatey Central Management 0.10.0

The content below is taken from the original ( Announcing Chocolatey Central Management 0.10.0), to continue reading please visit the site. Remember to respect the Author & Copyright.

We recently released our largest update to Chocolatey Central Management so far. Join Gary to find out more about Chocolatey Central Management and the new features and fixes we’ve added to this release.

Chocolatey Central Management provides real-time insights, and deployments of both Chocolatey packages and PowerShell code, to your client and server devices.

Find out more about Chocolatey for Business and Chocolatey Central Management at https://chocolatey.org/products/chocolatey-for-business

submitted by /u/pauby to r/chocolatey

How to map useful commands to your keyboard on Windows 11 or Windows 10

The content below is taken from the original ( How to map useful commands to your keyboard on Windows 11 or Windows 10), to continue reading please visit the site. Remember to respect the Author & Copyright.

Did you know you can map useful commands to your keyboard? Follow this guide to learn how.

How Einride scaled with serverless and re-architected the freight industry

The content below is taken from the original ( How Einride scaled with serverless and re-architected the freight industry), to continue reading please visit the site. Remember to respect the Author & Copyright.

Industry after industry is being transformed by software. It started with industries such as music, film and finance, whose assets lent themselves to being easily digitized. Fast forward to today, and we see a push to transform industries that have more physical hardware and require more human interaction, for example healthcare, agriculture and freight. It’s harder to digitize these industries – but it’s arguably more important. At Einride, we’re doing just that.

Our mission is to make Earth a better place through intelligent movement, building a global autonomous and electric freight network that has zero dependence on fossil fuel. A big part of this is Einride Saga, the software platform that we’ve built on Google Cloud. But transforming the freight industry is a formidable technical task that goes far beyond software. Still, observing the software transformations of other industries has shown us a powerful way forward.

So, what lessons have we learned from observing the industries that led the charge?

The Einride Pod, an autonomous, all-electric freight vehicle designed and developed by Einride – here shown in pilot operations at GEA Appliance Park in Louisville, KY.

Lessons from re-architecting software systems

Most of today’s successful software platforms started in co-located data centers, eventually moving into the public cloud, where engineers could focus more on product and less on compute infrastructure. Shifting to the cloud was done using a lift-and-shift approach: one-to-one replacements of machines in datacenters with VMs in the cloud. This way, the systems didn’t require re-architecting, but it was also incredibly inefficient and wasteful. Applications running on dedicated VMs often had, at best, 20% utilization. The other 80% was wasted energy and resources. Since then, we’ve learned that there are better ways to do it.

Just as the advent of shipping containers opened up the entire planet for trade by simplifying and standardizing shipping cargo, containers have simplified and standardized shipping software. With containers, we can leave management of VMs to container orchestration systems like Kubernetes, an incredibly powerful tool that can manage any containerized application. But that power comes at the cost of complexity, often requiring dedicated infrastructure teams to manage clusters and reduce cognitive load for developers. That is a barrier of entry to new tech companies starting up in new industries — and that is where serverless comes in. Serverless offerings like Cloud Run abstract away cluster management and make building scalable systems simple for startups and established tech companies alike.

Serverless isn’t a fit for all applications, of course. While almost any application can be containerized, not all applications can make use of serverless. It’s an architecture paradigm that must be considered from the start. Chances are, an application designed with a VM-focused mindset won’t be fully stateless, and this prevents it from successfully running on a serverless platform. Adopting a serverless paradigm for an existing system can be challenging and will often require redesign.

Even so, the lessons from industries that digitized early are many: by abstracting away resource management, we can achieve higher utilization and more efficient systems. When resource management is centralized, we can apply algorithms like bin packing, and we can ensure that our workloads are efficiently allocated and dynamically re-allocated to keep our systems running optimally. With centralization comes added complexity, and the serverless paradigm enables us to shift complexity away from developers, as well as from entire companies.

Opportunities in re-architecting freight systems

At Einride, we have taken the lessons from software architecture and applied them to how we architect our freight systems. For example, the now familiar “lift-and-shift” approach is frequently applied in the industry for the deployment of electric trucks – but attempts at one-to-one replacements of diesel trucks lead to massive underutilization.

With our software platform, Einride Saga, we address underutilization by applying serverless patterns to freight, abstracting away complexity from end-customers and centralizing management of resources using algorithms. With this approach, we have been able to achieve near-optimal utilization of the electric trucks, chargers and trailers that we manage. 

But to get these benefits, transport networks need to be re-architected. Flows in the network need to be reworked to support electric hardware and more dynamic planning, meaning that shippers will need to focus more on specifying demand and constraints, and less on planning out each shipment by themselves.

We have also found patterns in the freight industry that influence how we build our software. Managing electric trucks has made us aware of the differences in availability of clean energy across the globe, because – much like electric trucks – Einride Saga relies on clean energy to operate in a sustainable way. With Google Cloud, we can run the platform on renewable energy, worldwide.

The core concepts of serverless architecture — raising the abstraction level, and centralizing resource management — have the potential to revolutionize the freight industry. Einride’s success has sprung from an ability to realize ideas and then quickly bring them to market. Speed is everything, and the Saga platform – created without legacy in Google Cloud – has enabled us to design from the ground up and leverage the benefits of serverless.

Advantages of a serverless architecture

Einride’s architecture supports a company that combines multiple groundbreaking technologies — digital, electric and autonomous — into a transformational end-to-end freight service. The company culture is built on transparency and inclusivity, with digital communication and collaboration enabled by the Google Workspace suite. The technology culture promotes shared mastery of a few strategically selected technologies, enabling developers to move seamlessly up and down the tech stack — from autonomous vehicle to cloud platform.

If a modern autonomous vehicle is a data center on wheels, then Go and gRPC are fuels that make our vehicle services and cloud services run. We initially started building our cloud services in GKE, but when Google Cloud announced gRPC support for Cloud Run (in September 2019), we immediately saw the potential to simplify our deployment setup, spend less time on cluster management, and increase the scalability of our services. At the time, we were still very much in startup mode, making Cloud Run’s lower operating costs a welcome bonus. When we migrated from GKE to Cloud Run and shut down our Kubernetes clusters, we even got a phone call from our reseller who noticed that our total spend had dropped dramatically. That’s when we knew we had stumbled on game-changing technology!

Einride serverless architecture showing a gRPC-based microservice platform, built on Cloud Run and the full suite of Google Cloud serverless products

In Identity Platform, we found the building blocks we needed for our Customer Identity and Access Management system. The seamless integration with Cloud Endpoints and ESPv2 enabled us to deploy serverless API gateways that took care of end-user authentication and provided transcoding from HTTP to gRPC. This enabled us to get the performance and security benefits of using gRPC in our backends, while keeping things simple with a standard HTTP stack in our frontends.

For CI/CD, we adopted Cloud Build, which gave all our developers access to powerful build infrastructure without having to maintain our own build servers. With Go as our language for backend services, ko was an obvious choice for packaging our services into containers. We have found this to be an excellent tool for achieving both high security and performance, providing fast builds of distro-less containers with an SBOM generated by default.

One of our challenges to date has been to provide seamless and fully integrated operations tooling for our SREs. At Einride, we apply the SRE-without-SRE approach: engineers who develop a service also operate it. When you wake up in the middle of the night to handle an alert, you need the best possible tooling available to diagnose the problem. That’s why we decided to leverage the full Cloud Operations suite, giving our SREs access to logging, monitoring, tracing, and even application profiling. The challenge has been to build this into each and every backend service in a consistent way. For that, we developed the Cloud Runner SDK for Go – a library that automatically configures the integrations and even fills in some of the gaps in the default Cloud Run monitoring, ensuring we have all four golden signals available for gRPC services.

For storage, we found that the Go library ecosystem around Cloud Spanner provided us with the best end-to-end development experience. We chose Spanner for its ease of use and low management overhead – including managed backups, which we were able to automate with relative ease using Cloud Scheduler. Building our applications on top of Spanner has provided high availability for our applications, as well as high trust for our customers and investors.

Using protocol buffers to create schemas for our data has allowed us to build a data lake on top of BigQuery, since our raw data is strongly typed. We even developed an open-source library to simplify storing and loading protocol buffers in BigQuery. To populate our data lake, we stream data from our applications and trucks via Pub/Sub. In most cases, we have been able to keep our ELT pipelines simple by loading data through stateless event handlers on Cloud Run.

The list of serverless technologies we’ve leveraged at Einride goes on, and keeping track of them is a challenge of its own – especially for new developers joining the team who don’t have the historical context of technologies we’ve already assessed. We built our tech radar tool to curate and document how we develop our backend services, and perform regular reviews to ensure we stay on top of new technologies and updated features.

Einride’s backend tech radar, a tool used by Einride to curate and document their serverless tech stack.

But the journey is far from over. We are constantly evolving our tech stack and experimenting with new technologies on our tech radar. Our future goals include increasing our software supply chain security and building a fully serverless data mesh. We are currently investigating how to leverage ko and Cloud Build to achieve SLSA level 2 assurance in our build pipelines and how to incorporate Dataplex in our serverless data mesh.

A freight industry reimagined with serverless

For Einride, being at the cutting edge of adopting new serverless technologies has paid off. It’s what’s enabled us to grow from a startup to a company scaling globally without any investment into building our own infrastructure teams.

Industry after industry is being transformed by software, including complex industries that have more physical hardware and require more human interaction. To succeed, we must learn from the industries that came before us, recognize the patterns, and apply the most successful solutions. 

In our case, it has been possible not just by building our own platform with a serverless architecture, but also by taking the core ideas of serverless and applying them to the freight industry as a whole.

log file

The content below is taken from the original ( log file), to continue reading please visit the site. Remember to respect the Author & Copyright.

I’m currently using write-host to print variables in the console.

Is it possible to save the output to a log file? How is it done?

submitted by /u/anacondaonline to r/PowerShell

Convert a low-resolution logo to a high-resolution vector graphic in Photoshop

The content below is taken from the original ( Convert a low-resolution logo to a high-resolution vector graphic in Photoshop), to continue reading please visit the site. Remember to respect the Author & Copyright.

Photoshop is one of the top graphics applications on the market. Photoshop has surprising capabilities that professionals and hobbyists enjoy. You can convert a low-resolution logo to a high-resolution vector graphic in Photoshop. Photoshop is primarily meant for raster graphics, but here is another surprise: it can also do some amount of vector work. […]

This article Convert a low-resolution logo to a high-resolution vector graphic in Photoshop first appeared on TheWindowsClub.com.

Microsoft Teams to Let Admins Deploy Up To 500 Teams Using Templates and PowerShell

The content below is taken from the original ( Microsoft Teams to Let Admins Deploy Up To 500 Teams Using Templates and PowerShell), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft Teams is getting a new update that will enable IT admins to deploy and manage teams at scale. Microsoft has announced in a message on the Microsoft 365 admin center that administrators will be able to create up to 500 teams with built-in or custom templates via Teams PowerShell cmdlet.

Specifically, Microsoft Teams will allow IT Pros to add up to 25 users to teams as members or owners. The upcoming update will also make it possible to add or remove members from existing teams. Moreover, admins will be able to send email notifications about the deployment status of each batch to up to 5 people.

Microsoft Teams’ new feature will make team management easier for IT admins

According to Microsoft, the ability to create and manage large numbers of teams at a time should help to significantly reduce deployment time. It will also make it easier for organizations to meet their specific scalability needs.

“Your organization may have a lot of teams that you use to drive communication and collaboration among your frontline workforce, who are spread across different stores, locations, and roles. Currently, there isn’t an easy solution to deploy, set up, and manage these teams and users at scale,” the company explained on the Microsoft 365 admin center.

Microsoft notes that this feature is currently under development, and it will become available for Microsoft Teams users in preview by mid-September. However, keep in mind that the timeline is subject to change.

Microsoft is also introducing a feature that will let users start group chats with the members of a distribution list, mail-enabled security group, or Microsoft 365 groups in Teams. Microsoft believes that this release will help to improve communication and boost the workflow efficiency of employees. You can check out our previous post for more details.

ebay Palos

The content below is taken from the original ( ebay Palos), to continue reading please visit the site. Remember to respect the Author & Copyright.

Given all the license restrictions eBay refurbished Palos carry, what can one do to at least use them as learning devices? Thanks

submitted by /u/zoolabus to r/paloaltonetworks

First Impressions of The RISC OS Developments Network stack

The content below is taken from the original ( First Impressions of The RISC OS Developments Network stack), to continue reading please visit the site. Remember to respect the Author & Copyright.

RISC OS Developments have been working away on their new TCP/IP stack for some time now and it is available to download from their website. So it seemed high time for TIB to wander over and have a look.

Installing the software
The software is available as a zip download.

I would recommend reading the !!Read_Me_First text file (which also tells you how to remove from your system). The Reporting Document tells you how to report any bugs you might find. Features gives you a nice overview and a clear idea of the objectives with this software.

When you are ready to try, Double-click on !Install and follow the prompts, rebooting your machine.

In use
The first indication that things have changed is that you have new options in the Interfaces menu compared to before.

You will also find that it has thoughtfully backed up your old version, just in case…

First impressions
I do not have an IPv6 setup, so my main interest was in updating my existing setup (and being generally nosy). For IPv4, this is a drop-in replacement. Everything works as before (it feels subjectively faster) and it all works fine. Like all the best updates, it is very boring (it just works). RISC OS Developments have done an excellent job of making it all painless. While the software is still technically in beta, I have no issues running it on my main RISC OS machine.

What is really exciting is the potential this software opens up of having a maintained and modern TCP/IP stack with support for modern protocols, IPv6 and proper Wi-Fi.

RISC OS Developments website.

No comments in forum

Connect to Exchange Online PowerShell with MFA

The content below is taken from the original ( Connect to Exchange Online PowerShell with MFA), to continue reading please visit the site. Remember to respect the Author & Copyright.

As per MC407050, Microsoft is going to retire the “Connect to Exchange Online PowerShell with MFA module” (i.e., the EXO V1 module) on Dec 31, 2022, and its support ends on Aug 31, 2022. So, admins should move to the EXO V2 module to connect to Exchange Online PowerShell with multi-factor authentication.

 

Why Should We Switch from the EXO V1 Module?

With the EXO V1 module, admins install the Exchange Online Remote PowerShell module and use the Connect-EXOPSSession cmdlet to connect to Exchange Online PowerShell with MFA. That module relies on basic authentication to connect to EXO. With basic authentication being deprecated, Microsoft has introduced the EXO V2 module with improved security and data retrieval speed.

 

Connect to Exchange Online PowerShell with MFA: 

To connect to Exchange Online PowerShell with MFA, you need to install the Exchange Online PowerShell V2 module. With this module, you can create a PowerShell session with both MFA and non-MFA accounts using the Connect-ExchangeOnline cmdlet. 

Additionally, the Exchange Online PowerShell V2 module uses modern authentication and helps to create unattended scripts to automate the Exchange Online tasks. 

To download and install the EXO V2 module & connect to Exchange Online PowerShell, you can use the script below. 

#Check for EXO V2 module installation
$Module = Get-Module ExchangeOnlineManagement -ListAvailable
if($Module.count -eq 0)
{
 Write-Host "Exchange Online PowerShell V2 module is not available" -ForegroundColor Yellow
 $Confirm = Read-Host "Are you sure you want to install the module? [Y] Yes [N] No"
 if($Confirm -match "[yY]")
 {
 Write-Host "Installing Exchange Online PowerShell module"
 Install-Module ExchangeOnlineManagement -Repository PSGallery -AllowClobber -Force
 Import-Module ExchangeOnlineManagement
 }
 else
 {
 Write-Host "EXO V2 module is required to connect to Exchange Online. Please install the module using the Install-Module ExchangeOnlineManagement cmdlet."
 Exit
 }
}

#Connect with an MFA or non-MFA account; MFA accounts receive an additional authentication prompt
Write-Host "Connecting to Exchange Online..."
Connect-ExchangeOnline

 

If you have already installed the EXO V2 module, you can use the “Connect-ExchangeOnline” cmdlet directly to create a PowerShell session with MFA and non-MFA accounts. For MFA accounts, it will prompt for additional authentication. After the verification, you can access Exchange Online data and Microsoft 365 audit logs. 
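
For reference, a typical interactive connection with the EXO V2 module looks like the line below; the UPN is a placeholder, and supplying it simply pre-fills the account in the modern authentication (and MFA) prompt.

Connect-ExchangeOnline -UserPrincipalName admin@contoso.onmicrosoft.com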

 

Advantages of Using EXO V2 Module: 

  • It uses modern authentication to connect to Exchange Online PowerShell. 
  • A single cmdlet “Connect-ExchangeOnline” is used to connect to EXO with both MFA and non-MFA accounts. 
  • It doesn’t require WinRM basic authentication to be enabled. 
  • Helps to automate EXO PowerShell login with MFA, i.e., unattended scripts (see the sketch after this list). 
  • Contains REST API based cmdlets. 
  • Provides exclusive cmdlets that are optimized for bulk data retrieval. 
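
As a sketch of the unattended (certificate-based) sign-in mentioned above, the line below assumes you have already registered an Azure AD application and uploaded a certificate for it; the thumbprint, app ID, and organization values are placeholders.

#App-only authentication for unattended scripts - no interactive MFA prompt
Connect-ExchangeOnline -CertificateThumbprint "012THUMBPRINT345" -AppId "00000000-0000-0000-0000-000000000000" -Organization "contoso.onmicrosoft.com"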

 

If you are using the Exchange Online Remote PowerShell module, it’s time to switch to the EXO V2 module. Also, you can update your existing scripts to adopt the EXO V2 module. Happy Scripting! 

The post Connect to Exchange Online PowerShell with MFA  appeared first on Office 365 Reports.

OK guys I’m from Colorado so gotta ask???? how do any that do off grid setups are preventing against battery discharge and fire associated with it…

The content below is taken from the original ( OK guys I’m from Colorado so gotta ask???? how do any that do off grid setups are preventing against battery discharge and fire associated with it…), to continue reading please visit the site. Remember to respect the Author & Copyright.

OK yall so I gotta ask as wild fires are scary as shit and they claim alot of my state as well as others each year… that stated, any that do off grid setups have you thought of anti fire precautions?

Below I have outlined a simple 10 dollar (does 100s of hotspots for 15 bucks lol)

OK ingredients per hotspot: -Bromochlorodifluoromethane powder> about 5 to 8 grams per Hotspot needed (1kilo is like 5 bucks max on alibaba)

-1 water Ballon> or any Ballon with extremely thin rubber on the walls just need it to pop very very easy

  • one small firecracker firework> the kind you get in a pack of 40 and it says ‘caution: don’t unpack the thing light as one’ or what ever it says but we all do it anyways cuz we are not as smart of creatures as we think we are 😆

-firework wick> you will use like 2 or 3 foot give or take per hotspot (a 10m rolls like 3 or 4 bucks I think almost anywhere that sells it)

-1 tiny sized rubber hair band per hotspot> the Lil Lil tiny ones that are like the size of a pinky finger round (a packs 50 cents at family dollar or use o-rings if you are a tool guy and have lil tiny o-rings)

-You need a balloon for when you fill the "homemade fire snuffer bomb" (patent pending) jkjk 😆

-you need a few straw for filling as well (and a small cardboard that can crease but stay ridged will help but can be done with out)

-peel apart bread ties or a roll of sone other very thin wire

OK so see we got about 15$USD of materials here, what your gunna do is…

1) put on gloves and at least a dust particulate 3m mask and some rubber gloves!

PSA of the day! –>(your bodies worth it and that powder is not really supposed to be huffed no matter the intent to or accidental …..cancer don’t care. Cancer happens either way if your adult dont be lazy get in the car or on the bicycle or long board like us millennials do so frequently it seems…. just go buy a pair of 99cent kitchen dish gloves at family dollar, and if you a kid then steal moms from under the sink for 20 mins it wont destroy them just rinse and return)

Now….

2) you are going to take said Bromochlorodifluoromethane powder and put around 7-10 grams in a ballon (this is where the carboard folded in a square angle crease comesin handy but if you skiped it take the drinking straw and use it to scoop a few grams at a time into the balloon)

Then…

3) take the pack of fire crackers apart and strip them into single firecracker units.

4) take 3ish foot of wick off the roll and then attach it to the wick of one fire cracker (overlap the two wicks so that the overlap about a inch then use a piece of wire to wrap around it In a spirl fashion from the fire cracker to the 3ish foot piece secure it as best you can so it will make good contact if triggered to activate save the day for some random forrest or nieghborhood or where ever it be!)

Now set that aside and return to the powder filled balloon…

5) get a hair band ready and have the extra balloon *** make sure its a powder free clean balloon*** I mentioned that it was for the help in filling in the list at the start!

6) blow into the clean Balloon enough so it’s the size off a orange or a apple or baseball or what ever you want to picture it as that’s that size… (***Big Tip: I use tape, after I blow in it, so I can put on a table edge and duck tape the thing to the table so it stays filled but can be removed and let deflate when ready😉)

OK on to…

7) place firecracker in the powder balloon shove it in the middle so that it is completely surrounded by the powder, but leave the long wick hanging out the balloon!

8)**** put a clean straw**** in the powder balloon (you dont want it contaminated on both ends if you used for filling with it!) Then put the hairband looped multiple times on its self (objective is: to have the band wrapped enough so once on the balloon neck and you pull the straw out you have a tightly sealed water balloon this is so it holds a slight amount of air)

9) take the clean air filled balloon from [step 6)] and then put on the opposing end of the straw snug as you can without to much air loss… squeeze air into the the powder Balloon from the air Balloon (if it is hard to get that much in due to loss when putting it on straw then fill the air balloon more and go round 2)

Finally….

10) when powder balloon is inflated slightly think like tennis ball ish sized and you feel it feel like a balloon when you squish lightly on it (not like a hackie sack more one of those bags of air that products get shipped with the ones that are a string of like seven bags of air…. I digress you get the picture) pull the straw when you are happy and you now have a homemade class D fire extinguisher….

Place that bad boy inside the housing of the miner and battery set up and drape the wick so it is laying all around the inside the enclosure you use.

Battery vents>wick gets lit by the battery igniting>firecracker goes #pop>powder from previously inflated balloon fills box> Battery fire fixed in a matter of 5 or less seconds from start of fire

That said if you’re using a big housing enclosure plan accordingly by your specs in size of enclosure… if it’s a big space maybe put 2 fire snuffer bombs in each. Be smart, the nature you’re putting it in will prefer to not have a fire filled future in the event its a needed precaution!

There ya go!

thanks for the read If you made it to the end. That’s my solution lmk what yall did if you had a good solution as well! I’d love to hear it 😀

submitted by /u/Bryan2966s to r/HeliumNetwork

RISCOSbits announces their RISC OS rewards scheme

The content below is taken from the original ( RISCOSbits announces their RISC OS rewards scheme), to continue reading please visit the site. Remember to respect the Author & Copyright.

RISCOSbits have announced their new RISC OS rewards scheme. The idea is to reward loyal RISC OS users by offering them discounts on new RISC OS hardware.

To start with, they are offering anyone who can show they have purchased an Ovation Pro licence a 10% discount on any PiHard systems. We have previously reviewed the PiHard (which is now my main RISC OS machine at work and what I am typing this on).

This offer is also open to any existing RISCOSbits customers, who can also claim 10% off a new system.

To claim your discount, you should contact RISCOSbits directly.

There will be additional special offers. If you are on twitter, watch out for the hashtag #RISC_OS_Rewards

RISCOSbits website

No comments in forum

Validating and Improving the RTO and RPO Using AWS Resilience Hub

The content below is taken from the original ( Validating and Improving the RTO and RPO Using AWS Resilience Hub), to continue reading please visit the site. Remember to respect the Author & Copyright.

“Everything fails, all the time” is a famous quote from Werner Vogels, VP and CTO of Amazon.com. When you design and build an application, a typical goal is to have it working; the next is to keep it running, no matter what disruptions may occur. It is crucial to achieve resiliency, but you need to consider how to define it first and which metrics to use to measure your application’s resiliency. Resiliency can be defined in terms of metrics called RTO (Recovery Time Objective) and RPO (Recovery Point Objective). RTO is a measure of how quickly your application can recover after an outage, and RPO is a measure of the maximum amount of data loss that your application can tolerate.

To learn more on how to establish RTO and RPO for your application, please refer to “Establishing RTO and RPO Targets for cloud applications”.

AWS Resilience Hub is a new service launched in November 2021. This service is designed to help you define, validate, and track the resilience of your applications on the AWS cloud.

You can define the resilience policies for your applications. These policies include RTO and RPO targets for application, infrastructure, Availability Zone, and Region disruptions. Resilience Hub’s assessment uses best practices from the AWS Well-Architected Framework. It will analyze the components of an application, such as compute, storage, database, and network, and uncover potential resilience weaknesses.

In this blog we will show you how Resilience Hub can help you validate RTO and RPO at the component level for four types of disruptions, which in turn can help you improve the resiliency of your entire application stack.

  1. Customer Application RTO and RPO
  2. Cloud Infrastructure RTO and RPO
  3. AWS Infrastructure Availability Zone (AZ) outage
  4. AWS Infrastructure Region outage

Customer Application RTO and RPO

Application outages occur when the infrastructure stack (hardware) is healthy but the application stack (software) is not. This kind of outage may be caused by configuration changes, bad code deployments, integration failures, etc. Determining RTO and RPO for application stacks depends on the criticality and importance of the application, as well as your compliance requirements. For example, a mission-critical application could have an RTO and RPO of 5 minutes.

Example: Your critical business application is hosted from an Amazon Simple Storage Service (Amazon S3) bucket, and you set it up without cross-Region replication and versioning. Figure 1 shows that application RTO and RPO are unrecoverable against a target of 5 minutes for both RTO and RPO.

Figure 1. Resilience Hub assessment of the Amazon S3 bucket against Application RTO

After running the assessment, Resilience Hub provides a recommendation to enable versioning on the Amazon S3 bucket, as shown in Figure 2.

Figure 2. Resilience recommendation for Amazon S3

After enabling versioning, you can achieve an estimated RTO of 5 minutes and an RPO of 0 seconds. Versioning allows you to preserve, retrieve, and restore any version of any object stored in a bucket, improving your application resiliency.

Resilience Hub also provides the cost associated with implementing the recommendations. In this case, there is no cost for enabling versioning on Amazon S3 bucket. Normal S3 pricing applies to each version of an object. You can store any number of versions of the same object, so you may want to implement some expiration and deletion logic if you plan to make use of versioning.

Resilience Hub can provide one or more recommendations to satisfy requirements such as cost, high availability, and fewest changes. As shown in Figure 2, adding versioning to the S3 bucket satisfies both the high-availability optimization and the best attainable architecture with the least change.

Cloud Infrastructure RTO and RPO

A cloud infrastructure outage occurs when underlying infrastructure components, such as hardware, fail. Consider a scenario where a partial outage occurred because of a component failure.

For example, one of the components in your mission-critical application is an Amazon Elastic Container Service (ECS) cluster running on an Elastic Compute Cloud (EC2) instance, and your targeted infrastructure RTO and RPO is 1 second. Figure 3 shows that you are unable to meet your targeted infrastructure RTO of 1 second.

Figure 3. Resilience Hub assessment of the ECS application against Infrastructure

Figure 4 shows Resilience Hub’s recommendation to add AWS Auto Scaling groups and Amazon ECS capacity providers in multiple AZs.

Figure 4. Resilience Hub recommendation for the ECS cluster – to add Auto Scaling groups and capacity providers in multiple AZs.

AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Amazon ECS capacity providers are used to manage the infrastructure the tasks in your clusters use. They can use AWS Auto Scaling groups to automatically manage the Amazon EC2 instances registered to their clusters. By applying the Resilience Hub recommendation, you will achieve an estimated RTO and RPO of near zero seconds, and the estimated cost for the change is $16.98/month.

AWS Infrastructure Availability Zone (AZ) outage

The AWS global infrastructure is built around AWS Regions and Availability Zones to achieve High Availability (HA). AWS Regions provide multiple physically separated and isolated Availability Zones, which are connected with low-latency, high-throughput, and highly redundant networking.

For example, you have set up a public NAT gateway in a single Availability Zone to allow instances in a private subnet to send outbound traffic to the internet. You have deployed Amazon Elastic Compute Cloud (Amazon EC2) instances in multiple Availability Zones.

Figure 5. Resilience Hub assessment of the Single-AZ NAT gateway

Figure 5 shows the Availability Zone disruption as unrecoverable, which does not meet the Availability Zone RTO goals. NAT gateways are fully managed services; there is no hardware to manage, so they are resilient (0s RTO) to infrastructure failure. However, deploying only one NAT gateway in a single AZ leaves the architecture vulnerable. If the NAT gateway’s Availability Zone is down, resources deployed in other Availability Zones lose internet access.

Figure 6. Resilience Hub recommendation to create Availability Zone-independent NAT Gateway architecture.

Figure 6 shows Resilience Hub’s recommendation to deploy NAT Gateways into each Availability Zone where corresponding EC2 resources are located.

Following Resilience Hub’s recommendation, you can achieve the lowest possible RTO and RPO of 0 seconds in the event of an Availability Zone disruption and create an Availability Zone-independent architecture at an estimated cost of $32.94 per month.

NAT gateway deployment in multiple AZs achieves the lowest RTO/RPO for an Availability Zone disruption, the lowest cost, and the fewest changes, so the recommendation is the same for all three options.

AWS Infrastructure Region outage

An AWS Region consists of multiple, isolated, and physically separated AZs within a geographical area. This design achieves the greatest possible fault tolerance and stability. For a disaster event that includes the risk of losing multiple data centers or a regional service disruption, it’s a best practice to consider a multi-Region disaster recovery strategy to mitigate against natural and technical disasters that can affect an entire Region within AWS. If one or more Regions, or a regional service that your workload uses, are unavailable, this type of outage can be resolved by switching to a secondary Region. It may be necessary to define a regional RTO and RPO if you have a multi-Region dependent application.

For example, you have a Single-AZ Amazon RDS for MySQL instance as part of a global mission-critical application, and you have configured a 30-minute RTO and a 15-minute RPO for all four disruption types. Each RDS instance runs on an Amazon EC2 instance backed by an Amazon Elastic Block Store (Amazon EBS) volume for storage. RDS takes daily snapshots of the database, which are stored durably in Amazon S3 behind the scenes. It also regularly copies transaction logs to S3—at up to 5-minute intervals—providing point-in-time recovery when needed.

If an underlying EC2 instance suffers a failure, RDS automatically tries to launch a new instance in the same Availability Zone, attach the EBS volume, and recover. In this scenario, RTO can vary from minutes to hours. The duration depends on the size of the database and the failure and recovery approach. RPO is zero in the case of recoverable instance failure because the EBS volume is recovered. If there is an Availability Zone disruption, you can create a new instance in a different Availability Zone using point-in-time recovery. Single-AZ does not give you protection against regional disruption. Figure 7 shows that you are not able to meet the regional RTO of 30 minutes and RPO of 15 minutes.

Figure 7. Resilience Hub assessment for the Amazon RDS

Figure 8. Resilience Hub recommendation to achieve region level RTO and RPO

As shown in Figure 8, Resilience Hub provides three recommendations: optimize to handle Availability Zone disruptions, optimize for cost, and optimize for minimal changes.

Recommendation 1 “Optimize for Availability Zone RTO/RPO”: The changes recommended under this option will help you achieve the lowest possible RTO and RPO in the event of an Availability Zone disruption. For a Single-AZ RDS instance, Resilience Hub recommends changing the database to Aurora and adding two read replicas in the same Region to achieve the targeted RTO and RPO for Availability Zone failure. It also recommends adding a read replica in a different Region to achieve resiliency against regional disruption. The estimated cost for these changes, as shown in Figure 8, is $66.85 per month.

Amazon Aurora read replicas share the same data volume as the original database instance. Aurora handles an Availability Zone disruption by fully automating the failover with no data loss. Aurora creates a highly available database cluster with synchronous replication across multiple AZs. This is considered to be the better option for production databases where data backup is a critical consideration.

Recommendation 2 “Optimize for cost”: These changes will optimize your application to reach the lowest cost that still meets your targeted RTO and RPO. The recommendation here is to keep a Single-AZ Amazon RDS instance and create a read replica in the primary Region with an additional read replica in a secondary Region. The estimated cost for these changes is $54.38 per month. You can promote a read replica to a standalone instance as a disaster recovery solution if the primary DB instance fails or is unavailable during a Region disruption.

Recommendation 3 “Optimize for minimal changes”: These changes will help you meet the targeted RTO and RPO while keeping implementation changes minimal. Resilience Hub recommends creating a Multi-AZ writer and a Multi-AZ read replica in two different Regions. The estimated cost for these changes is $81.56 per month. When you provision a Multi-AZ database instance, Amazon RDS automatically creates a primary database instance and synchronously replicates the data to a standby instance in a different Availability Zone. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby database instance. Since the endpoint for your database instance remains the same after a failover, your application can resume database operations without the need for manual administrative intervention.

Although all three recommendations help you achieve the targeted RTO and RPO, the estimated costs and efforts may vary.

Conclusion

To build a resilient workload, you need to have the right best practices in place. In this post, we showed you how to improve the resiliency of your business application and achieve targeted RTO and RPO for application, infrastructure, Availability Zone, and Region disruptions using recommendations provided by Resilience Hub. To learn more and try the service for yourself, visit the AWS Resilience Hub page.

Authors:

Monika Shah

Monika Shah is a Technical Account Manager at AWS where she helps customers navigate through their cloud journey with a focus on financial and operational efficiency. She worked for Fortune 100 companies in the networking and telecommunications fields for over a decade. In addition to her Masters degree in Telecommunications, she holds industry certifications in networking and cloud computing. In her free time, she enjoys watching thriller and comedy TV shows, playing with children, and exploring different cuisines.

Divya Balineni

Divya is a Sr. Technical Account Manager with Amazon Web Services. She has over 10 years of experience managing, architecting, and helping customers with resilient architectures to meet their business continuity needs. Outside of work, she enjoys gardening and travel.

Rescuezilla 2.4 is here: Grab it before you need it

The content below is taken from the original ( Rescuezilla 2.4 is here: Grab it before you need it), to continue reading please visit the site. Remember to respect the Author & Copyright.

A fork of Redo Rescue that outdoes the original – and beats Clonezilla too

Version 2.4 of Rescuezilla – which describes itself as the “Swiss Army Knife of System Recovery” – is here and is based on Ubuntu 22.04.…

Use a PowerShell Substring to Search Inside a String

The content below is taken from the original ( Use a PowerShell Substring to Search Inside a String), to continue reading please visit the site. Remember to respect the Author & Copyright.

Need to search for a string inside a string? Never fear, PowerShell substring is here! In this article, I guide you through how to ditch objects and search inside strings.

The PowerShell substring

I love being on social media because I always come across something interesting related to PowerShell. Sometimes it is a trick I didn’t know about or a question someone is trying to figure out. I especially like the questions because it helps me improve my understanding of how people learn and use PowerShell. Workload permitting, I’m happy to jump in and help out.

One such recent challenge centered on string parsing. Although you’ll hear me go on and on about objects in the pipeline, there’s nothing wrong with parsing strings if that’s what you need. There are plenty of log files out there that need parsing, and PowerShell can help.

Search for a string in a string

In this case, I’m assuming some sort of log file is in play. I don’t know what the entire log looks like or what the overall goal is. That’s OK. We can learn a lot from the immediate task. Let’s say you have a string that looks like this:

Mailbox:9WJKDFH-FS349-1DSDS-OIFODJFDO-7F21-FC1BF02EFE26 (O'Hicks, Jeffery(X.))

I’ve changed the values a little bit and modified my name to make it more challenging. The goal is to grab the name from the string. I want to end up with:

O'Hicks, Jeffery(X.)

There are several different ways you can accomplish this. The right way probably depends on your level of PowerShell experience and what else you might want to accomplish. I’ll start by assigning this string to variable $s.
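For clarity, here is that assignment as a quick sketch; the rest of the examples assume it:

$s = "Mailbox:9WJKDFH-FS349-1DSDS-OIFODJFDO-7F21-FC1BF02EFE26 (O'Hicks, Jeffery(X.))"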

Using the PowerShell split operator

When I am faced with string parsing, sometimes it helps to break the string down into more manageable components. To do that I can use the split operator. There is also a split method for the string class. I am going to assume you will take some time later to read more about PowerShell split.

$s -split "\s",2

The “\s” is a regular-expression pattern that means a space. The 2 parameter value indicates that I only want two substrings. In other words, split on the first space found.
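Run against our sample string, that should return two elements, roughly like this:

$s -split "\s",2
# Mailbox:9WJKDFH-FS349-1DSDS-OIFODJFDO-7F21-FC1BF02EFE26
# (O'Hicks, Jeffery(X.))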

Using the split operator in Windows PowerShell. (Image Credit: Jeff Hicks)

I end up with an array of two elements. All I need is the second one.

$t = ($s -split "\s",2)[1]

Using the substring method

Now for the parsing fun, let’s use the string object’s SubString() method.

$t.Substring(1,$t.length-2)

I am telling PowerShell to get part of the string in $t, starting at character position 1 and then getting the next X number of characters. In this case, X is equal to the length of the string, $t, minus 2. This has the net effect of stripping off the outer parentheses.
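With the sample value, the arithmetic works out like this (a quick sketch, using $t from the split above):

$t = "(O'Hicks, Jeffery(X.))"     # 22 characters
$t.Substring(1, $t.Length - 2)    # start at index 1, take the next 20 characters
# O'Hicks, Jeffery(X.)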

Using the PowerShell substring method (Image Credit: Jeff Hicks)

Here’s a variation, where I split on the “(” character.

($s -split "\(",2)
$t = ($s -split "\(",2)[1]
$t.Substring(0,$t.length-1)

Using the split operator on a character in Windows PowerShell. (Image Credit: Jeff Hicks)

The difference is that this gets rid of the leading parenthesis. So, all I need to do is get everything up to the last character. By the way, if you look at a string object with Get-Member, you will see some Trim methods. These are for removing leading and/or trailing white space. Those methods don’t apply here.

Split a string using array index numbers

There’s one more way to split a string that you might find useful. You can treat all strings as an array of characters. This means you can use array index numbers to reference specific array elements.

Counting elements in an array starts at 0. If you run $t[0] you’ll get the first element of the array, in this case ‘O’. You can also use the range operator.

$t[0..5]

An alternative to splitting a string in Windows PowerShell. (Image Credit: Jeff Hicks)

Right now, $t has an extra ) at the end that I don’t want. I need to get everything up to the second-to-last element.

$t[0..($t.length-2)]

This will give me an array displayed vertically, which you can see in the screenshot above. With that said, it’s easy to join the sliced string back together.

-join $t[0..($t.length-2)]

It might look a bit funny to lead with an operator, but if you read about_join, then you’ll see this is a valid approach that works.
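If leading with the operator looks odd to you, the binary form of -join with an empty delimiter gives the same result:

$t[0..($t.Length - 2)] -join ''
# O'Hicks, Jeffery(X.)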

Using the -join operator to put our string back together. (Image Credit: Jeff Hicks)

Simple function for string parsing

I’m assuming you want an easy way to do this type of parsing, so I wrote a simple function.

Function Optimize-String {
    [cmdletbinding()]
    Param(
        [Parameter(Position=0,Mandatory,HelpMessage="Enter a string of text")]
        [ValidateNotNullOrEmpty()]
        [string]$Text,
        [int]$Start=0,
        [int]$End=0
    )
    #trim off spaces
    $string = $Text.Trim()
    #get length value when starting at 0
    $l = $string.Length - 1
    #get array elements and join them back into a string
    -join $string[$Start..($l - $End)]
} #end function

The function takes a string of text and returns the substring minus X number of characters from the start and X number of characters from the end. Now I have an easy command to parse strings.

Using the optimize-string function in Windows PowerShell. (Image Credit: Jeff Hicks)

If you look through the code, then you’ll see I am using the Trim() method. I am using it because I don’t want any extra spaces at the beginning or end of the string to be included.

Going back to my original string variable, I can now parse it with a one-line command:

optimize-string ($s -split("\s",2))[1] -Start 1 -end 1

Using the optimize-string function to search inside a string (Image Credit: Jeff Hicks)

If it makes more sense to you to break this into separate steps, that’s OK.

$arr = $s -split("\s",2)
$text =  $arr[1]
optimize-string $text -Start 1 -end 1

Next time we’ll look at string parsing using regular expressions, and it won’t be scary, I promise.

Palo Alto Virtual Lab

The content below is taken from the original ( Palo Alto Virtual Lab), to continue reading please visit the site. Remember to respect the Author & Copyright.

I have seen several posts asking about virtual lab costs, etc…

When I saw my daily Fuel email it reminded me of those posts.

If you are looking to become certified or just want to learn more about PAs, I would recommend joining your local Fuel User Group chapter. Go to https://fuelusergroup.org and sign up; it's free. Now that you have an account, you can access resources and you will get a local rep to help you on your learning journey. One of the resources you gain is free access to a virtual lab.

https://www.fuelusergroup.org/page/fuel-virtual-lab

Fuel User Group is great. You meet all kinds of IT professionals, vendors, and Palo reps. Hold on to those reps because they have access to Palo people that you and I do not, plus they may also know other professionals with the knowledge you're looking for. I am looking forward to getting back to our in-person meetings. In the meetings they bring great PA information and yes, they do have a sponsor doing a short pitch, but honestly it fits the vibe of the meeting. Time is not wasted and the swag is great!

submitted by /u/Electronic_Front_549 to r/paloaltonetworks

RISCOSbits announces ‘RISC OS Rewards’ scheme

The content below is taken from the original ( RISCOSbits announces ‘RISC OS Rewards’ scheme), to continue reading please visit the site. Remember to respect the Author & Copyright.

Also, you may or may not already know, but Ovation Pro is now free to download. RISCOSbits has launched a new initiative aimed at rewarding loyal RISC OS users for their continuing patronage by offering a discount on future purchases. The ‘RISC OS Rewards’ scheme allows for a 10% discount against the purchase of any computer system available from the PiHard website, including the ‘Fourtify’ option that provides a way to upgrade an existing Raspberry Pi 4 system to one of those on offer. There are two main ways to…

Learn How to Switch to Modern Authentication in Office 365

The content below is taken from the original ( Learn How to Switch to Modern Authentication in Office 365), to continue reading please visit the site. Remember to respect the Author & Copyright.

Hey guys,

u/junecastillote just wrote a new blog post you may enjoy on the ATA blog.

"Learn How to Switch to Modern Authentication in Office 365"

Summary: Enhance your IT organization's security and capabilities by switching to modern authentication in Office 365 in this ATA Learning tutorial!

https://adamtheautomator.com/modern-authentication-in-office-365/

submitted by /u/adbertram to r/Cloud

New pfSense docs: Configuring pfSense Software for Online Gaming

The content below is taken from the original ( New pfSense docs: Configuring pfSense Software for Online Gaming), to continue reading please visit the site. Remember to respect the Author & Copyright.

As of July there are now pfSense gaming reference configurations for Xbox, Playstation, Nintendo Switch/Wii, Steam and Steam Deck, including the NAT and UPNP settings you will need to enable for optimal NAT types.

Learn more:

https://docs.netgate.com/pfsense/en/latest/recipes/games.html

submitted by /u/crashj to r/PFSENSE

Sonos Move wigging out while we were out of town. Any ideas?

The content below is taken from the original ( Sonos Move wigging out while we were out of town. Any ideas?), to continue reading please visit the site. Remember to respect the Author & Copyright.

submitted by /u/DeLa_Sun to r/sonos

[Script sharing] Find all Office 365 Inbox rules that forwards emails to external users

The content below is taken from the original ( [Script sharing] Find all Office 365 Inbox rules that forwards emails to external users), to continue reading please visit the site. Remember to respect the Author & Copyright.

submitted by /u/Kathiey to r/usefulscripts

How to explain what an API is – and why they matter

The content below is taken from the original ( How to explain what an API is – and why they matter), to continue reading please visit the site. Remember to respect the Author & Copyright.

Some of us have used them for decades, some are seeing them for the first time on marketing slides

Systems Approach Explaining what an API is can be surprisingly difficult.…

New: CredHistView v1.00

The content below is taken from the original ( New: CredHistView v1.00), to continue reading please visit the site. Remember to respect the Author & Copyright.

Every time you change the login password on your system, Windows stores the hashes of the previous password in the CREDHIST file (located in %appdata%\Microsoft\Protect\CREDHIST). This tool allows you to decrypt the CREDHIST file and view the SHA1 and NTLM hashes of all previous passwords you used on your system. In order to decrypt the file, you have to provide your latest login password. You can use this tool to decrypt the CREDHIST file on your currently running system, as well as to decrypt a CREDHIST file stored on an external hard drive.

Microsoft Rolls Out Dynamic Administrative Units Support for Azure AD

The content below is taken from the original ( Microsoft Rolls Out Dynamic Administrative Units Support for Azure AD), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft has announced the public preview of dynamic administrative units with Azure Active Directory (Azure AD). The new feature lets organizations configure rules for adding or deleting users and devices in administrative units (AUs).

Azure AD administrative units launched in public preview back in 2020. The feature lets enterprise admins logically divide Azure AD into multiple administrative units. Specifically, an administrative unit is a container that can be used to delegate administrative permissions to a subset of users.

Microsoft Rolls Out Dynamic Administrative Units Support for Azure AD

Previously, IT Admins were able to manage the membership of administrative units in their organization manually. The new dynamic administrative units feature now enables IT Admins to specify a rule to automatically perform the addition or deletion of users and devices. However, this capability is currently not available for groups.

The firm also adds that all members of dynamic administrative units are required to have Azure AD Premium P1 licenses. This means that if a company has 1,000 end-users across all dynamic administrative units, it would need to purchase at least 1,000 Azure AD Premium P1 licenses.

“Using administrative units requires an Azure AD Premium P1 license for each administrative unit administrator, and an Azure AD Free license for each administrative unit member. If you are using dynamic membership rules for administrative units, each administrative unit member requires an Azure AD Premium P1 license,” Microsoft noted on a support page.

How to create dynamic membership rules in Azure AD

According to Microsoft, IT Admins can create rules for dynamic administrative units via Azure portal by following these steps:

  1. Select an administrative unit and click on the Properties tab.
  2. Set the Membership Type to Dynamic User or Dynamic Device and click the Add dynamic query option.
  3. Now, use the rule builder to create the dynamic membership rule and click the Save button.
  4. Finally, click the Save button on the Properties page to save the membership changes to the administrative unit.

Currently, the dynamic administrative units feature only supports one object type (either users or devices) in the same dynamic administrative unit. Microsoft adds that support for both users and devices is coming in future releases. You can head to the support documentation to learn more about dynamic administrative units.
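If you prefer scripting to the portal, the same thing can also be done with the Microsoft Graph PowerShell SDK. Treat the sketch below as an illustration only: it assumes the Microsoft.Graph modules are installed, that your account has rights to manage administrative units, and the membership rule shown is a hypothetical example that uses the same rule syntax as dynamic groups.

# Connect with a scope that allows managing administrative units
Connect-MgGraph -Scopes "AdministrativeUnit.ReadWrite.All"

# Create a dynamic administrative unit whose user membership is driven by a rule
$params = @{
    displayName                   = "Sales Users"
    description                   = "Dynamically populated AU for the Sales department"
    membershipType                = "Dynamic"
    membershipRule                = '(user.department -eq "Sales")'
    membershipRuleProcessingState = "On"
}
New-MgDirectoryAdministrativeUnit -BodyParameter $params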

How to Download a File using PowerShell

The content below is taken from the original ( How to Download a File using PowerShell), to continue reading please visit the site. Remember to respect the Author & Copyright.

PowerShell can download files from the Internet and your local network to your computer. Learn how to use PowerShell’s Invoke-WebRequest and Start-BitsTransfer cmdlets to download files here.

Welcome to another post on how PowerShell can assist you in your daily job duties and responsibilities. Being able to download files from the Internet and your local network with PowerShell is something I hadn’t really thought a lot about. But, just thinking about the power and scalability of PowerShell intrigues me to no end.

There are so many possibilities around scripts, downloading multiple files at the same time, auto extracting ZIP files, the list goes on and on. If you ever wanted to download the various Windows patching files from the Windows Update Catalog, you could script it if you have the exact URL.

While a bit tedious at first, you could definitely get your groove on after a little bit of tweaking and learning. But let’s first discuss prerequisites.

Prerequisites

They aren’t stringent. You just need PowerShell 5.1 or newer to use the commands in this post. Windows 10 and Windows 11 already include at least version 5.1. Windows Server 2012/R2 comes with version 4.0.
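To check which version a machine is running before you go any further, query the built-in $PSVersionTable variable:

# Display the PowerShell engine version on this machine
$PSVersionTable.PSVersion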

You can also grab the latest and greatest by downloading PowerShell 7.2.x from this link. And, come to think of it, I’ll use this URL later in the article and show you how to download this file… once you have an appropriate version installed. 🙂

Use PowerShell to download a file from a local network source

Let me start by letting you know I’m utilizing my (Hyper-V) Windows Server 2022 Active Directory lab, again. I’ll be running these commands on my Windows 11 client machine.

First, let’s use the Copy-Item cmdlet to download a file from a local fileserver on my LAN. This command at a minimum just needs a source and destination. I have an ISO in my Downloads folder I need to put up on my G: drive. I’ll create two variables for the source folder and the destination folder.

$source = "c:\users\mreinders\downloads\"
$destination = "\\ws16-fs01-core\shares\folder_01\Extra\"

Then, I’ll run the command and include the -Recurse switch to copy the folder AND the ISO file inside it.

Copy-Item -path $source -destination $destination -Recurse
Using the Copy-Item command to copy an ISO to a fileserver

As you can see, the ISO file was copied to the G: drive.
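If you'd rather confirm the copy from the console than from File Explorer, a quick check does the trick (a sketch that reuses the $destination variable from above):

# Verify the ISO landed on the fileserver share
Get-ChildItem -Path $destination -Recurse -Filter *.iso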

Use PowerShell to download a file from the Internet

Next, let’s work on downloading files from the Internet. We can start with the Invoke-WebRequest cmdlet.

With the Invoke-WebRequest cmdlet 

As I said earlier, I can show you how to download the MSI file for the latest (as of this writing) PowerShell 7.2.2 (x64) version using Invoke-WebRequest. Again, let’s set up some variables first. We can use the general concept of a source variable and destination variable.

$url = "https://github.com/PowerShell/PowerShell/releases/download/v7.2.2/PowerShell-7.2.2-win-x64.msi"
$dest = "c:\users\mreinders\downloads\Latest_Powershell.MSI"

Invoke-WebRequest -Uri $url -OutFile $dest
Now, using Invoke-WebRequest to download the latest PowerShell MSI installer from GitHub

This 102 MB file took about 4 or 5 minutes to download, which is quite a bit longer than I would expect. That’s due to the inherent nature of this specific cmdlet. The file is buffered in memory first, then written to disk.

I downloaded the PowerShell MSI file to my Downloads folder
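If you want to put a number on that, wrapping the call in Measure-Command times the download for you (a quick sketch reusing the same variables):

# Time the buffered Invoke-WebRequest download
Measure-Command {
    Invoke-WebRequest -Uri $url -OutFile $dest
}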

We can get around this inefficiency by using the Background Intelligent Transfer Service (BITS) in Windows. I’ll show you further below how to utilize all your bandwidth.

Cases when downloads require authentication

You will certainly come across files that require authentication before downloading. If this is the case, you can use the -Credential parameter of Invoke-WebRequest to handle these downloads.

Let’s say there is a beta or private preview of an upcoming PowerShell version (7.3?) that requires authentication. You can utilize these commands (or create a PowerShell script) to download this hypothetical file.

# Variables
$url = "https://github.com/PowerShell/PowerShell/Preview/download/v7.3.0-Preview3/PowerShell-7.3.0-Preview3-win-x64.msi"
$dest = "c:\users\mreinders\downloads\PowerShell-7.3.0-Preview3.MSI"

# Username and password
$username = 'mreinders'
$password = 'PleaseLetMeIn'

# Convert to a SecureString
$secPassword = ConvertTo-SecureString $password -AsPlainText -Force

# Create a Credential Object
$credObject = New-Object System.Management.Automation.PSCredential ($username, $secPassword)

# Download file
Invoke-WebRequest -Uri $url -OutFile $dest -Credential $credObject
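In practice, hard-coding a password in a script isn't ideal. A safer variation, sketched below, prompts for the credential interactively instead:

# Prompt for credentials rather than storing a plain-text password
$credObject = Get-Credential -Message "Enter the account used for the download"
Invoke-WebRequest -Uri $url -OutFile $dest -Credential $credObject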

Downloading and extracting .zip files automatically

Let’s see another example of how PowerShell can assist you with automation. We can use some more variables and a COM object to download a .ZIP file and then extract its contents to a location we specify. Let’s do this!

There’s a sample .ZIP file stored up on GitHub. We’ll store that in our $url variable. We’ll create another variable for our temporary ZIP file. Then, we’ll store the path to where the ZIP file will be extracted in a third variable.

$url = "https://ift.tt/S9zC5Dd"
$zipfile = "c:\users\mreinders\downloads\" + $(Split-Path -Path $url -Leaf)
$extractpath = "c:\users\mreinders\downloads\Unzip"

Invoke-WebRequest -Uri $url -OutFile $zipfile
Using Invoke-WebRequest to download a ZIP file in preparation for extracting

Now, let’s use the COM object to extract the ZIP file to our destination folder.

# Create the COM Object instance
$objShell = New-Object -ComObject Shell.Application

# Extract the Files from the ZIP file
$extractedFiles = $ObjShell.NameSpace($zipFile).Items()

# Copy the new extracted files to the destination folder
$ObjShell.NameSpace($extractPath).CopyHere($extractedFiles)
Using a COM object to extract the contents of the ZIP file
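On PowerShell 5.1 and later, the built-in Expand-Archive cmdlet can do the same job with less ceremony. Here is a minimal sketch using the same variables (it assumes the downloaded file carries a .zip extension, which Expand-Archive requires):

# Extract the downloaded ZIP file to the destination folder
Expand-Archive -Path $zipfile -DestinationPath $extractpath -Force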

With the Start-BitsTransfer cmdlet

Now, let’s see if we can speed up file transfers with PowerShell. For that, we’ll utilize the aforementioned Background Intelligent Transfer Service. This is especially helpful as the BITS service lets you resume downloads after network or Internet interruptions.

We can use similar variables and see how long it takes to download our 102 MB PowerShell MSI installer:

$url = "https://github.com/PowerShell/PowerShell/releases/download/v7.2.2/PowerShell-7.2.2-win-x64.msi"
$destination = "c:\users\mreinders\downloads\"

Start-BitsTransfer -Source $url -Destination $destination
We downloaded the MSI file in a flash using Background Intelligent Transfer Service (BITS)!

Ok, that went MUCH faster and finished in about 4 seconds. 🙂 The power of BITS!

Downloading multiple files with Start-BitsTransfer

To close out this post, let me show you how you can download multiple files with the Start-BitsTransfer cmdlet.

There are many websites that store sample data for many training and educational programs. I found one that includes a simple list of files – HTTP://speed.transip.nl.

We’ll parse and store the files in a variable, then start simultaneous downloads of the files asynchronously. Finally, we run the Complete-BitsTransfer command to convert all the TMP files downloaded to their actual filenames.

$url = "http://speed.transip.nl"
$content = Invoke-WebRequest -URI "http://speed.transip.nl"

$randomBinFiles = $content.links | where {$_.innerHTML -like 'random*'} | select href
# Build the full download link for each file entry
$randomBinFiles.foreach( { $_.href = $url + "/" + $_.href })

# Download the files in the background ($_.href already holds the full URL)
$randomBinFiles.foreach({
    Start-BitsTransfer -Source $_.href -Asynchronous
})

# Close the transfers and convert from TMP to real file names
Get-BitsTransfer | Complete-BitsTransfer
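One caveat: Complete-BitsTransfer should only finalize jobs that have actually finished. If the downloads are large, you may want to wait until every asynchronous job reports Transferred first; here is a rough sketch:

# Wait for the BITS jobs to finish, then finalize them
while (Get-BitsTransfer | Where-Object { $_.JobState -in 'Queued','Connecting','Transferring' }) {
    Start-Sleep -Seconds 2
}
Get-BitsTransfer | Where-Object { $_.JobState -eq 'Transferred' } | Complete-BitsTransfer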

Conclusion

Well, as long as you have an exact source URL, downloading files with PowerShell is pretty easy. I can see where it would be very handy, especially on GitHub if you don’t have access to Visual Studio to merge or download something to your machine. If you’d like to see any additional examples, please leave a comment below!