Email serves as a fundamental communication tool, offering flexibility for handling emails and coordinating communication. Organizations often use email forwarding to avoid missing important conversations during users’ absence, to distribute workload, etc. This includes forwarding to external domains, which introduces potential risks. Unmonitored email forwarding can lead to data breaches, compliance issues, and other security concerns. This blog will guide you to identify and block external email forwarding, enhancing email monitoring and security.
Review and Block External Email Forwarding in Microsoft 365
External email forwarding can be identified by checking the mailbox’s forwarding configuration and inbox rules in the Exchange Admin Center. However, you need to navigate to each mailbox to verify all the external forwarding configuration details, which is difficult and time-consuming.
To streamline this process, we have crafted a PowerShell script that generates reports on the external email forwarding configuration for each mailbox and the inbox rules configured with external forwarding. Upon your confirmation, the external forwarding will be blocked, and the inbox rules will be disabled. Isn’t it outstanding? External forwarding disabled in one go, with no mailbox-by-mailbox navigation and no time wasted!
Note: By default, the script considers guest users, mail contacts, and mail users as external accounts only. You will have the option to exclude guest users before blocking external forwarding.
Script Highlights:
The script automatically verifies and installs the Exchange PowerShell module (if not installed already) upon your confirmation.
Exports the ‘External email forwarding report’ and ‘Inbox rules with external forwarding report’ into a CSV file.
Blocks external forwarding configuration for all mailboxes upon confirmation.
Disables all the inbox rules with external forwarding configuration upon confirmation.
Allows to verify external email forwarding for specific mailboxes and blocks them.
Allows users to modify the generated CSV report and provide it as input later to block the respective external forwarding configuration.
Provides a detailed log file after removing the external forwarding configuration and disabling the inbox rules with external forwarding.
The script can be executed with an MFA-enabled account too.
The script supports Certificate-based authentication (CBA).
Export External Email Forwarding Configuration Report
The script exports this report with the following attributes:
Display Name
User Principal Name
Forwarding Address
Forwarding SMTP Address
Deliver to Mailbox and Forward
Export Inbox Rules with External Email Forwarding Configuration Report
The script exports this report with the following attributes:
Mailbox Name
User Principal Name
Inbox Rule Name
Rule Identity
Forward To
Forward As Attachment To
Redirect To
Once the report is generated, you can enter ‘Y’ as the confirmation to block the external email forwarding and the inbox rules available in the output file, as shown below.
Once external forwarding is blocked, you will get a txt log file for both outputs as below.
Block External Email Forwarding in Microsoft 365 Using PowerShell – Script Execution Methods
Download the script.
Start the Windows PowerShell.
Select any of the methods provided to execute the script.
Method 1: You can run the script with MFA and non-MFA accounts.
./BlockExternalEmailForwarding.ps1
The above example lets you export the email forwarding configuration report and the inbox rules with external forwarding report into a CSV file.
Method 2: You can explicitly pass credentials (username and password) and execute the script.
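A hedged sketch of how such a call might look, assuming the script exposes UserName and Password parameters for passing credentials (the parameter names and values here are illustrative, not confirmed):
./BlockExternalEmailForwarding.ps1 -UserName admin@contoso.com -Password XXX   # parameter names and credentials are illustrative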
Identify and Block External Email Forwarding Configuration in Exchange Online
Utilize the PowerShell script to identify all the external forwarding configurations and block them for improved security. Explore more use cases you can achieve using the script below.
1. Report and Block all External Forwarding Configurations in Office 365
Identifying all the external email forwarding configurations, including inbox rules, for all mailboxes is essential to detect suspicious forwarding and prevent data loss. You can verify the exported report and decide whether to block all the external forwarding and disable all the inbox rules shown in the output, based on your requirements.
Run the below cmdlet to identify and block external email forwarding for all mailboxes.
./BlockExternalEmailForwarding.ps1
Admins get the two reports described above, along with the forwarding details. If you confirm blocking all the configurations, the script blocks every email forwarding configuration in the output and provides a log file with the blocked configuration details.
Note: If auto-email forwarding configuration is enabled, it might lead to severe security concerns if left unmonitored. You can block automatic email forwarding to external domains and protect your resources effectively.
2. Find and Restrict External Email Forwarding for Specific Users
If admins want to review the external forwarding configuration only for specific mailboxes, such as users working on a crucial project, they can prepare and include the CSV file with a list of required user addresses.
To find external email forwarding details for multiple users, you can include the CSV file path in the ‘MailboxNames’ parameter, as shown below.
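For instance, the invocation would look like this, following the parameter named above:
./BlockExternalEmailForwarding.ps1 -MailboxNames <file path>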
Replace the <file path> with the path of your created CSV file. You can also find external forwarding configurations in shared mailboxes and block them by including the desired mailbox address. Remember that the column name containing the users’ UPN should be ‘User Principal Name’, as shown in the sample input below.
Sample Input:
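An illustrative sample of the input CSV (replace the placeholder UPNs with your own users):
User Principal Name
john@contoso.com
lydia@contoso.com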
3. Find and Block External Email Forwarding Excluding Guest Users
Users can collaborate with external guest users for various purposes, such as project collaboration. So, forwarding emails to them is legitimate in these scenarios. If admins want to get the external forwarding configurations excluding the external guest users in their organization, they can use the ‘-ExcludeGuests’ parameter while running the script, as shown below.
./BlockExternalEmailForwarding.ps1 -ExcludeGuests
The above script execution returns the emails forwarded to external users other than the guest users in the organization. It helps to easily narrow down to the desired result and block suspicious forwarding settings.
Note: Remember that the script considers the below as guest users and excludes them from blocking.
External Guests – Users added as guests (having #EXT in their SMTP address and authenticating with your organization credentials) and users invited as guests (using their own credentials to log in) in the organization.
Internal Guests – Your organizational users whose user type has been changed to ‘Guest’ in Microsoft Entra ID.
4. Verify and Block External Email Forwarding Excluding Internal Guests
Before using B2B collaboration, organizations usually invite guest users and allow them to authenticate by setting internal credentials for them. Mail contacts and mail users are also considered internal guests. They are added for working on projects, tasks, and so on. Admins might want to exclude these internal guests from the external forwarding configuration report to drill down directly to the desired results.
To achieve this, you can run the script by adding the ‘-ExcludeInternalGuests’ parameter.
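Based on the parameter named above, the invocation looks like this:
./BlockExternalEmailForwarding.ps1 -ExcludeInternalGuests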
5. Block Specific External Email Forwarding Configurations Using an Edited CSV
As mentioned before, executing the script generates two reports: the external email forwarding configuration report and the inbox rules with external forwarding report. After the reports are created, the script prompts for confirmation to block all the email forwarding settings displayed. If an admin identifies forwarding settings that are legitimate and should not be blocked, they can choose ‘No’ when prompted.
Then, they should manually edit the output report by removing these legitimate entries from the output file. Once done, re-run the script with the ‘RemoveEmailForwardingFromCSV’ parameter.
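For example, the re-run would look like this, using the parameter mentioned above:
./BlockExternalEmailForwarding.ps1 -RemoveEmailForwardingFromCSV <file path>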
Replace the <file path> with the path of the edited output file. That way, instead of blocking all the forwarding settings, you block only the necessary configurations, excluding legitimate forwarding.
6. Disable Inbox Rules with External Forwarding Configurations
Like the above case, if admins don’t want to disable all the inbox rules shown in the output CSV file, they can edit the file by removing the inbox rules that shouldn’t be disabled in the organization. Then, run the script with the ‘-DisableInboxRuleFromCSV’ parameter as shown below.
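A sketch of that command:
./BlockExternalEmailForwarding.ps1 -DisableInboxRuleFromCSV <file path>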
Replace the <file path> with the edited output file path. After confirmation, all the inbox rules included in the CSV file will be disabled in your organization.
Efficiently Monitor External Email Forwarding in Exchange Online with AdminDroid
AdminDroid’s external forwarding configuration report lets you view all the external email forwarding configurations with precise details, such as the external domain, the user who configured forwarding, count of forwarding users, external recipients, and more.
It helps to easily get the list of external domains to which emails are forwarded in your organization. Thus, you can easily decide and block suspicious domains effectively.
Moreover, the ‘mailbox with external forwarding inbox rules’ report displays all the inbox rules configured with external forwarding for each mailbox separately. It contains additional details like mailbox name, mailbox UPN, external address configured for forwarding as attachment to, forward to, redirect to, inbox rule condition details, rule processing status, and more. Also, you can get mailbox permission changes, user sign-in stats, and other crucial details.
Also, you can audit configuration changes made on external forwarding rules using the ‘Inbox Rules Configuration Changes with External Forwarding’ report. It helps to precisely identify any suspicious changes made to external forwarding inbox rules and revert them.
I hope this blog helps you to identify the external forwarding configuration and block unwanted settings in your organization. Utilize the script to find suspicious forwarding easily. Drop your queries in the comment section. Happy securing!
Users are given access to files in the organization for various purposes, such as project collaboration, document sharing, and accessing necessary resources for their roles. Based on the permissions granted, users can perform actions on files and folders such as deletion, download, modification, and more. While it is essential to provide file access to users, monitoring their activities on the organization’s resources is crucial. Also, admins should track external users’ access to files and their activities to identify unusual behaviors and excessive privilege grants, thereby safeguarding data.
Audit File Activities in SharePoint Online and OneDrive
Monitoring users’ activities on files and folders in SharePoint Online and OneDrive can be done using Microsoft Purview audit log search and the ‘Search-UnifiedAuditLog’ PowerShell cmdlet. These native methods retrieve all file activities, including creation, modification, deletion, file access, permission changes, and more in SharePoint Online and OneDrive.
However, tweaking the results to meet your specific needs can be challenging, as you must navigate to each event to get more detailed information about the activity. To overcome these difficulties, we have crafted a PowerShell script that efficiently addresses all your specific requirements, saving you time and effort.
Script Highlights:
The script automatically verifies and installs the Exchange Online PowerShell module (if not installed already) upon your confirmation.
Exports the file & folder usage report for the past 180 days into a CSV file.
Allows to track file usage for a specific date range.
Retrieves file activity by a specific user in the organization.
Retrieves file activities by external or guest users.
Allows to get file activities in SharePoint Online and OneDrive separately.
Allows you to get weekly or monthly usage reports effortlessly.
The script can be executed with an MFA-enabled account too.
The script supports Certificate-based authentication (CBA).
Monitor File Activities in SharePoint Online and OneDrive
Utilize the PowerShell script to audit all the file & folder activities by users in SharePoint Online and OneDrive. Therefore, you can identify who accesses SPO files, unwanted actions performed on sensitive files, excessive privilege grants, unusual activities, and more, to secure the data efficiently. Explore the use cases you can attain using the script below.
1. Audit File Activities in SharePoint Online and OneDrive
Reviewing the file activities done by users in SharePoint Online and OneDrive is crucial to avoid data misuse and secure them. Run the script below to get a list of users’ file activities in Microsoft 365.
./AuditFileActivities.ps1
By referring to the exported report, admins can audit file downloads, modifications, uploads, deletions, etc., in both SharePoint and OneDrive.
2. Retrieve Users’ File Activities for a Specific Date Range in Microsoft 365
If admins want to monitor recent file activities of users in their organization, i.e., for a custom period, run the script using the ‘-StartDate’ and ‘-EndDate’ parameters as shown below.
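For example, to cover the date range described below:
./AuditFileActivities.ps1 -StartDate 07/20/2024 -EndDate 07/30/2024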
Remember that the date format should be mm/dd/yyyy. The above cmdlet returns the users’ file activities that happened from 20th July 2024 to 30th July 2024.
3. Track File Activities by a Specific Microsoft 365 User
If a user account is found to be compromised or any risky actions are detected, monitoring their file activities is essential to safeguard your sensitive data. To identify a specific user’s file activities in SPO and OneDrive, run the script with the ‘-PerformedBy’ parameter.
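For instance, assuming an illustrative UPN for the user Annie mentioned below:
./AuditFileActivities.ps1 -PerformedBy Annie@contoso.com   # the UPN is illustrative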
The above cmdlet retrieves all the file activities performed by Annie for the past 180 days. Similarly, for retrieving file activity for an external or guest user, replace the username with the respective external/guest username. Thus, you can get activities like files created by external users, file deletions, modifications, downloads, etc.
4. Track File Activities in SharePoint Online Using PowerShell
OneDrive files are users’ personal files in the organization. So, if admins want to focus only on SharePoint Online file activities, they can run the script with the ‘-SharePointOnline’ parameter.
./AuditFileActivities.ps1 -SharePointOnline
It displays a list of all the file activities performed in SharePoint Online alone. So, admins can easily audit file deletions, file moves, file downloads, etc., in SharePoint Online.
5. Retrieve File and Folder Activities in OneDrive
If any users are offboarded, admins can grant users’ OneDrive access to other users for backing up the crucial files related to any ongoing projects. Else, you can utilize Microsoft 365 backup for OneDrive accounts to retain the files. Users also share their files with others for review processes and various purposes. Admins might want to monitor file and folder activities performed on OneDrive to identify any suspicious actions. In such cases, they can run the script with the ‘-OneDrive’ parameter, as shown below.
./AuditFileActivities.ps1 -OneDrive
The above cmdlet lists all the file and folder activities performed in OneDrive. Admins can also identify if any user downloads or deletes any sensitive files before leaving the organization.
6. Audit File Usage Activity in SPO for Past 30 Days
If admins want to retrieve users’ file activities performed in SharePoint Online for the past 30 days, they can run the script with ‘-SharePointOnline’, ‘-StartDate’, and ‘-EndDate’ parameters as below.
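A sketch of that invocation, using the same Get-Date style the blog suggests for scheduling:
./AuditFileActivities.ps1 -SharePointOnline -StartDate ((Get-Date).AddDays(-30)) -EndDate (Get-Date)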
After running the above cmdlet, you will get a list of detailed file usage activities by SharePoint users (i.e., past 30 days).
7. Schedule a Weekly and Monthly Report on File Activities in Microsoft 365
Admins might want to verify the users’ file activities in SharePoint Online and OneDrive on a weekly or monthly basis. In such cases, run the script with ‘-StartDate’ and ‘-EndDate’ parameters.
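For example, a monthly run scheduled on the 1st of each month might look like this (a sketch following the Get-Date convention mentioned below):
./AuditFileActivities.ps1 -StartDate ((Get-Date).Date.AddMonths(-1)) -EndDate ((Get-Date).Date)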
The above format retrieves file and folder activities for a month. You can schedule this script to run on the 1st of every month so that the script will retrieve file and folder activities for every month efficiently. You can automate the script using Task Scheduler or Azure Automation, and every exported report will be saved in your system.
Similarly, the weekly report can be generated by modifying the StartDate as ‘(Get-Date).Date.AddDays(-7)’.
Monitor File Activities in SharePoint Online and OneDrive Effectively with AdminDroid
AdminDroid Microsoft 365 auditing tool offers intensive reports on users’ file and folder activities in SharePoint Online and OneDrive. The tool contains the below categories of reports to facilitate admins with in-depth details and appealing charts.
All File & Folder Activities
File Access & Modification Events
File Uploads & Downloads
File Rename & Restore Actions
Files & Folder deletion from SharePoint
Files & Folder deletion from First Stage & Second Stage Recycle Bin
All File Activities by Admins
Folder Creation & Modification Events
Folder Rename & Restore Actions
Sharing & Access Events
All File/Folder Sharing Activities
Anonymous Link Creation & Access Events
Anonymous User Activities
Files Shared by External Users
File/Folder Accesses by External Users
Similarly, you will get the file activities reports for OneDrive too. These well-structured, in-depth reports let you stay informed about crucial file activities and help you act accordingly. You can also get alerts or schedule the required reports based on your requirements.
Moreover, AdminDroid provides 1900+ reports, 30+ stunning dashboards, and additional interesting features that make your Microsoft 365 management effortless. Download AdminDroidtoday and start managing your Microsoft 365 environment like never before!
I hope this blog helps you to effectively audit users’ file activities and improve SharePoint Online security. Drop your queries in the comments section. Happy auditing!
I’ve been redoing our password expiration reminder script for my company, and due to some convoluted things it needs to do, I decided to invest some time learning some of the Advanced PowerShell Function options.
The new script has only a single line outside of functions, and by using the "process" block of an Advanced Function, I do all the iteration there instead of with foreach loops.
This ends with a nice single line that pipes the AD users that need to receive an email to the function that creates the object used by Send-MailMessage, then pipes that object onward and splats it into Send-MailMessage.
I can really encourage anyone writing scripts to take some time utilising this.
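For anyone curious, here is a rough, simplified sketch of the pattern (the function name, SMTP server, and property choices are made up for illustration, and the ActiveDirectory module is assumed to be available):
function Send-ExpiryReminder {
    [CmdletBinding()]
    param (
        # Accept AD user objects straight from the pipeline
        [Parameter(Mandatory, ValueFromPipeline)]
        $User
    )
    process {
        # Build the parameter set for Send-MailMessage and splat it
        $mailParams = @{
            To         = $User.EmailAddress
            From       = 'it@contoso.com'                # illustrative
            Subject    = 'Your password is about to expire'
            Body       = "Hi $($User.GivenName), please reset your password soon."
            SmtpServer = 'smtp.contoso.com'              # illustrative
        }
        Send-MailMessage @mailParams
    }
}

# The single line outside the functions: pipe the users straight through
Get-ADUser -Filter * -Properties EmailAddress, GivenName | Send-ExpiryReminder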
ROOL releases updated Git Beta
Up until now, I have regarded the Git client from ROOL as an exciting development, but not really that useful… All that has changed with the latest version!
The killer new feature is support for pull and push. Previously you could not easily reference your existing repos or export from your system. Now you can!
You also get a selection of useful bug fixes and features including help in the iconbar menu.
Git is still being tested (looks good here) but I am sure ROOL would also be happy to recruit additional testers if you are interested….
It’s time to design for repair
A conversation with Jude Pullen. Trying to repair almost anything can be a frustrating exercise. Repair is made more difficult by the way devices are designed, and the ability to repair a device could be improved greatly if different design decisions were made. This moment in time demands a new generation of designers, engineers and […]
Linux’s compgen command is not actually a Linux command. In other words, it’s not implemented as an executable file, but is instead a bash builtin. That means that it’s part of the bash executable. So, if you were to type “which compgen”, your shell would run through all of the locations included in your $PATH variable, but it just wouldn’t find it.
$ which compgen
/usr/bin/which: no compgen in (.:/home/shs/.local/bin:/home/shs/bin:/usr/local/bin:
/usr/bin:/usr/local/sbin:/usr/sbin)
Obviously, the which command had no luck in finding it.
If, on the other hand, you type “man compgen”, you’ll end up looking at the man page for the bash shell. From that man page, you can scroll down to this explanation if you’re patient enough to look for it.
The cloud is constantly evolving, making upskilling and reskilling an important time investment. During November, we are featuring no-cost, on-demand and live training videos and courses for both technical and non-technical roles. Spanning introductory to more advanced levels, training options cover three of our most popular topics: generative AI, the basics of Google Cloud, and Google Cloud certification preparation.
Take a look at the suggested training below to move forward in your learning journey. You can even earn learning credentials to share on your developer profile, resumé and LinkedIn profile. Keep reading to get started.
Understanding Google Cloud foundational content for any role
Here are some options for learners who are either technical practitioners, or in tech-adjacent roles, like HR, sales, and marketing that work with Google Cloud products, or with teams who do.
These foundational-level courses help you learn about Google Cloud technology and how it can benefit your organization. Made up of videos and quizzes, they can be completed during part of a morning or afternoon when you have a bit of extra time. Complete all four courses below to help you prepare for the Cloud Digital Leader certification.
We have a mix of options to help you learn about generative AI as this area of technology becomes more available.
Introduction to Generative AI – No technical background required. This learning path explains generative AI, large language models, responsible AI, and applying AI principles with Google Cloud.
Gen AI Bootcamp – This is a series of three sessions for developers who want to explore gen AI solutions. As the month goes on, the topics progress from introductory to advanced, and you can jump in at any time. The sessions are running as live events throughout November — register here to reserve your spot. They will also be available on-demand.
Prepare for Google Cloud Certification with no-cost resources
Earning a Google Cloud certification demonstrates your cloud knowledge to employers and validates your understanding of Google Cloud products and solutions. The skill set for each role-based certification is assessed using thorough industry-standard methods, and your achievement is recognized by passing a certification exam. Google Cloud certifications are among the highest paying IT certifications, and 87% of users feel more confident in their cloud skills when they are #GoogleCloudCertified.
Google Cloud certifications are offered for three levels: foundational (no-hands on experience required), associate (6+ months recommended of building on Google Cloud), and professional (3+ years of industry experience and 1+ year using Google Cloud recommended). They span roles like cloud architect, cloud engineer, and data engineer. Explore the full portfolio to find out which certification is right for you.
You’ll start working towards certification by utilizing training resources to help you prepare for the certification exam, and by getting hands-on experience where indicated. Some certifications also offer a no-cost course to help you prepare for the exam: learn about the domains covered in the exam, assess your exam readiness, and create a study plan. No-cost exam guides and sample questions are also available for all the certification exams. Here are the courses to learn more:
Another way to work towards certification is by checking out ‘Level up your cloud career with Google Cloud credentials and certifications’ on Innovators Live. We talked with Google Cloud Champion Innovators, who offer tips for the certification journey and share how they’ve approached their learning journey. Watch here.
Microsoft Teams is introducing Recording and Transcript APIs to enhance the meeting experience and provide valuable insights for developers and users.
Developers can use these APIs to quickly generate meeting summaries, including key points, action items, and questions, and even capture meeting highlights. This feature can ensure that important information from meetings is not lost and can be easily referenced later.
These APIs can help you understand how people in a meeting feel and how engaged they are. It’s like understanding whether they are happy, excited, or bored during the conversation. This information can help determine how well the meeting is going and how people react to the discussion.
By analyzing previous meeting content, these APIs enable the generation of insights for follow-up actions. For example, in a sales context, the API could suggest what topics or strategies to discuss in the next sales call based on the outcomes of previous meetings. In the case of interviews, it can provide insights for improving the interview process.
The pricing for the Microsoft Teams Recording and Transcript APIs, as of September 1, 2023, is as follows:
Recording API: This API is priced at $0.03 per minute. This pricing applies to fetching and managing meeting recordings programmatically using the API.
Transcription API: The Transcription API is priced at $0.024 per minute. Transcription involves converting spoken words in the meeting into text, making it searchable and accessible.
The pricing seems to be reasonable. These features ultimately aim to make meetings more productive by automating tasks like note-taking and action item tracking, allowing users to focus on collaboration and problem-solving.
Mountpoint for Amazon S3 is an open source file client that makes it easy for your file-aware Linux applications to connect directly to Amazon Simple Storage Service (Amazon S3) buckets. Announced earlier this year as an alpha release, it is now generally available and ready for production use on your large-scale read-heavy applications: data lakes, machine learning training, image rendering, autonomous vehicle simulation, ETL, and more. It supports file-based workloads that perform sequential and random reads, sequential (append only) writes, and that don’t need full POSIX semantics.
Why Files? Many AWS customers use the S3 APIs and the AWS SDKs to build applications that can list, access, and process the contents of an S3 bucket. However, many customers have existing applications, commands, tools, and workflows that know how to access files in UNIX style: reading directories, opening & reading existing files, and creating & writing new ones. These customers have asked us for an official, enterprise-ready client that supports performant access to S3 at scale. After speaking with these customers and asking lots of questions, we learned that performance and stability were their primary concerns, and that POSIX compliance was not a necessity.
When I first wrote about Amazon S3 back in 2006 I was very clear that it was intended to be used as an object store, not as a file system. While you would not want to use the Mountpoint / S3 combo to store your Git repositories or the like, using it in conjunction with tools that can read and write files, while taking advantage of S3’s scale and durability, makes sense in many situations.
All About Mountpoint Mountpoint is conceptually very simple. You create a mount point and mount an Amazon S3 bucket (or a path within a bucket) at the mount point, and then access the bucket using shell commands (ls, cat, dd, find, and so forth), library functions (open, close, read, write, creat, opendir, and so forth) or equivalent commands and functions as supported in the tools and languages that you already use.
Under the covers, the Linux Virtual Filesystem (VFS) translates these operations into calls to Mountpoint, which in turn translates them into calls to S3: LIST, GET, PUT, and so forth. Mountpoint strives to make good use of network bandwidth, increasing throughput and allowing you to reduce your compute costs by getting more work done in less time.
Installing and Using Mountpoint for Amazon S3 Mountpoint is available in RPM format and can easily be installed on an EC2 instance running Amazon Linux. I simply fetch the RPM and install it using yum:
For the last couple of years I have been regularly fetching images from several of the Washington State Ferry webcams and storing them in my wsdot-ferry bucket:
I collect these images in order to track the comings and goings of the ferries, with a goal of analyzing them at some point to find the best times to ride. My goal today is to create a movie that combines an entire day’s worth of images into a nice time lapse. I start by creating a mount point and mounting the bucket:
I can traverse the mount point and inspect the bucket:
$ cd wsdot-ferry
$ ls -l | head -10
total 0
drwxr-xr-x 2 jeff jeff 0 Aug 7 23:07 2020_12_30
drwxr-xr-x 2 jeff jeff 0 Aug 7 23:07 2020_12_31
drwxr-xr-x 2 jeff jeff 0 Aug 7 23:07 2021_01_01
drwxr-xr-x 2 jeff jeff 0 Aug 7 23:07 2021_01_02
drwxr-xr-x 2 jeff jeff 0 Aug 7 23:07 2021_01_03
drwxr-xr-x 2 jeff jeff 0 Aug 7 23:07 2021_01_04
drwxr-xr-x 2 jeff jeff 0 Aug 7 23:07 2021_01_05
drwxr-xr-x 2 jeff jeff 0 Aug 7 23:07 2021_01_06
drwxr-xr-x 2 jeff jeff 0 Aug 7 23:07 2021_01_07
$
$ cd 2020_12_30
$ ls -l
total 0
drwxr-xr-x 2 jeff jeff 0 Aug 7 23:07 fauntleroy_holding
drwxr-xr-x 2 jeff jeff 0 Aug 7 23:07 fauntleroy_way
drwxr-xr-x 2 jeff jeff 0 Aug 7 23:07 lincoln
drwxr-xr-x 2 jeff jeff 0 Aug 7 23:07 trenton
drwxr-xr-x 2 jeff jeff 0 Aug 7 23:07 vashon_112_north
drwxr-xr-x 2 jeff jeff 0 Aug 7 23:07 vashon_112_south
drwxr-xr-x 2 jeff jeff 0 Aug 7 23:07 vashon_bunker_north
drwxr-xr-x 2 jeff jeff 0 Aug 7 23:07 vashon_bunker_south
drwxr-xr-x 2 jeff jeff 0 Aug 7 23:07 vashon_holding
$
$ cd fauntleroy_holding
$ ls -l | head -10
total 2680
-rw-r--r-- 1 jeff jeff 19337 Feb 10 2021 17-12-01.jpg
-rw-r--r-- 1 jeff jeff 19380 Feb 10 2021 17-15-01.jpg
-rw-r--r-- 1 jeff jeff 19080 Feb 10 2021 17-18-01.jpg
-rw-r--r-- 1 jeff jeff 17700 Feb 10 2021 17-21-01.jpg
-rw-r--r-- 1 jeff jeff 17016 Feb 10 2021 17-24-01.jpg
-rw-r--r-- 1 jeff jeff 16638 Feb 10 2021 17-27-01.jpg
-rw-r--r-- 1 jeff jeff 16713 Feb 10 2021 17-30-01.jpg
-rw-r--r-- 1 jeff jeff 16647 Feb 10 2021 17-33-02.jpg
-rw-r--r-- 1 jeff jeff 16750 Feb 10 2021 17-36-01.jpg
$
As you can see, I used Mountpoint to access the existing image files and to write the newly created animation back to S3. While this is a fairly simple demo, it does show how you can use your existing tools and skills to process objects in an S3 bucket. Given that I have collected several million images over the years, being able to process them without explicitly syncing them to my local file system is a big win.
Mountpoint for Amazon S3 Facts Here are a couple of things to keep in mind when using Mountpoint:
Pricing – There are no new charges for the use of Mountpoint; you pay only for the underlying S3 operations. You can also use Mountpoint to access requester-pays buckets.
Performance – Mountpoint is able to take advantage of the elastic throughput offered by S3, including data transfer at up to 100 Gb/second between each EC2 instance and S3.
Credentials – Mountpoint accesses your S3 buckets using the AWS credentials that are in effect when you mount the bucket. See the CONFIGURATION doc for more information on credentials, bucket configuration, use of requester pays, some tips for the use of S3 Object Lambda, and more.
Operations & Semantics – Mountpoint supports basic file operations, and can read files up to 5 TB in size. It can list and read existing files, and it can create new ones. It cannot modify existing files or delete directories, and it does not support symbolic links or file locking (if you need POSIX semantics, take a look at Amazon FSx for Lustre). For more information about the supported operations and their interpretation, read the SEMANTICS document.
Storage Classes – You can use Mountpoint to access S3 objects in all storage classes except S3 Glacier Flexible Retrieval, S3 Glacier Deep Archive, S3 Intelligent-Tiering Archive Access Tier, and S3 Intelligent-Tiering Deep Archive Access Tier.
Open Source – Mountpoint is open source and has a public roadmap. Your contributions are welcome; be sure to read our Contributing Guidelines and our Code of Conduct first.
Hop On As you can see, Mountpoint is really cool and I am guessing that you are going to find some awesome ways to put it to use in your applications. Check it out and let me know what you think!
In a move that promises to simplify the way we use computers, Microsoft has unveiled a new feature in conjunction with its Windows 365 service – the Windows 365 Switch. This innovative feature is now available for public preview today.
Introducing Windows 365 Switch
Windows 365 Switch offers fluid transitions between a Windows 365 Cloud PC and the local desktop. It utilizes the same keyboard commands that users are accustomed to, and the transition can be done with a simple mouse click or swipe gesture.
Advantages for BYOD Users
This marks a significant advantage for those in BYOD (bring your own device) scenarios. The ability to easily switch from a personal device to a secure, company-owned cloud PC offers flexibility, security, and peace of mind. It eliminates the fear of a lost or stolen device compromising company data.
Prerequisites for Windows 365 Switch
Microsoft has laid out specific criteria for utilizing the Windows 365 Switch:
A Windows 11-based endpoint (Windows 11 Pro and Enterprise are currently supported)
Enrollment in the Windows Insider Program (Beta Channel)
A Windows 365 Cloud PC license
Once these prerequisites are met, users can download the Windows 365 App, version 1.3.177.0 or newer, from the Microsoft Store. For convenience, IT admins can deploy the app for end users via Microsoft Intune.
The Switch Experience
After installing the Windows 365 App, users can expect a short wait, typically a few hours, for the switch feature to become fully enabled. Subsequently, the switch can be invoked either via the Task View feature adjacent to the Search button on the Windows 11 taskbar or via the Windows 365 app.
This significant step in the evolution of desktop computing brings the industry closer to a reality where the boundaries between local and cloud computing become blurred. As we eagerly anticipate what more Microsoft has in store for Windows 365, the imminent official release of the Windows 365 Switch is a clear leap in that direction.
Microsoft has released a new zoom controls feature in preview for Microsoft Teams. This update allows participants to zoom in and out while viewing content on a shared screen in Teams meetings and calls.
Up until now, Microsoft Teams only allowed meeting attendees to use the pinch-to-zoom gesture on trackpads or other shortcuts to view content such as Excel spreadsheets or PowerPoint presentations. The new zoom controls should be a welcome addition for people with low vision or visual impairment.
“Users in a Teams call or meeting will now see new buttons to zoom in, zoom out and restore the original size of the incoming screen share. This will greatly enhance the experience of users viewing screen share,” the Office Insider team explained.
To try out zoom controls, IT admins will have to sign up for the Microsoft Teams public preview program. They will need to configure an update policy in the Microsoft Teams admin center. However, keep in mind that meeting participants will not be able to view zoom controls while using the watermarking feature during Teams meetings.
Microsoft Teams zoom controls available for desktop and web users
As of this writing, the feature is only available in the Microsoft Teams app for Windows, macOS, and web app. It remains to be seen if Microsoft plans to add zoom controls to the Teams mobile clients.
In related news, Microsoft is getting ready to make the new Teams 2.0 client the default experience on Windows later this year. Last week, Microsoft Product Lead for Teams 2.0 Anupam Pattnaik confirmed in the first episode of Petri’s UnplugIT podcast that the app is also coming in preview to macOS, the web, and other platforms later this year.
Microsoft Teams 2.0 debuted in public preview on Windows back in March 2023. The app has been rebuilt from the ground up to improve performance and reduce power consumption on Windows devices.
Firewalls are a critical component of your security architecture. With the increased migration of workloads to cloud environments, more companies are turning to cloud-first solutions for their network security needs.
Google Cloud Firewall is a scalable, cloud-first service with advanced protection capabilities that helps enhance and simplify your network security posture. Google Cloud Firewall’s fully distributed architecture automatically applies pervasive policy coverage to workloads wherever they are deployed in Google Cloud. Stateful inspection enforcement of firewall policies occurs at each virtual machine (VM) instance.
Cloud Firewall offers the following benefits:
Built-in scalability: With Cloud Firewall, the firewall policy accompanies each workload as part of the forwarding fabric, which enables the service to scale intrinsically. This can relieve customers of the operational burden to spend time and resources to help ensure scalability.
Availability: Cloud Firewall policies automatically apply to workloads wherever they are instantiated in the Google Cloud environment. The fully distributed architecture can allow for precise rule enforcement, even down to a single VM interface.
Simplified management: Cloud Firewall security policies for each workload are independent of the network architecture, subnets and routing configuration. The context-aware and dynamically updating objects for firewall rules enable simplified configuration, deployment and ongoing maintenance.
How to migrate from on-prem to Cloud Firewall
Most on-premises firewall appliances, either virtual or physical, are deployed in one of two modes:
Zone-based that creates trusted and untrusted zones to apply firewall policies; or
Access Control Lists (ACL) applied to an interface.
In both cases, the firewall’s primary purpose is to protect one perimeter or network segment from another. For example, you may use a zone based firewall to filter traffic from an “untrusted” to a “trusted” zone. Similarly, you may have an ACL-based firewall to protect an “inside” network segment from an “outside” network segment.
However, that strategy is not the best approach with Google Cloud Firewall policies and rules. Cloud Firewall is not designed to act as a perimeter device; rather, Cloud Firewall is a fully distributed set of rules to help protect individual resources, such as VMs. However, most of our customers want to replicate their on-prem firewall logic and apply it to their cloud environment. Take the following example:
There are a lot of similar components shared between on-prem firewall appliance rules and Cloud Firewall rules. However, some critical differences between them can make a migration from firewall appliances to Cloud Firewalls a challenging task, for example:
Traditional firewalls protect a perimeter. In Google Cloud, firewall rules protect resources. This is done through the concept of “targets,” which specify which resources a given firewall rule applies to.
There are multiple types of firewall options available in Google Cloud (hierarchical firewall policies, global/regional firewall policies, and Virtual Private Cloud (VPC) firewall rules). Deciding which type of rules to use, and how to configure the rules with your cloud network architecture requires review and planning.
Furthermore, there are some additional firewall rules that may be needed in a cloud environment when compared to an on-prem firewall. For example, you may need to create ingress firewall rules to allow Google Cloud health check traffic to load balancer backends or you may need to create an egress rule to allow VMs access to use the Google Cloud APIs. Further, on-prem firewalls often have additional functions in on-prem networks including routing, NATing, VPN termination, and in some cases, Layer 7 inspection.
To assist customers with the migration from on-prem firewall appliances to Cloud services, including Cloud Firewall, we have developed a best practice guide that includes design and architecture considerations, and a side-by-side comparison of on-prem to Cloud Firewall rules. Check out the guide here for more information.
In this article, I’ll explain how to use the PowerShell Where-Object cmdlet to filter objects and data. I’ll provide a series of easy examples showing you how to filter files by name or date, how to filter processes by status or CPU usage, and more.
When using PowerShell, you will often receive an extremely large amount of data when querying your environment. For example, if you run the Get-AzureADUser cmdlet against an Azure Active Directory database with 100,000 users, you will get…well, 100,000 results. That may take some time to output to your console!
Normally you won’t need to get all that information. The Where-Object cmdlet is an extremely helpful tool that will allow you to filter your results to pinpoint exactly the information you’re looking for.
What is the PowerShell Where-Object command?
PowerShell Where-Object is by far the most often-used tool for filtering data. Mostly due to its power and, at the same time, simplicity. It selects objects from a collection based on their property values.
There are other cmdlets that allow you to filter data. The Select-Object cmdlet selects objects (!) or object properties. Select-String finds text in strings and files. They both are valuable and have their niche in your tool belt.
Here are some brief examples for you. Select-Object commands help in pinpointing specific pieces of information. This example returns objects that have the Name, ID, and working set (WS) properties of process objects.
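For instance, a minimal version of that example:
Get-Process | Select-Object -Property Name, Id, WS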
How to filter an array of objects with PowerShell Where-Object
The task at hand is filtering a large pool of data into something more manageable. Thankfully, there are several methods we have to filter said data. Starting with PowerShell version 3.0, we can use script blocks and comparison operators, the latter being the more recent addition and the ‘preferred’ method.
With the Where-Object cmdlet, you’re constructing a condition that returns True or False. Depending on the result, it returns the pertinent output, or not.
Building filters with script blocks
Using script blocks in PowerShell goes back to the beginning. These components are used in countless places. Script blocks allow you to separate code via a filter and execute it in various places.
To use a script block as a filter, you use the FilterScript parameter. I’ll show you an example shortly. If the script block returns a value other than False (or null), it will be considered True. If not, False.
Let’s show this via an example: You have been assigned a task from your manager to determine all the services on a computer that are set to Disabled.
We will first gather all the services with the Get-Service cmdlet. This pulls all the attributes of all the services on the computer in question. Using the PowerShell pipeline, we can then ‘pipe’ the gathered results using the FilterScript parameter. We can use the script block below to find all the services set to Disabled.
{$_.StartType -eq 'Disabled'}
First off, if we just use the Get-Service cmdlet, we get the full list of services. And there were quite a few more screens of services beyond the image below.
Not exactly what we’re looking for. Once we have the script block, we pass it right on to the FilterScript parameter.
We can see this all come to fruition with this example. We are using the Get-Service cmdlet to gather all the disabled services on our computer.
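Putting the pieces together, the full command looks like this:
Get-Service | Where-Object -FilterScript {$_.StartType -eq 'Disabled'}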
There we go. Now, we have the 15 services that are set to Disabled, satisfying our request.
Filtering objects with comparison operators
The issue with the prior method is it makes the code more difficult to understand. It’s not the easiest syntax for beginners to get ramped up with PowerShell. Because of this ‘learning curve’ issue, the engineers behind PowerShell produced comparison statements.
These have more of a flow with them. We can produce some more elegant, efficient, and ‘easier-to-read’ code using our prior example.
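For example, the same filter expressed with the comparison-statement syntax:
Get-Service | Where-Object -Property StartType -EQ 'Disabled'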
See? A little more elegant and easier to read. Using the Property parameter and the eq operator as a parameter allows us to also pass the value of Disabled to it. This eliminates our need to use the script block completely!
Containment operators
Containment operators are useful when working with collections. These allow you to define a condition. There are several examples of containment operators we can use. Here are a few:
-contains – Filter a collection containing a property value.
-notcontains – Filter a collection that does not contain a property value.
-in – Value is in a collection, returns property value if a match is found.
For case sensitivity, you can append ‘c’ at the beginning of the commands. For example, ‘-ccontains’ is the case-sensitive command for filtering a collection containing a property value.
Equality operators
There are a good number of equality operators. Here are a few:
-eq / -ceq – Value equal to specified value / case-sensitive option.
-le – value less than or equal to specified value.
Matching operators
We also have matching operators to use. These allow you to match strings inside of other strings, so that ‘Windows World Wide’ -like ‘*World*’ returns a True output.
You use these just like when using containment operators.
Can you use multiple filter conditions with both methods?
Come to think of it, yes, you certainly can use both methods in your scripts. Even though comparison operators are more modern, there are times when working with more complex filtering requirements will dictate you to use script blocks. You’ll be able to find the balance yourself as you learn and become more proficient with your scripts.
Filtering with PowerShell Where-Object: Easy Examples
Let’s go through some simple examples of using the Where-Object cmdlet to determine pieces of information. Eventually, we’ll be able to accomplish tasks with ease.
Filtering files by name
We can certainly filter a directory of files that match specific criteria. We can use the Get-ChildItem cmdlet to first gather the list of files in my Downloads folder. Then, I use the Where-Object cmdlet with the ‘BaseName‘ property to find all files that have ‘Mail’ in the filenames.
We can also use wildcard characters here. Let’s give it a whirl:
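For example (the Downloads path is illustrative):
Get-ChildItem -Path "$HOME\Downloads" | Where-Object {$_.BaseName -like '*Mail*'}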
Piece of cake. So, imagine a scenario where you have a folder with 25,000 files in it, and all the filenames are just strings of alphanumeric characters. Being able to quickly find the file(s) with an exact character match is ideal and a HUGE timesaver!
Filtering files by date
We can use the same commands, Get-ChildItem and Where-Object, to find files based on dates, too. Let’s say we want to find all files that were created or updated in the last week. Let’s do this!
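One way to do it (again, the folder path is illustrative):
Get-ChildItem -Path "$HOME\Downloads" | Where-Object {$_.LastWriteTime -ge (Get-Date).AddDays(-7)}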
We are using the LastWriteTime property together with the Get-Date cmdlet and its AddDays method to make this work. It works wonderfully.
Filtering processes by name
Because it is SO much fun working with Windows Services, let’s continue in this lovely realm. We are trying to determine the name of the ‘WWW’ service. We can use the ‘Property‘ parameter again.
Get-Service | Where-Object -Property Name -Contains 'W3SVC'
Filtering processes by status
There are several properties with each service, so we can also use a containment operator to gather a list of all services that are in a Running state.
Get-Service | Where-Object -Property Status -Contains 'Running'
Filtering processes by name and status
Remember what I said about script blocks? Let’s use one here to filter by name and status. We will get all the services that are running but also have a StartType property set to Manual. Here we go!
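A sketch of that script block in action:
Get-Service | Where-Object {($_.Status -eq 'Running') -and ($_.StartType -eq 'Manual')}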
You can also use equality operators with Where-Object to compare values. Here, we’ll use an operator and the Get-Process command to filter all running processes on our computer based on CPU usage.
Let’s use a script block to find all processes that are using between 4 and 8 percent of the CPU.
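A sketch of that filter; note that Get-Process exposes CPU as cumulative processor seconds rather than a live percentage, so the 4 and 8 thresholds are illustrative:
Get-Process | Where-Object {($_.CPU -gt 4) -and ($_.CPU -lt 8)}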
Our cmdlet also lets you use logical operators to link together multiple expressions. You can evaluate multiple conditions in one script block. Here are some examples.
-and – The script block evaluates to True if both expressions are logically evaluated as True
-or – The block evaluates to True when one of the expressions on either side are True
-xor – The script block evaluates to True when one of the expressions is True and the other is False.
-not or ‘!’ – Negates the script element following it.
Let me show you an example that illustrates this concept.
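For instance, combining two conditions to find services that are set to Automatic but not currently running (an illustrative example):
Get-Service | Where-Object {($_.StartType -eq 'Automatic') -and ($_.Status -ne 'Running')}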
Finding files of a specific type with a specific size
You’ve already seen a few examples of the ‘-Filter‘ parameter above. This is the main example of using filter parameters in your commands and scripts. It lets you home in on the precise data you’re looking for. Let me show you an example.
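A sketch of the command described below (the Documents path is illustrative):
Get-ChildItem -Path "$HOME\Documents" -Filter *.pdf | Where-Object {$_.Length -ge 150KB}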
This command filters out all the files in the folder for PDF files. It then pipes that to the Where-Object cmdlet, which will further narrow the list down to PDF files that are 150K or larger. Very useful!
Conclusion
The ‘Where-Object’ cmdlet is very powerful in helping you quickly pinpoint exactly the data points you are looking for. Being able to check all Services that are set to Automatic yet are not Running can be extremely helpful during troubleshooting episodes. And using it to find errant, high-CPU processes in a programmatic way can also help you with scripting these types of needs.
If you have any comments or questions, please let me know down below. Thank you for reading!
We recently released our largest update to Chocolatey Central Management so far. Join Gary to find out more about Chocolatey Central Management and the new features and fixes we’ve added to this release.
Chocolatey Central Management provides real-time insights, and deployments of both Chocolatey packages and PowerShell code, to your client and server devices.
Industry after industry is being transformed by software. It started with industries such as music, film and finance, whose assets lent themselves to being easily digitized. Fast forward to today, and we see a push to transform industries that have more physical hardware and require more human interaction, for example healthcare, agriculture and freight. It’s harder to digitize these industries – but it’s arguably more important. At Einride, we’re doing just that.
Our mission is to make Earth a better place through intelligent movement, building a global autonomous and electric freight network that has zero dependence on fossil fuel. A big part of this is Einride Saga, the software platform that we’ve built on Google Cloud. But transforming the freight industry is a formidable technical task that goes far beyond software. Still, observing the software transformations of other industries has shown us a powerful way forward.
So, what lessons have we learned from observing the industries that led the charge?
Lessons from re-architecting software systems
Most of today’s successful software platforms started in co-located data centers, eventually moving into the public cloud, where engineers could focus more on product and less on compute infrastructure. Shifting to the cloud was done using a lift-and-shift approach: one-to-one replacements of machines in datacenters with VMs in the cloud. This way, the systems didn’t require re-architecting, but it was also incredibly inefficient and wasteful. Applications running on dedicated VMs often had, at best, 20% utilization. The other 80% was wasted energy and resources. Since then, we’ve learned that there are better ways to do it.
Just as the advent of shipping containers opened up the entire planet for trade by simplifying and standardizing shipping cargo, containers have simplified and standardized shipping software. With containers, we can leave management of VMs to container orchestration systems like Kubernetes, an incredibly powerful tool that can manage any containerized application. But that power comes at the cost of complexity, often requiring dedicated infrastructure teams to manage clusters and reduce cognitive load for developers. That is a barrier of entry to new tech companies starting up in new industries — and that is where serverless comes in. Serverless offerings like Cloud Run abstract away cluster management and make building scalable systems simple for startups and established tech companies alike.
Serverless isn’t a fit for all applications, of course. While almost any application can be containerized, not all applications can make use of serverless. It’s an architecture paradigm that must be considered from the start. Chances are, an application designed with a VM-focused mindset won’t be fully stateless, and this prevents it from successfully running on a serverless platform. Adopting a serverless paradigm for an existing system can be challenging and will often require redesign.
Even so, the lessons from industries that digitized early are many: by abstracting away resource management, we can achieve higher utilization and more efficient systems. When resource management is centralized, we can apply algorithms like bin packing, and we can ensure that our workloads are efficiently allocated and dynamically re-allocated to keep our systems running optimally. With centralization comes added complexity, and the serverless paradigm enables us to shift complexity away from developers, as well as from entire companies.
Opportunities in re-architecting freight systems
At Einride, we have taken the lessons from software architecture and applied them to how we architect our freight systems. For example, the now familiar “lift-and-shift” approach is frequently applied in the industry for the deployment of electric trucks – but attempts at one-to-one replacements of diesel trucks lead to massive underutilization.
With our software platform, Einride Saga, we address underutilization by applying serverless patterns to freight, abstracting away complexity from end-customers and centralizing management of resources using algorithms. With this approach, we have been able to achieve near-optimal utilization of the electric trucks, chargers and trailers that we manage.
But to get these benefits, transport networks need to be re-architected. Flows in the network need to be reworked to support electric hardware and more dynamic planning, meaning that shippers will need to focus more on specifying demand and constraints, and less on planning out each shipment by themselves.
We have also found patterns in the freight industry that influence how we build our software. Managing electric trucks has made us aware of the differences in availability of clean energy across the globe, because – much like electric trucks – Einride Saga relies on clean energy to operate in a sustainable way. With Google Cloud, we can run the platform on renewable energy, worldwide.
The core concepts of serverless architecture — raising the abstraction level, and centralizing resource management — have the potential to revolutionize the freight industry. Einride’s success has sprung from an ability to realize ideas and then quickly bring them to market. Speed is everything, and the Saga platform – created without legacy in Google Cloud – has enabled us to design from the ground up and leverage the benefits of serverless.
Advantages of a serverless architecture
Einride’s architecture supports a company that combines multiple groundbreaking technologies — digital, electric and autonomous — into a transformational end-to-end freight service. The company culture is built on transparency and inclusivity, with digital communication and collaboration enabled by the Google Workspace suite. The technology culture promotes shared mastery of a few strategically selected technologies, enabling developers to move seamlessly up and down the tech stack — from autonomous vehicle to cloud platform.
If a modern autonomous vehicle is a data center on wheels, then Go and gRPC are fuels that make our vehicle services and cloud services run. We initially started building our cloud services in GKE, but when Google Cloud announced gRPC support for Cloud Run (in September 2019), we immediately saw the potential to simplify our deployment setup, spend less time on cluster management, and increase the scalability of our services. At the time, we were still very much in startup mode, making Cloud Run’s lower operating costs a welcome bonus. When we migrated from GKE to Cloud Run and shut down our Kubernetes clusters, we even got a phone call from our reseller who noticed that our total spend had dropped dramatically. That’s when we knew we had stumbled on game-changing technology!
In Identity Platform, we found the building blocks we needed for our Customer Identity and Access Management system. The seamless integration with Cloud Endpoints and ESPv2 enabled us to deploy serverless API gateways that took care of end-user authentication and provided transcoding from HTTP to gRPC. This enabled us to get the performance and security benefits of using gRPC in our backends, while keeping things simple with a standard HTTP stack in our frontends.
For CI/CD, we adopted Cloud Build, which gave all our developers access to powerful build infrastructure without having to maintain our own build servers. With Go as our language for backend services, ko was an obvious choice for packaging our services into containers. We have found this to be an excellent tool for achieving both high security and performance, providing fast builds of distro-less containers with an SBOM generated by default.
One of our challenges to date has been to provide seamless and fully integrated operations tooling for our SREs. At Einride, we apply the SRE-without-SRE approach: engineers who develop a service also operate it. When you wake up in the middle of the night to handle an alert, you need the best possible tooling available to diagnose the problem. That’s why we decided to leverage the full Cloud Operations suite, giving our SREs access to logging, monitoring, tracing, and even application profiling. The challenge has been to build this into each and every backend service in a consistent way. For that, we developed the Cloud Runner SDK for Go – a library that automatically configures the integrations and even fills in some of the gaps in the default Cloud Run monitoring, ensuring we have all four golden signals available for gRPC services.
For storage, we found that the Go library ecosystem around Cloud Spanner provided us with the best end-to-end development experience. We chose Spanner for its ease of use and low management overhead – including managed backups, which we were able to automate with relative ease using Cloud Scheduler. Building our applications on top of Spanner has provided high availability for our applications, as well as high trust for our customers and investors.
Using protocol buffers to create schemas for our data has allowed us to build a data lake on top of BigQuery, since our raw data is strongly typed. We even developed an open-source library to simplify storing and loading protocol buffers in BigQuery. To populate our data lake, we stream data from our applications and trucks via Pub/Sub. In most cases, we have been able to keep our ELT pipelines simple by loading data through stateless event handlers on Cloud Run.
The list of serverless technologies we’ve leveraged at Einride goes on, and keeping track of them is a challenge of its own – especially for new developers joining the team who don’t have the historical context of technologies we’ve already assessed. We built our tech radar tool to curate and document how we develop our backend services, and perform regular reviews to ensure we stay on top of new technologies and updated features.
But the journey is far from over. We are constantly evolving our tech stack and experimenting with new technologies on our tech radar. Our future goals include increasing our software supply chain security and building a fully serverless data mesh. We are currently investigating how to leverage ko and Cloud Build to achieve SLSA level 2 assurance in our build pipelines and how to incorporate Dataplex in our serverless data mesh.
A freight industry reimagined with serverless
For Einride, being at the cutting edge of adopting new serverless technologies has paid off. It’s what’s enabled us to grow from a startup to a company scaling globally without any investment into building our own infrastructure teams.
Industry after industry is being transformed by software, including complex industries that have more physical hardware and require more human interaction. To succeed, we must learn from the industries that came before us, recognize the patterns, and apply the most successful solutions.
In our case, it has been possible not just by building our own platform with a serverless architecture, but also by taking the core ideas of serverless and applying them to the freight industry as a whole.
Photoshop is one of the top graphics applications on the market, with surprising capabilities that professionals and hobbyists enjoy. Although Photoshop is primarily a raster editor, it can also handle a certain amount of vector work, so you can convert a low-resolution logo into a high-resolution vector graphic in Photoshop. […]
Microsoft Teams is getting a new update that will enable IT admins to deploy and manage teams at scale. Microsoft has announced in a message on the Microsoft 365 admin center that administrators will be able to create up to 500 teams with built-in or custom templates via a Teams PowerShell cmdlet.
Specifically, Microsoft Teams will allow IT Pros to add up to 25 users to teams as members or owners. The upcoming update will also make it possible to add or remove members from existing teams. Moreover, admins will be able to send email notifications about the deployment status of each batch to up to 5 people.
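For context, bulk team creation can already be scripted with the existing MicrosoftTeams PowerShell module. The sketch below is only an illustration of that approach, not the upcoming admin-center feature itself; the CSV path and column names (TeamName, Description, OwnerUpn) are placeholder assumptions.
#Requires the MicrosoftTeams module: Install-Module MicrosoftTeams
Connect-MicrosoftTeams
#Hypothetical input file with columns: TeamName, Description, OwnerUpn
$Teams = Import-Csv "C:\Temp\TeamsToCreate.csv"
foreach($Row in $Teams)
{
    #Create the team and add the listed owner
    $NewTeam = New-Team -DisplayName $Row.TeamName -Description $Row.Description -Visibility Private
    Add-TeamUser -GroupId $NewTeam.GroupId -User $Row.OwnerUpn -Role Owner
}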
Microsoft Teams’ new feature will make team management easier for IT admins
According to Microsoft, the ability to create and manage large numbers of teams at a time should significantly reduce deployment time. It should also make it easier for organizations to meet their specific scalability needs.
“Your organization may have a lot of teams that you use to drive communication and collaboration among your frontline workforce, who are spread across different stores, locations, and roles. Currently, there isn’t an easy solution to deploy, set up, and manage these teams and users at scale,” the company explained on the Microsoft 365 admin center.
Microsoft notes that this feature is currently under development, and it will become available for Microsoft Teams users in preview by mid-September. However, keep in mind that the timeline is subject to change.
Microsoft is also introducing a feature that will let users start group chats with the members of a distribution list, mail-enabled security group, or Microsoft 365 groups in Teams. Microsoft believes that this release will help to improve communication and boost the workflow efficiency of employees. You can check out our previous post for more details.
RISC OS Developments have been working away on their new TCP/IP stack for some time now and it is available to download from their website. So it seemed high time for TIB to wander over and have a look.
Installing the software
The software is available as a zip download.
I would recommend reading the !!Read_Me_First text file (which also tells you how to remove the software from your system). The Reporting Document tells you how to report any bugs you might find. Features gives you a nice overview and a clear idea of the objectives of this software.
When you are ready to try it, double-click on !Install, follow the prompts, and reboot your machine.
In use
The first indication that things have changed is that the Interfaces menu offers new options compared to before.
You will also find that it has thoughtfully backed up your old version, just in case…
First impressions
I do not have an IPv6 setup, so my main interest was in updating my existing configuration (and being generally nosy). For IPv4, this is a drop-in replacement: everything works as before, and it feels subjectively faster. Like all the best updates, it is very boring (it just works). RISC OS Developments have done an excellent job of making it all painless. While the software is still technically in beta, I have no issues running it on my main RISC OS machine.
What is really exciting is the potential this software opens up: a maintained and modern TCP/IP stack with support for modern protocols, IPv6, and proper Wi-Fi support.
As per MC407050, Microsoft is going to retire the “Connect to Exchange Online PowerShell with MFA module” (i.e., the EXO V1 module) on Dec 31, 2022, and support ends on Aug 31, 2022. So, admins should move to the EXO V2 module to connect to Exchange Online PowerShell with multi-factor authentication.
Why Should We Switch from the EXO V1 Module?
Until now, admins have had to install the Exchange Online Remote PowerShell module and use the Connect-EXOPSSession cmdlet to connect to Exchange Online PowerShell with MFA. That module relies on WinRM basic authentication for the remote session. With basic authentication being deprecated, Microsoft has introduced the EXO V2 module, which offers improved security and faster data retrieval.
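For reference, a V1-style connection typically looked like the line below (the UPN is a placeholder):
#Legacy EXO V1 module - scheduled for retirement
Connect-EXOPSSession -UserPrincipalName admin@contoso.onmicrosoft.com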
Connect to Exchange Online PowerShell with MFA:
To connect to Exchange Online PowerShell with MFA, you need to install the Exchange Online PowerShell V2 module. With this module, you can create a PowerShell session with both MFA and non-MFA accounts using the Connect-ExchangeOnline cmdlet.
Additionally, the Exchange Online PowerShell V2 module uses modern authentication and lets you create unattended scripts to automate Exchange Online tasks.
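For example, an unattended script would typically use certificate-based authentication with an Azure AD app registration. A minimal sketch, assuming you have already registered an app and uploaded a certificate (the app ID, thumbprint, and organization below are placeholders):
#Connect without an interactive prompt using certificate-based authentication (CBA)
Connect-ExchangeOnline -AppId "<AppId>" -CertificateThumbprint "<Thumbprint>" -Organization "contoso.onmicrosoft.com"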
To download and install the EXO V2 module and connect to Exchange Online PowerShell, you can use the script below.
#Check for EXO V2 module installation
$Module = Get-Module ExchangeOnlineManagement -ListAvailable
if($Module.count -eq 0)
{
    Write-Host "Exchange Online PowerShell V2 module is not available" -ForegroundColor Yellow
    $Confirm = Read-Host "Are you sure you want to install the module? [Y] Yes [N] No"
    if($Confirm -match "[yY]")
    {
        #Install and load the module from the PowerShell Gallery
        Write-Host "Installing Exchange Online PowerShell V2 module"
        Install-Module ExchangeOnlineManagement -Repository PSGallery -AllowClobber -Force
        Import-Module ExchangeOnlineManagement
    }
    else
    {
        Write-Host "EXO V2 module is required to connect to Exchange Online. Please install the module using the Install-Module ExchangeOnlineManagement cmdlet."
        Exit
    }
}
#Create the Exchange Online session (prompts for credentials/MFA as needed)
Write-Host "Connecting to Exchange Online..."
Connect-ExchangeOnline
If you have already installed the EXO V2 module, you can use the “Connect-ExchangeOnline” cmdlet directly to create a PowerShell session with either MFA or non-MFA accounts. For MFA-enabled accounts, it will prompt for additional verification; once verified, you can access Exchange Online data and Microsoft 365 audit logs.
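For instance, you can connect with a specific admin account and disconnect cleanly when your work is done (the UPN below is a placeholder):
#Prompts for the password and MFA verification when required
Connect-ExchangeOnline -UserPrincipalName admin@contoso.onmicrosoft.com -ShowBanner:$false
#...run your Exchange Online cmdlets here...
Disconnect-ExchangeOnline -Confirm:$false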
Advantages of Using EXO V2 Module:
It uses modern authentication to connect to Exchange Online PowerShell.
A single cmdlet “Connect-ExchangeOnline” is used to connect to EXO with both MFA and non-MFA accounts.
It doesn’t require WinRM basic authentication to be enabled.
Helps automate Exchange Online PowerShell sign-in for unattended scripts (for example, via certificate-based authentication).
Contains REST API based cmdlets.
Provides exclusive cmdlets that are optimized for bulk data retrieval (see the sketch below).
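As a rough illustration of the last point, the REST-based Get-EXOMailbox cmdlet is built for large result sets and lets you request only the properties you need, whereas the older Get-Mailbox returns full mailbox objects; the property names below are just examples.
#Older remote PowerShell cmdlet - returns full objects, slower in large tenants
Get-Mailbox -ResultSize Unlimited | Select-Object DisplayName, UserPrincipalName
#EXO V2 REST-based cmdlet - optimized for bulk retrieval, fetches only the requested properties
Get-EXOMailbox -ResultSize Unlimited -Properties DisplayName | Select-Object DisplayName, UserPrincipalName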
If you are using the Exchange Online Remote PowerShell module, it’s time to switch to the EXO V2 module. Also, you can update your existing scripts to adopt the EXO V2 module. Happy Scripting!
OK y'all, so I gotta ask, as wildfires are scary as shit and they claim a lot of my state as well as others each year… that stated, any of you that do off-grid setups, have you thought of anti-fire precautions?
Below I have outlined a simple ~10 dollar solution (does 100s of hotspots for 15 bucks lol)
OK, ingredients per hotspot: -Bromochlorodifluoromethane powder> about 5 to 8 grams per hotspot (1 kilo is like 5 bucks max on Alibaba)
-1 water balloon> or any balloon with extremely thin rubber walls, it just needs to pop very, very easily
-1 small firecracker firework> the kind you get in a pack of 40 that says ‘caution: do not unpack, light as one’ or whatever it says, but we all do it anyway cuz we are not as smart of creatures as we think we are 😆
-firework wick> you will use like 2 or 3 feet, give or take, per hotspot (a 10m roll is like 3 or 4 bucks, I think, almost anywhere that sells it)
-1 tiny rubber hair band per hotspot> the lil tiny ones that are about the size of a pinky finger around (a pack's 50 cents at Family Dollar, or use O-rings if you are a tool guy and have lil tiny O-rings)
-1 extra balloon> you need it for when you fill the "homemade fire snuffer bomb" (patent pending) jkjk 😆
-a few straws> for filling as well (and a small piece of cardboard that can crease but stay rigid will help, but it can be done without)
-peel-apart bread ties> or a roll of some other very thin wire
OK, so see, we got about $15 USD of materials here; what you're gonna do is…
1) Put on at least a dust/particulate 3M mask and some rubber gloves!
PSA of the day! –> (your body's worth it, and that powder is not really supposed to be huffed, whether intentionally or accidentally… cancer don't care, cancer happens either way. If you're an adult, don't be lazy: get in the car or on the bicycle or longboard like us millennials do so frequently, it seems… just go buy a pair of 99-cent kitchen dish gloves at Family Dollar, and if you're a kid, then steal mom's from under the sink for 20 minutes; it won't destroy them, just rinse and return)
Now….
2) You are going to take said Bromochlorodifluoromethane powder and put around 7-10 grams in a balloon (this is where the cardboard folded into a right-angle crease comes in handy, but if you skipped it, take a drinking straw and use it to scoop a few grams at a time into the balloon)
Then…
3) Take the pack of firecrackers apart and strip them into single firecracker units.
4) Take 3-ish feet of wick off the roll and attach it to the wick of one firecracker (overlap the two wicks by about an inch, then use a piece of wire wrapped around them in a spiral fashion from the firecracker wick to the 3-ish foot piece; secure it as best you can so it will make good contact if triggered, to save the day for some random forest or neighborhood or wherever it may be!)
Now set that aside and return to the powder-filled balloon…
5) Get a hair band ready and grab the extra balloon ***make sure it's a powder-free, clean balloon*** (I mentioned in the list at the start that it was there to help with filling!)
6) Blow into the clean balloon enough so it's the size of an orange or an apple or a baseball or whatever you want to picture at that size… (***Big tip: after I blow into it, I put it on a table edge and duct-tape the thing to the table so it stays filled but can be removed and left to deflate when ready 😉)
OK on to…
7) Place the firecracker in the powder balloon; shove it into the middle so that it is completely surrounded by the powder, but leave the long wick hanging out of the balloon!
8) ****Put a clean straw**** in the powder balloon (you don't want a straw that's contaminated on both ends if you used one for filling!). Then get the hair band looped multiple times on itself (the objective is to have the band wrapped enough so that once it's on the balloon neck and you pull the straw out, you have a tightly sealed water balloon; this is so it holds a slight amount of air)
9) Take the clean, air-filled balloon from step 6) and put it on the opposing end of the straw, as snug as you can without too much air loss… squeeze air into the powder balloon from the air balloon (if it is hard to get enough in due to loss when putting it on the straw, then fill the air balloon more and go for round 2)
Finally….
10) When the powder balloon is slightly inflated (think tennis-ball-ish sized) and it feels like a balloon when you squish it lightly (not like a hacky sack, more like one of those bags of air that products get shipped with, the ones that come in a string of like seven bags… I digress, you get the picture), pull the straw when you are happy, and you now have a homemade class D fire extinguisher….
Place that bad boy inside the housing of the miner and battery setup and drape the wick so it is lying all around the inside of the enclosure you use.
Battery vents > wick gets lit by the battery igniting > firecracker goes #pop > powder from the previously inflated balloon fills the box > battery fire fixed in 5 seconds or less from the start of the fire
That said, if you're using a big housing enclosure, plan accordingly based on the size of your enclosure… if it's a big space, maybe put 2 fire snuffer bombs in each. Be smart; the nature you're putting it in would prefer not to have a fire-filled future in the event this precaution is ever needed!
There ya go!
Thanks for the read if you made it to the end. That's my solution; lmk what y'all did if you had a good solution as well! I'd love to hear it 😀
RISCOSbits have announced their new RISC OS rewards scheme. The idea is to reward loyal RISC OS users by offering them discounts on new RISC OS hardware.
To start with, they are offering anyone who can show they have purchased an Ovation Pro licence a 10% discount on any PiHard system. We have previously reviewed the PiHard (which is now my main RISC OS machine at work, and what I am typing this on).
This offer is also open to any existing RISCOSbits customers, who can also claim 10% off a new system.
To claim your discount, you should contact RISCOSbits directly.
There will be additional special offers. If you are on Twitter, watch out for the hashtag #RISC_OS_Rewards.