With the pandemic still an ongoing problem, though seemingly tailing off for now, and the UK lockdown still mostly in place, another RISC OS user group has opted to explore…
So one of the cool features of Teams is network diagnostics. Trying to gather network information remotely from an end user sucks, so one trick you can do with Teams is place a call to them for a while, disconnect the call, then look in the network diagnostics, which will give you some decent information about the call. You get to see their network adapter, whether they used Ethernet or Wi-Fi, their Wi-Fi signal strength, channel, band, and RSSI, and their overall network statistics including jitter, packet loss, RTT, et cetera.
You can look at past calls as well if they happen to call from Teams/Skype a lot.
Teams Admin Center > Users > [User] > Call History. From there you can look at the statistics for any individual participant, including in meetings. The icons in the overview section are filters, and the Advanced tab gives you more information.
As a cloud project owner, you want your environment to run smoothly and efficiently. At Google Cloud, one of the ways we help you do that is through a family of tools we call Recommenders, which leverage analytics and machine learning to automatically detect issues and present you with optimizations that you can act on.
With Recommenders, our goal is to suggest quick, easy ways to optimize your cloud for price, performance, and security. Several Recommenders are already generally available, including VM Recommenders, Firewall Insights and IAM Recommender. In fact, there are many teams at Google Cloud who are working to build Recommenders that help you improve your cloud. But we want to make sure it’s effortless and simple for you to find and take action on those recommendations. That’s why we’re also releasing the beta of our new Recommendation Hub, which highlights proactive recommendations in one place for you to view and act on.
Recommendation Hub is vital to bringing all of these optimization efforts together for you to see and take action on. Not only does the Hub capture the most impactful opportunities in your projects, but it also helps guide you across Google Cloud in general. Whether it’s networking, security, compute and storage resources, cost and billing, or Anthos, the Recommendation Hub will give you the tools you need to prioritize, analyze, and act on all of these valuable insights and recommendations.
Recommendation Hub and Recommenders are also part of a bigger initiative at Google Cloud to use machine learning and analytics to help you make better decisions, drive down costs, and automate your operations. There will be more announcements on that soon, but for now, let’s explore some of the Recommenders currently available for you to use.
Optimize resources for cost and performance with VM Recommenders
There will come a day when you might need to scale your virtual machine (VM) instances up or down. For that, we’ve got two types of Recommenders available: one to help you optimize your VMs for cost and performance, and a second to help you identify and delete (or back up) your unused VMs and persistent disks (PD) to avoid paying for resources you don’t use.
All of this helps you properly balance your performance and cost based on your unique situation. One customer, VuClip, decided to experiment with the Idle VM Recommender and is now making it a key part of how they optimize their cloud environment:
“We were in the midst of a hackathon recently, and we decided to test out Google Cloud’s Idle VM Recommender. We quickly learned that we had over 200 VMs that were sitting idle, but ultimately costing us money, that we wouldn’t have otherwise known about. The real bonus was that it only took a matter of seconds for Google Cloud to shine light on these idle VMs.” – Hrushikesh Kulkarni, Associate Director of Technology, VuClip
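To see what the Idle VM Recommender has flagged in your own project, the recommendations are also exposed through gcloud. A minimal sketch, assuming the Recommender API is enabled; the project ID and zone are placeholders:

gcloud recommender recommendations list \
    --project=my-project \
    --location=us-central1-a \
    --recommender=google.compute.instance.IdleResourceRecommender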
Secure your network with Firewall Insights
Firewall rule management is a constant challenge for security and network engineers. Firewall configurations grow in complexity as more access rules are added over time, making them hard to maintain. Firewall Insights, now in beta, is a new tool that helps secure your cloud environment by detecting and providing easy remediation options for a number of key firewall issues, including:
Shadowed rules that can’t be reached during firewall rule evaluation because they overlap with higher-priority rules
Unnecessary allow rules, open ports and IP ranges
Sudden hit increases on firewall rules (and a drill down into the source of the traffic) that signal an emerging attack
Redundant firewall rules, which can be cleaned up to reduce the total firewall rule count
Deny firewall rules with hit counts from sources trying to access unauthorized IP ranges and ports
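These findings are surfaced as insights through the same Recommender API, so you can also pull them from the command line. A minimal sketch, assuming the API is enabled; the project ID is a placeholder, and depending on your gcloud version the command may still live under the beta component:

gcloud recommender insights list \
    --project=my-project \
    --location=global \
    --insight-type=google.compute.firewall.Insight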
Flowmon, a company that develops network performance monitoring and network security products, has been using Firewall Insights to gain new insights into its existing firewall rules:
“Firewall Insights has already proven to be an extremely valuable tool. With barely any effort, it gives us precise knowledge about what our firewall rules are actually doing. Through that, we’re able to optimize all of our firewall rules quickly and easily.” – Boris Parák, Cloud Product Manager, Flowmon
For more information on using Firewall Insights (which is also available in Network Intelligence Center), please reference our documentation or check out this video:
Lock down unwanted access with IAM Recommender
In addition to firewall rules, permissions play another crucial role in your overall security posture. With IAM Recommender, you can remove unwanted access to Google Cloud resources with smart access control recommendations. IAM Recommender uses machine learning to automatically detect overly permissive access and help security teams figure out what permissions their project members really need. Not only does this help establish least-privilege best practices and reduce your organization’s security risks, but it also helps prevent accidental changes to your data and cloud infrastructure. Here’s a video to show you how it works:
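The same recommendations can also be listed from the command line; here is a minimal sketch, with the project ID as a placeholder (IAM recommendations are kept at the global location):

gcloud recommender recommendations list \
    --project=my-project \
    --location=global \
    --recommender=google.iam.policy.Recommender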
Many more Recommenders coming soon
We’re busy building more Recommenders, which will appear in Recommendation Hub. Here are a few that you can expect to see within the next few months:
Cost and performance
Compute Engine cross-family recommendations: Select the optimal VM family for your workload (e.g., memory-optimized).
Committed Use Discount (CUD) maximizer: Keep your cloud costs on budget by making sure you utilize your discounts to the fullest.
Security
GKE RBAC: Assess and remove over-granted permissions.
Security keys: Protect high-risk users against phishing by implementing phone-as-a-security-key.
Reliability and availability
Compute Engine predictive auto-scaling: Reduce latency and costs by scaling compute proactively.
With Recommenders, we’re trying to take the guesswork and toil out of keeping your cloud running optimally. To learn more about how Recommenders can help you, please check out our upcoming session “Cloud is Complex. Managing it Shouldn’t Be” during our Next OnAir digital event.
Questions linger over involvement of biz linked to Dominic Cummings and Vote Leave campaign
The UK government has published the contracts it holds with private tech firms and the NHS for the creation of a COVID-19 data store, just days after campaigners fired legal shots over a lack of transparency.…
Sega marked its 60th anniversary this week with a tiny version of the Game Gear. But that's not the only thing on the company's mind at the minute. It's working on a system that would turn Japanese arcades into small data centers. According to Weekly…
Cameyo, the Virtual Application Delivery platform provider, today announced the results of a new survey and a report from IT analyst firm Enterprise… Read more at VMblog.com.
If you’re not registered somewhere in the union, you can’t use the TLD
Any Brit based in the UK, and not in the EU, will have their .eu domain taken away from them on January 1, 2021, according to the latest iteration of rules published by the TLD’s operator EURid.…
Now you can program like a native with your £899 Surface Pro X – keyboard not included
Good news for those who have splashed the cash on Microsoft’s flagship Surface Pro X – the software behemoth has emitted an ARM64 build of Visual Studio Code.…
Retro computers are great, but what really makes a computer special is how many other computers it can talk to. It’s all about the network! Often, getting these vintage rigs online requires a significant investment in dusty old network cards from eBay and hunting down long-corrupted driver discs to lace everything together. A more modern alternative is to use something like PiModem to do the job instead.
PiModem consists of using a Raspberry Pi Zero W to emulate a serial modem, providing older systems with a link to the outside world. This involves setting up the Pi to use its hardware serial port to communicate with the computer in question. A level shifter is usually required, as well as a small hack to enable hardware flow control where necessary. It’s then a simple matter of using tcpser and pppd so you can talk to telnet BBSs and the wider Internet at large.
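As a rough sketch of the software side, something like the following starts tcpser against the Pi's hardware UART; the device path, baud rate, and listen port are assumptions to adapt to your own setup:

# present a Hayes-style modem on the Pi's UART at 9600 baud,
# log at a moderate level, and accept incoming "calls" on TCP port 6400
tcpser -d /dev/ttyAMA0 -s 9600 -l 4 -p 6400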
It’s a tidy hack that makes getting an old machine online much cheaper and easier than using hardware of the era. We’ve seen similar work before, too!
At Build 2020, Microsoft announced Azure Static Web Apps, a new way to host static web apps on Azure. In the past, static web apps, which are just a combination of HTML, JavaScript and CSS, could be hosted in a Storage Account or a regular Azure Web App.
When you compare Azure Static Web Apps with the Storage Account approach, you will notice there are many more features. Some of those features are listed below (also check the docs):
GitHub integration: GitHub actions are configured for you to easily deploy your app from your GitHub repository to Azure Static Web Apps
Integrated API support: APIs are provided by Azure Functions with an HTTP Trigger
Authentication support for Azure Active Directory, GitHub and other providers
Authorization role definitions via the portal and a roles.json file in your repository
Staging versions based on a pull request
It all works together as shown below:
As a Netlify user, I'm familiar with this type of functionality. Besides static site hosting, they also provide serverless functions, identity, etc.
Let’s check out an example to see how it works on Azure…
GitHub repository
The GitHub repo I used is over at https://github.com/gbaeke/az-static-web-app. You will already see the .github/workflows folder that contains the .yml file that defines the GitHub Actions. That folder will be created for you when you create the Azure Static Web App.
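For reference, the generated workflow looks roughly like the sketch below; the exact action version and input names have changed across releases of Azure/static-web-apps-deploy, so treat this as an approximation and check the file generated in your own repository:

name: Azure Static Web Apps CI/CD
on:
  push:
    branches: [ master ]
  pull_request:
    types: [ opened, synchronize, reopened, closed ]
    branches: [ master ]
jobs:
  build_and_deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # build and deploy both the app (repo root) and the api folder
      - uses: Azure/static-web-apps-deploy@v1
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
          repo_token: ${{ secrets.GITHUB_TOKEN }}
          action: upload
          app_location: /
          api_location: api
          output_location: ''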
The static web app in this case is a simple index.html that contains HTML, JavaScript and some styling. Vue.js is used as well. When you are authenticated, the application reads a list of devices from Cosmos DB. When you select a device, the application connects to a socket.io server, waiting for messages from the chosen device. The backend for the messages comes from Redis. Note that the socket.io server and Redis configuration are not described in this post. Here’s a screenshot from the app with a message from device01. User gbaeke is authenticated via GitHub. When authenticated, the device list is populated. When you log out, the device list is empty. There’s no error checking here, so when the device list cannot be populated, you will see a 404 error in the console.
Note: Azure Static Web Apps provides a valid certificate for your app, whether it uses a custom domain or not. In the above screenshot, Not secure is shown because the application connects to the socket.io server over HTTP and Mixed Content is allowed; that is easy to fix with SSL for the socket.io server, but I chose not to configure that.
The API
Although API is probably too big a word for it, the devices drop down list obtains its data from Cosmos DB, via an Azure Function. It was added from Visual Studio Code as follows:
add the api folder to your project
add a new Function Project and choose the api folder: simply use F1 in Visual Studio Code and choose Azure Functions: Create New Project… You will be asked for the folder. Choose api.
modify the code of the Function App to request data from Cosmos DB
In my case, I have a Cosmos DB database geba with a devices collection. Device documents contain an id and room field which simply get selected with the query: SELECT c.id, c.room FROM c.
Note: with route set to device, the API will need to be called with /api/device instead of /api/GetDevice.
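To make this concrete, here is a sketch of what the function's function.json could look like: an HTTP trigger with route set to device, a Cosmos DB input binding named devices (which is what context.bindings.devices refers to in the code below), and the connection string setting pointing at CosmosDBConnection. The authLevel is my assumption:

{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [ "get" ],
      "route": "device",
      "authLevel": "anonymous"
    },
    {
      "type": "cosmosDB",
      "direction": "in",
      "name": "devices",
      "databaseName": "geba",
      "collectionName": "devices",
      "sqlQuery": "SELECT c.id, c.room FROM c",
      "connectionStringSetting": "CosmosDBConnection"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}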
The actual function in index.js is kept as simple as possible:
module.exports = async function (context, req) {
    context.log('Send devices from Cosmos');

    // the Cosmos DB input binding named "devices" already holds the query results,
    // so we simply return them as the HTTP response body
    context.res = {
        // status: 200, /* Defaults to 200 */
        body: context.bindings.devices
    };
};
Yes, the above code is all that is required to retrieve the JSON output of the Cosmos DB query and set it as the HTTP response.
Note that local.settings.json contains the Cosmos DB connection string in CosmosDBConnection:
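The file looks something like this, with the actual connection string replaced by a placeholder:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "CosmosDBConnection": "AccountEndpoint=https://<account>.documents.azure.com:443/;AccountKey=<key>;"
  }
}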
You will have to make sure the Cosmos DB connection string is made known to the Azure Static Web App later. During local testing, local.settings.json is used to retrieve it. local.settings.json is automatically added to .gitignore so it is not pushed to the remote repository.
Local Testing
We can test the app locally with the Live Server extension. But first, modify .vscode/settings.json and add a proxy for your api:
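A sketch of the relevant setting, using the Live Server extension's proxy support; the IP address and port are placeholders for your local Functions host:

{
  "liveServer.settings.proxy": {
    "enable": true,
    "baseUri": "/api",
    "proxyUri": "http://172.17.176.1:7071/api"
  }
}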
With the above setting, a call to /api via Live Server will be proxied to Azure Functions on your local machine. Note that the IP address refers to the IP address of WSL 2 on my Windows 10 machine. Find it by running ifconfig in WSL 2.
Before we can test the application locally, start your function app by pressing F5. You should see:
Now go to index.html, right click and select Open with Live Server. The populated list of devices shows that the query to Cosmos DB works and that the API is working locally:
Notes on using WSL 2:
for some reason, http://localhost:5500/index.html (Live Server running in WSL 2) did not work from the Windows session although it should; in the screenshot above, you see I replaced localhost with the IP address of WSL 2
time skew can be an issue with WSL 2; if you get an error during the Cosmos DB query of authorization token is not valid at the current time, perform a time sync with ntpdate time.windows.com from your WSL 2 session
Deploy the Static Web App
Create a new Static Web App in the portal. The first screen will be similar to the one below:
You will need to authenticate to GitHub and choose your repository and branch as shown above. Click Next. Fill in the Build step as follows:
Our app will indeed run off the root. We are not using a framework that outputs a build to a folder like dist so you can leave the artifact location blank. We are just serving index.html off the root.
Complete the steps for the website to be created. Your GitHub Action will be created and run for the first time. You can easily check the GitHub Action runs from the Overview screen:
To make sure the connection to Cosmos DB works, add an Application Setting via Configuration:
The Function App that previously obtained the Cosmos DB connection string from local.settings.json can now retrieve the value from Application Settings. Note that you can also change these settings via Azure CLI.
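As a hedged example with the current CLI (the Static Web Apps commands have moved around between preview releases, so verify with az staticwebapp --help; the app name and connection string are placeholders):

az staticwebapp appsettings set \
    --name az-static-web-app \
    --setting-names "CosmosDBConnection=<connection string>"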
Conclusion
In this post, we created a simple web app in combination with a function app that serves as the API. You can easily create and test the web app and function app locally with the help of Live Server and a Live Server proxy. Setting up the web app is easy via the Azure Portal, which also creates a GitHub Action that takes care of deployment for you. In a follow-up post, we will take a look at enabling authentication via the GitHub identity provider and only allowing authorized users to retrieve the list of devices.
Earlier, many users had raised objections over the company’s unilateral decision to hide the start of website URLs from Chrome’s omnibox. Fortunately, this concern has […]
Choosing the perfect Linux distribution that satisfies your personal needs and likings can be an impossible task, and oftentimes requires a hint of Stockholm syndrome as compromise. In extreme cases, you might end up just rolling your own distro. But while frustration is always a great incentive for change, for [Josh Moore] it was rather curiosity and playful interest that led him to create snakeware, a Linux distribution where the entire user space not only runs on Python, but is Python.
Imagine you would boot your Linux system, and instead of the shell of your choice, you would be greeted by an interactive Python interpreter, and everything you do on the system will be within the realms of that interpreter — that’s the gist of snakeware. Now, this might sound rather limiting at first, but keep in mind we’re talking about Python here, a language known for its versatility, with an abundance of packages that get things done quickly and easily, which is exactly what [Josh] is aiming for. To get an idea of that, snakeware also includes snakewm, a graphical user interface written with pygame that bundles a couple of simple applications as demonstration, including a terminal to execute Python one-liners.
Note that this is merely a proof of concept at this stage, but [Josh] is inviting everyone to contribute and extend his creation. If you want to give it a go without building the entire system, the GitHub repository has a prebuilt image to run in QEMU, and the window manager will run as a regular Python application on your normal system, too. To get just a quick glimpse of it, check the demo video after the break.
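If you want to try the QEMU route, the invocation is along these lines; the image file name and memory size are placeholders, so check the repository's instructions for the exact command:

qemu-system-x86_64 -m 2048 -drive format=raw,file=snakeware.img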
Sure, die-hard Linux enthusiasts will hardly accept a distribution without their favorite shell and preferable language, but hey, at least it gets by without systemd. And while snakeware probably won’t compete with more established distributions in the near future, it’s certainly an interesting concept that embraces thinking outside the box and trying something different. It would definitely fit well on a business card.