Uber can find a ride to carry your skis in 23 regions

The content below is taken from the original ( Uber can find a ride to carry your skis in 23 regions), to continue reading please visit the site. Remember to respect the Author & Copyright.

You can't often rely on ridesharing services for skiing trips. Few cars will have a rack, and even drivers of larger vehicles might balk if you try to stow your gear in the back. With Uber, at least, this won't be a problem going forward. It's deb…

Introducing E2, new cost-optimized general purpose VMs for Google Compute Engine

The content below is taken from the original ( Introducing E2, new cost-optimized general purpose VMs for Google Compute Engine), to continue reading please visit the site. Remember to respect the Author & Copyright.

General-purpose virtual machines are the workhorses of cloud applications. Today, we’re excited to announce our E2 family of VMs for Google Compute Engine featuring dynamic resource management to deliver reliable and sustained performance, flexible configurations, and the best total cost of ownership of any of our VMs.  

Now in beta, E2 VMs offer similar performance to comparable N1 configurations, providing:

  • Lower TCO: 31% savings compared to N1, offering the lowest total cost of ownership of any VM in Google Cloud.

  • Consistent performance: Your VMs get reliable and sustained performance at a consistent low price point. Unlike comparable options from other cloud providers, E2 VMs can sustain high CPU load without artificial throttling or complicated pricing. 

  • Flexibility: You can tailor your E2 instance with up to 16 vCPUs and 128 GB of memory. At the same time, you can provision and pay for only the resources that you need, with 15 new predefined configurations or the ability to use custom machine types.

Since E2 VMs are based on industry-standard x86 chips from Intel and AMD, you don’t need to change your code or recompile to take advantage of this price-performance. 

E2 VMs are a great fit for a broad range of workloads including web servers, business-critical applications, small-to-medium sized databases and development environments. If you have workloads that run well on N1, but don’t require large instance sizes, GPUs or local SSD, consider moving them to E2. For all but the most demanding workloads, we expect E2 to deliver similar performance to N1, at a significantly lower cost. 
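By way of illustration, here is a minimal sketch of provisioning an E2 instance with either a predefined shape or a custom vCPU/memory combination, assuming the google-cloud-compute Python client library; the project, zone, instance name, and boot image are placeholders, and the e2-custom-&lt;vCPUs&gt;-&lt;memoryMB&gt; string follows the custom machine type naming convention.

from google.cloud import compute_v1


def create_e2_instance(project: str, zone: str, name: str, custom: bool = False) -> None:
    # Predefined shape (e2-standard-4: 4 vCPUs / 16 GB) or a custom E2 shape (4 vCPUs / 8 GB).
    machine_type = (
        f"zones/{zone}/machineTypes/e2-custom-4-8192"
        if custom
        else f"zones/{zone}/machineTypes/e2-standard-4"
    )

    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/debian-cloud/global/images/family/debian-10",
            disk_size_gb=10,
        ),
    )
    network = compute_v1.NetworkInterface(network="global/networks/default")

    instance = compute_v1.Instance(
        name=name,
        machine_type=machine_type,
        disks=[boot_disk],
        network_interfaces=[network],
    )

    client = compute_v1.InstancesClient()
    operation = client.insert(project=project, zone=zone, instance_resource=instance)
    operation.result()  # Wait for the insert operation to finish (newer client versions).


# Example: create_e2_instance("my-project", "us-central1-a", "e2-demo", custom=True)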

Dynamic resource management

Using resource balancing technologies developed for Google’s own latency-critical services, E2 VMs make better use of hardware resources to drive costs down and pass the savings on to you. E2 VMs place an emphasis on performance and protect your workloads from the type of issues associated with resource-sharing thanks to our custom-built CPU scheduler and performance-aware live migration.

You can learn more about how dynamic resource management works by reading the technical blog on E2 VMs.

E2 machine types

At launch, we’re offering E2 machine types as custom VM shapes or predefined configurations:

[Image: E2 machine types]

We’re also introducing new shared-core instances, similar to our popular f1-micro and g1-small machine types. These are a great fit for smaller workloads like micro-services or development environments that don’t require the full vCPU.

[Image: E2 shared-core machine type configurations]

E2 VMs can be launched on-demand or as preemptible VMs. They are also eligible for committed use discounts, bringing additional savings of up to 55% for 3-year commitments. E2 VMs are powered by Intel Xeon and AMD EPYC processors, which are selected automatically based on availability.

Get started

E2 VMs are rolling out this week to eight regions: Iowa, South Carolina, Oregon, Northern Virginia, Belgium, Netherlands, Taiwan and Singapore; with more regions in the works. To learn more about E2 VMs or other GCE VM options, check out our machine types page and our pricing page.

NEW: reqman, it’s like postman, but without GUI … tests your rest apis with simple yaml files

The content below is taken from the original ( in /r/ Python), to continue reading please visit the site. Remember to respect the Author & Copyright.

https://youtu.be/ToK-5VwxhP4 (one minute video)

Reqman is a command-line tool (available on all platforms) that lets you test your REST APIs by describing your requests/tests in simple YAML files, using any text editor.

Writing tests is really easy, and suitable for non-tech people.

Available on PyPI and on GitHub.

Read the Documentation

DEMO: an online yaml/reqman editor/tester

BONUS: An online tool to convert your swagger/openapi3 definitions, or a postman collection, to reqman's yaml tests files.

Reqman helps me a lot in my everyday work to automate my non-regression tests (TNR) on 100 APIs (with more than 2000 tests), and I'm pretty sure it could help someone else.

Download Reddit images in bulk with Reddit Image Grabber

The content below is taken from the original ( Download Reddit images in bulk with Reddit Image Grabber), to continue reading please visit the site. Remember to respect the Author & Copyright.

Now and again, we all may want to download images from Reddit, but it can be time-consuming when it's time to go through a ton of pages on the website. Luckily for you, there is an app out there that […]

This post Download Reddit images in bulk with Reddit Image Grabber is from TheWindowsClub.com.

The best portable (and affordable) USB MIDI controllers

The content below is taken from the original ( The best portable (and affordable) USB MIDI controllers), to continue reading please visit the site. Remember to respect the Author & Copyright.

Whether you’re a musician always on the go or just tight on space, there are plenty of reasons to pick up a portable MIDI controller.

I’ve been on the hunt for the perfect portable and affordable controller. (You don’t want to lose or break somethin…

Einride to launch commercial pilot of driverless electric pods with Coca-Cola European Partners

The content below is taken from the original ( Einride to launch commercial pilot of driverless electric pods with Coca-Cola European Partners), to continue reading please visit the site. Remember to respect the Author & Copyright.

Autonomous robotic road-riding cargo pod startup Einride has signed a new partner for a commercial pilot on Sweden’s roads, which should be a great test of the company’s electric driverless transportation pods. Einride will be providing service for Coca-Cola European Partners, which is the official authorized bottler, distributor, sales and marketing company for Coca-Cola branded products in Sweden.

The partnership will see Einride commercially operating its transportation system between Coca-Cola European Partners’ warehouse in Jordbro outside Stockholm, and retailer Axfood’s own distribution hub, transporting Coca-Cola brand products to the retailer ahead of sending them off to local retail locations in Sweden.

Coca-Cola European Partners is looking to this partnership as part of its goal to continue to reduce emissions, since Einride’s system could potentially cut CO2 output by as much as 90% compared to current in-use solutions. This pilot is set to take place over the next few years, according to the two companies, and Einride says it hopes that it’ll be able to be on the road as early as some time next year, pending approval from the authorities since it’s a trial that will take place on public roads.

Einride announced $25 million in new funding in October, and has been running trials of the T-Pod electric transport vehicle it created on public roads since May.

Ohhh, you’re so rugged! Microsoft swoons at new Lenovo box pushing Azure to the edge

The content below is taken from the original ( Ohhh, you’re so rugged! Microsoft swoons at new Lenovo box pushing Azure to the edge), to continue reading please visit the site. Remember to respect the Author & Copyright.

Fix it to a wall, stick it on a shelf

While the public cloud might have once been all the rage, the cold light of day has brought the realisation that bandwidth, compliance and convenience means that something a little more local is needed.…

Nokia 2.3: HMD flings out €109 budget ‘droid with a 2-day battery

The content below is taken from the original ( Nokia 2.3: HMD flings out €109 budget ‘droid with a 2-day battery), to continue reading please visit the site. Remember to respect the Author & Copyright.

But get ready to flip your cables cos it’s microUSB

HMD Global, the licensee of the once-ubiquitous Nokia mobile phone brand, today unveiled its latest budget blower, the Nokia 2.3.…

AI-powered Lego sorter knows the shape of every brick

The content below is taken from the original ( AI-powered Lego sorter knows the shape of every brick), to continue reading please visit the site. Remember to respect the Author & Copyright.

For some people, rummaging through a bunch of Lego bricks is part of the fun. But if you've got an enormous collection or take on complicated builds, you probably have a system for sorting your pieces. Your solution probably doesn't involve AI, thoug…

Automate OS Image Build Pipelines with EC2 Image Builder

The content below is taken from the original ( Automate OS Image Build Pipelines with EC2 Image Builder), to continue reading please visit the site. Remember to respect the Author & Copyright.

Earlier in my career, I can recall being assigned the task of creating and maintaining operating system (OS) images for use by my development team. This was a time-consuming and sometimes error-prone process, requiring me to manually re-create and re-snapshot images frequently. As I’m sure you can imagine, it also involved a significant amount of manual testing!

Today, customers still need to keep their images up to date and they do so either by manually updating and snapshotting VMs, or they have teams that build automation scripts to maintain the images, both of which can still be time consuming, resource intensive, and error-prone. I’m excited to announce the availability of EC2 Image Builder, a service that makes it easier and faster to build and maintain secure OS images for Windows Server and Amazon Linux 2, using automated build pipelines. The images created by EC2 Image Builder can be used with Amazon Elastic Compute Cloud (EC2) and on-premises, and can be secured and hardened to help comply with applicable InfoSec regulations. AWS provides security hardening policies that you can use as a starting point to meet the “Security Technical Implementation Guide (STIG)” standard needed to operate in regulated industries.

The pipelines that you can configure for EC2 Image Builder include the image recipe, infrastructure configuration, distribution, and test settings, to produce the resulting images. This includes the ability to automatically provision images as new software updates, including security patches, become available. As new images are created by the pipelines, you can additionally configure automated tests to be run to validate the image, before then distributing it to AWS regions that you specify. EC2 Image Builder can be used with EC2 VM Import/Export to build images in multiple formats for on-premises use, including VMDK, VHDX, and OVF. When testing you can use a combination of AWS-provided tests and custom tests that you have authored yourself.

Let’s take a look at how to get started using EC2 Image Builder.

Creating an OS Image Build Pipeline
From the console homepage I can quickly get started by clicking Create image pipeline. Here, I’m going to construct a pipeline that will build a custom Amazon Linux 2 image. The first step is to define the recipe which involves selecting the source image to start from, the build components to apply to the image being created, and the tests to be run.

Starting with the source image, I’m going to select a managed image provided by EC2 Image Builder. Note that I can also choose other images that either I have created, or that have been shared with me, or specify a custom AMI ID.

Next I select the build components to include in the recipe – in other words, the software I want to be installed onto the new image. From within the wizard I have the option to create a new build component by clicking Create build component. Build components have a name (and optional description), a target operating system, an optional AWS Key Management Service (KMS) key to encrypt the component, and a YAML document that specifies the set of customization steps for the component. Build components can also be versioned, so I have a lot of flexibility in customizing the software to apply to my image. I can create, and select, multiple build components and don’t have to do all my customization from one component.

For this post however I’ve clicked Browse build components and selected some Amazon-provided components for Amazon Corretto, Python 3 and PowerShell Core.
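As an aside, a custom build component like the ones described above can presumably also be registered from code. The following is a hedged sketch using boto3's imagebuilder client; the component name, the YAML document contents, and the response key are illustrative assumptions rather than values from this walkthrough.

import uuid

import boto3

# A small component document following the documented YAML schema: one build phase
# that installs a couple of packages with ExecuteBash.
component_document = """\
name: InstallTools
description: Installs a few build-time packages
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: InstallPackages
        action: ExecuteBash
        inputs:
          commands:
            - sudo yum install -y jq git
"""

imagebuilder = boto3.client("imagebuilder")
response = imagebuilder.create_component(
    name="install-tools",
    semanticVersion="1.0.0",
    platform="Linux",
    data=component_document,
    clientToken=str(uuid.uuid4()),  # Idempotency token for the request
)
print(response["componentBuildVersionArn"])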

The final step for the recipe is to select tests to be applied to the image to validate it. Just as with build components, I can create and specify tests within the wizard, and I have the same capabilities for defining a test as I do a build component. Again though, I’m going to keep this simple and click Browse tests to select an Amazon-provided test that the image will reboot successfully (note that I can also select multiple tests).

That completes my recipe, so I click Next and start to define my pipeline. First, I give the pipeline a name and also select an AWS Identity and Access Management (IAM) role to associate with the EC2 instance to build the new image. EC2 Image Builder will use this role to create Amazon Elastic Compute Cloud (EC2) instances in my account to perform the customization and testing steps. Pipeline builds can be performed manually, or I can elect to run them on a schedule. I have the flexibility to specify my schedule using simple Day/Week/Month period and time-of-day selectors, or I can use a CRON expression.

I selected a managed IAM policy (EC2InstanceProfileForImageBuilder) with just enough permissions to use common AWS-provided build components and run tests. When you start to use Image Builder yourself, you will need to set up a role that has enough permissions to perform your customizations, run your tests, and write troubleshooting logs to S3. As a starting point for setting up the proper permissions, I recommend that you attach the AmazonSSMManagedInstanceCore IAM policy to the IAM role attached to the instance.

Finally for the pipeline I can optionally specify some settings for the infrastructure that will be launched on my behalf, for example the size of instance type used when customizing my image, and an Amazon Simple Notification Service (SNS) topic that notifications can be forwarded to. I can also take control of Amazon Virtual Private Cloud related settings should I wish.

If the operating system of the image I am building is associated with a license, I can specify that next (or create a new license configuration on-the-fly), along with a name for my new image and also the AWS regions into which the new image will be shared, either publicly or privately.

Clicking Review, I can review all of my settings and finally click Create Pipeline to complete the process.

Even though when I configured my pipeline I asked for it to run daily at 06:00 hours UTC, I can still run it whenever I wish. Selecting the pipeline, I click Actions and then Run pipeline.
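The same manual run can also be triggered programmatically; here is a hedged boto3 sketch (the pipeline ARN below is a placeholder, and the response key is my assumption about the StartImagePipelineExecution output).

import boto3

imagebuilder = boto3.client("imagebuilder")

# Kick off an on-demand build of an existing pipeline, outside its regular schedule.
response = imagebuilder.start_image_pipeline_execution(
    imagePipelineArn="arn:aws:imagebuilder:us-east-1:123456789012:image-pipeline/my-linux-pipeline"
)
print(response["imageBuildVersionArn"])  # ARN of the image build that was started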

Once the build has completed, the AMI will be ready to launch from the Amazon EC2 console!

Thinking back to my earlier career years and the tasks assigned to me, this would have saved me so much time and effort! EC2 Image Builder is provided at no cost to customers and is available in all commercial AWS Regions. You are charged only for the underlying AWS resources that are used to create, store, and share the images.

— Steve

Amazon Braket – Get Started with Quantum Computing

The content below is taken from the original ( Amazon Braket – Get Started with Quantum Computing), to continue reading please visit the site. Remember to respect the Author & Copyright.

Nearly a decade ago I wrote about the Quantum Compute Cloud on April Fool’s Day. The future has arrived and you now have the opportunity to write quantum algorithms and to run them on actual quantum computers. Here’s what we are announcing today:

Amazon Braket – A fully managed service that allows scientists, researchers, and developers to begin experimenting with computers from multiple quantum hardware providers in a single place. Bra-ket notation is commonly used to denote quantum mechanical states, and inspired the name of the service.

AWS Center for Quantum Computing – A research center adjacent to the California Institute of Technology (Caltech) that will bring together the world’s leading quantum computing researchers and engineers in order to accelerate development of quantum computing hardware and software.

Amazon Quantum Solutions Lab – A new program to connect AWS customers with quantum computing experts from Amazon and a very select set of consulting partners.

What’s Quantum Computing?
Ordinary (classical) computers use collections of bits to represent their state. Each bit is definitively 0 or 1, and the number of possible states is 2^n if you have n bits. 1 bit can be in either of 2 states, 2 bits can be in any one of 4 states, and so forth. A computer with 1 MiB of memory has 2^(8*1048576) states, excluding CPU registers and external storage. This is a large number, but it is finite, and can be calculated.

Quantum computers use a more sophisticated data representation known as a qubit or quantum bit. Each qubit can exist in state 1 or 0, but also in superpositions of 1 and 0, meaning that the qubit simultaneously occupies both states. Such states can be specified by a two-dimensional vector that contains a pair of complex numbers, making for an infinite number of states. Each of the complex numbers is a probability amplitude, basically the odds that the qubit is a 0 or a 1, respectively.
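In the bra-ket notation that gives Braket its name, such a single-qubit state can be written as

\[ \lvert\psi\rangle = \alpha\,\lvert 0\rangle + \beta\,\lvert 1\rangle, \qquad \lvert\alpha\rvert^{2} + \lvert\beta\rvert^{2} = 1, \]

where measuring the qubit yields 0 with probability |α|² and 1 with probability |β|².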

A classical computer can be in just one of those 2^n states at a given time, but a quantum computer can occupy all of them in parallel.

If you have been in IT for any length of time, you know that Moore’s Law has brought us to the point where it is possible to manufacture memory chips that store 2 tebibytes (as I write this) on a thumb drive. The physical and chemical processes that make this possible are amazing, and well worth studying. Unfortunately, these processes do not apply directly to the manufacture of devices that contain qubits; as I write this, the largest quantum computers contain about 50 qubits. These computers are built on several different technologies, but seem to have two attributes in common: they are scarce, and they must be run in carefully controlled physical environments.

How it Works
Quantum computers work by manipulating the amplitudes of the state vector. To program a quantum computer, you figure out how many qubits you need, wire them together into a quantum circuit, and run the circuit. When you build the circuit, you set it up so that the correct answer is the most probable one, and all the rest are highly improbable. Whereas classical computers use Boolean logic and are built using NOT, OR, and AND gates, quantum computers use superposition and interference, and are built using quantum logic gates with new and exotic names (X, Y, Z, CNOT, Hadamard, Toffoli, and so forth).

This is a very young field: the model was first proposed in the early 1980s, followed shortly by the realization that a quantum computer could perform simulations of quantum mechanical systems that are impossible on a classical computer. Quantum computers have applications to machine learning, linear algebra, chemistry, cryptography, simulations of physics, search, and optimization. For example, Shor’s Algorithm shows how to efficiently factor integers of any size (this video has a really good explanation).

Looking Ahead
Today’s implementations of public key cryptography are secure because factoring large integers is computationally intensive. Depending on key length, the time to factor (and therefore break) keys ranges from months to forever (more than the projected lifetime of our universe). However, when a quantum computer with enough qubits is available, factoring large integers will become instant and trivial. Defining “enough” turns out to be far beyond what I can cover (or fully understand) in this blog post, and brings into play the difference between logical and physical qubits, noise rates, error correction, and more!

You need to keep this in mind when thinking about medium-term encryption and data protection, and you need to know about post-quantum cryptography. Today, s2n (our implementation of the TLS/SSL protocols) already includes two different key exchange mechanisms that are quantum-resistant. Given that it takes about a decade for a new encryption protocol to become widely available and safe to use, it is not too soon to look ahead to a time when large-scale quantum computers are available.

Quantum computing is definitely not mainstream today, but that time is coming. It is a very powerful tool that can solve certain types of problems that are difficult or impossible to solve classically. I suspect that within 40 or 50 years, many applications will be powered in part using services that run on quantum computers. As such, it is best to think of them like a GPU or a math coprocessor. They will not be used in isolation, but will be an important part of a hybrid classical/quantum solution.

Here We Are
Our goal is to make sure you know enough about quantum computing to start looking for some appropriate use cases and conducting some tests and experiments. We want to build a solid foundation that is firmly rooted in reality, and to work with you to move into a quantum-powered future.

Ok, with that as an explanation, let’s get into it!

Amazon Braket
This new service is designed to let you get some hands-on experience with qubits and quantum circuits. You can build and test your circuits in a simulated environment and then run them on an actual quantum computer. Amazon Braket is a fully managed AWS service, with security & encryption baked in at each level.

You can access Amazon Braket through a notebook-style interface:

The Python code makes use of the Amazon Braket SDK. You can create a quantum circuit with a single line of code (this is, according to my colleagues, a “maximally entangled Bell state between qubit 0 and qubit 1”):

bell = Circuit().h(0).cnot(0, 1)

And run it with another:

print(device.run(bell, s3_folder).result().measurement_counts())
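Putting those two lines together, here is a self-contained sketch that runs the same Bell circuit on the Braket SDK's local simulator instead of managed hardware, so no S3 results folder is needed; note that measurement_counts is a property of the result object in the SDK versions I have used.

from braket.circuits import Circuit
from braket.devices import LocalSimulator

bell = Circuit().h(0).cnot(0, 1)        # Hadamard on qubit 0, then CNOT with qubit 1
device = LocalSimulator()               # Classical simulation, runs locally

result = device.run(bell, shots=1000).result()
print(result.measurement_counts)        # Expect roughly equal counts of '00' and '11'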

In addition to the classically-powered simulation environment, Amazon Braket provides access to quantum computers from D-Wave, IonQ, and Rigetti. These devices have a couple of things in common: they are leading-edge tech, they are expensive to build and run, and they generally operate in a very extreme and specialized environment (supercooled or near-vacuum) that must be kept free of electrical, thermal, and magnetic noise. Taken together, I think it is safe to say that most organizations will never own a quantum computer, and will find the cloud-based on-demand model a better fit. It may well be the case that production-scale quantum computers are the first cloud-only technology.

The actual quantum computers are works of art, and I am happy to be able to share some cool pictures. Here’s the D-Wave 2000Q:

The Rigetti 16Q Aspen-4:

And the IonQ linear ion trap:

AWS Center for Quantum Computing
As I noted earlier, quantum computing is still a very young field; there’s a lot that we don’t know, and plenty of room for scientific and technological breakthroughs.

I am pleased to announce that we are forming the AWS Center for Quantum Computing. Located adjacent to the Caltech campus, our goal is to bring the world’s top talent together in order to accelerate development. We will be researching technology that might one day enable quantum computers to be mass-produced, while also working to identify applications that are best solved on quantum computers. Both of these are long-term challenges, and I look forward to watching the progress over the next decade or two.

Amazon Quantum Solutions Lab
We understand that this is a new and intriguing technology, and we know that you want to learn, build your skills, and make some plans to put quantum computing to use.

The Amazon Quantum Solutions Lab will allow you to tap into our own expertise and that of our consulting partners. Our goal is to work with you to find those practical uses, and to help you to build up your own “bench” of qualified quantum developers.

You will also be able to take advantage of research and collaboration opportunities at the Quantum Solutions Lab.

Quantum Computing Resources
Here are some of the reference materials that you might find useful. Some of this will make your head spin, but if I can understand even a little bit of it, then so can you:

The Quantum Computing Party Hasn’t Even Started Yet – A very gentle overview of the field.

Wikipedia – Quantum Computing – A good summary, with lots of links and diagrams.

How Quantum Computers Break Encryption | Shor’s Algorithm Explained – Helpful video. Skip ahead to 8:03 if you want the TL;DR.

Quantum Computation and Quantum Information – The definitive (so they say) textbook on the subject.

Quantum Computing for the Determined – A series of 22 short explanatory videos, starting with The Qubit.

Quantum Computing for the Very Curious – A long-form article by the author of the preceding videos.

Quantum Computing Expert Explains One Concept in 5 Levels of Difficulty – Like the title says, quantum computing explained to 5 different people.

Quantum Supremacy Using a Programmable Superconducting Processor – An important result, and a major milestone that shows how a quantum computer can outperform a classical one for a particular type of problem. Be sure to read Scott Aaronson’s Supreme Quantum Supremacy FAQ as well.

This is What a 50-qubit Quantum Computer Looks Like – A stunning photo-essay of IBM’s 50-qubit computer.

Shtetl-Optimized – Professor Scott Aaronson has been researching, writing, and blogging about quantum computing for a very long time.

Jeff;

 

AWS launches Braket, its quantum computing service

The content below is taken from the original ( AWS launches Braket, its quantum computing service), to continue reading please visit the site. Remember to respect the Author & Copyright.

While Google, Microsoft, IBM and others have made a lot of noise around their quantum computing efforts in recent months, AWS remained quiet. The company, after all, never had its own quantum research division. Today, though, AWS announced the preview launch of Braket (named after the common notation for quantum states), its own quantum computing service. It’s not building its own quantum computer, though. Instead, it’s partnering with D-Wave, IonQ and Rigetti and making their systems available through its cloud. In addition, it’s also launching the AWS Center for Quantum Computing and the AWS Quantum Solutions Lab.

With Braket, developers can get started on building quantum algorithms and basic applications and then test them in simulations on AWS, as well as the quantum hardware from its partners. That’s a smart move on AWS’s part since it’s hedging its bets without incurring the cost of trying to build a quantum computer itself. And for its partners, AWS provides them with the kind of reach that would be hard to achieve otherwise. Developers and researchers, on the other hand, get access to all of these tools through a single interface, making it easier for them to figure out what works best for them.

“By collaborating with AWS, we will be able to deliver access to our systems to a much broader market and help accelerate the growth of this emerging industry,” said Chad Rigetti, founder and CEO of Rigetti Computing.

D-Wave offered a similar statement. “D-Wave’s quantum systems and our Leap cloud environment were both purpose-built to make practical application development a reality today and, in turn, fuel real-world business advantage for our customers,” said D-Wave’s chief product officer and EVP of R&D, Alan Baratz. “Amazon’s Braket will open the door to more smart developers who will build the quantum future, and the forward-thinking executives who will transform industries.”

Braket provides developers with a standard, fully-managed Jupyter notebook environment for exploring their algorithms. The company says it will offer plenty of pre-installed developer tools, sample algorithms and tutorials to help new users get started with both hybrid and classical quantum algorithms.

With its new Solutions Lab, AWS will also provide researchers with a solution for collaborating around this new technology. “Amazon Quantum Solutions Lab engagements are collaborative research programs that allow you to work with leading experts in quantum computing, machine learning, and high-performance computing. The programs help you research and identify the most promising applications of quantum computing for your business and get quantum ready,” the company explains.

With its research center for quantum computing, Amazon is starting to do some long-term research as well, though. As is so often the case with AWS, though, I think the focus here is on making the technology accessible to developers more so than on doing basic research.

“We believe that quantum computing will be a cloud-first technology and that the cloud will be the main way customers access the hardware,” said Charlie Bell, Senior Vice President, Utility Computing Services, AWS. “With our Amazon Braket service and Amazon Quantum Solutions Lab, we’re making it easier for customers to gain experience using quantum computers and to work with experts from AWS and our partners to figure out how they can benefit from the technology. And with our AWS Center for Quantum Computing and academic partnerships, we join the effort across the scientific and industrial communities to help accelerate the promise of quantum computing.”


Tom Pidcock sells old kit to plant 250 trees as part of carbon offsetting plan

The content below is taken from the original ( Tom Pidcock sells old kit to plant 250 trees as part of carbon offsetting plan), to continue reading please visit the site. Remember to respect the Author & Copyright.

[Image: Tom Pidcock (Instagram/Getty)]

The British rider is conscious of the large carbon footprint his job necessitates

RISC-V business: Tech foundation moving to Switzerland because of geopolitical concerns

The content below is taken from the original ( RISC-V business: Tech foundation moving to Switzerland because of geopolitical concerns), to continue reading please visit the site. Remember to respect the Author & Copyright.

Unanimous decision of board to up sticks from Delaware

The RISC-V Foundation, which directs the development of an open-source instruction set architecture for CPUs, will incorporate in Switzerland. Currently it is a non-stock corporation in Delaware, USA.…

Tracking Anonymous Access to SharePoint and OneDrive Documents

The content below is taken from the original ( Tracking Anonymous Access to SharePoint and OneDrive Documents), to continue reading please visit the site. Remember to respect the Author & Copyright.


Understanding Office 365 Sharing

Over the last few months, I’ve looked at various aspects of how guest users gain access to resources within Office 365 tenants and the information tenant administrators can use to track that access. We’ve considered the mechanics of SharePoint Online sharing, how to report Office 365 Groups and Teams with guests in their membership, and how to use the Office 365 audit log to discover the documents accessed by guests. In my last article in this area, I reviewed how to find out who creates guest accounts, including when a guest account is created because someone shares a document in a SharePoint Online or OneDrive for Business site.

Sharing via Cloudy Attachments

Hopefully the articles have helped throw some light into how to manage guest access to resources. To complete the picture, I want to look at the links created by Outlook when users add a “cloudy attachment” to email. These attachments are links to SharePoint Online or OneDrive for Business documents, with the idea being that it is better for recipients to access the document in situ instead of a private copy.

Cloudy attachments work very well. However, the link sent to recipients allows anonymous access to the document. In other words, anyone with the link can access the document. This isn’t a huge deal even if the message is forwarded because it replicates how regular attachments work. This situation is due to change when Outlook adopts the standard sharing link control for Office 365, but it’s what happens today.

Tenant administrators can track access to other shared documents. What I wanted to find out is how to discover the documents being shared via email and the actions taken against those documents.

Finding Anonymous Access Audit Events

Once again, the combination of Office 365 audit log and PowerShell gave the answer. The solution came in two parts: first, find out when anonymous links are used. Next, find out what happens to the document afterwards. For instance, did the recipient modify or download the document.

The first part is solved by searching the audit log for AnonymousLinkUsed operations. Office 365 captures these records when a recipient opens a document using an anonymous link, whether the link was sent as a cloudy attachment or when someone generates an “Anyone with the link can view” or “Anyone with the link can edit” share from SharePoint Online or OneDrive for Business.

Because we’re dealing with anonymous access, details of the user who uses the link are not logged, but their IP address is. We can therefore use that IP address to track subsequent actions by searching the audit log again for operations like FileDownloaded that took place within seven days of the link being used. Seven days is an arbitrary period chosen by me on the basis that if something doesn’t happen within that time, it’s probably not interesting.

Finding Actions by IP Address

After finding the second set of records, we filter them to look for records associated with anonymous access based on the SharePoint identifier assigned to the anonymous access. This is a value like urn:spo:anon#f93ba91b9fcff445a167b15625c3fd3fbfd98fc46e669ea1f676f1e366e77794 generated by SharePoint to identify the anonymous access through the link.

Outputting for Further Analysis

Once we’ve done our filtering, slicing, and dicing, we can output the data in something that makes further analysis easy. My go-to format is to export the data to CSV and use Excel or Power BI, but you can also browse the information in a grid by piping it to the Out-GridView cmdlet (Figure 1).

Figure 1: Anonymous access to SharePoint and OneDrive documents (image credit: Tony Redmond)

The Script

Here’s the PowerShell script to generate the data for analysis. You need to connect to Exchange Online to use the Search-UnifiedAuditLog cmdlet.

# Find out when an anonymous link is used by someone outside an Office 365 tenant to access SharePoint Online and OneDrive for Business documents
$StartDate = (Get-Date).AddDays(-90); $EndDate = (Get-Date) #Maximum search range for audit log for E3 users
CLS; Write-Host "Searching Office 365 Audit Records to find anonymous sharing activity"
$Records = (Search-UnifiedAuditLog -Operations AnonymousLinkUsed -StartDate $StartDate -EndDate $EndDate -ResultSize 1000)
If ($Records.Count -eq 0) {
    Write-Host "No anonymous share records found." }
Else {
    Write-Host "Processing" $Records.Count "audit records..."
    $Report = @() # Create output file for report
    # Scan each audit record to extract information
    ForEach ($Rec in $Records) {
      $AuditData = ConvertFrom-Json $Rec.Auditdata
      $ReportLine = [PSCustomObject][Ordered]@{
      TimeStamp = Get-Date($AuditData.CreationTime) -format g
      User      = $AuditData.UserId
      Action    = $AuditData.Operation
      Object    = $AuditData.ObjectId
      IPAddress = $AuditData.ClientIP
      Workload  = $AuditData.Workload
      Site      = $AuditData.SiteUrl
      FileName  = $AuditData.SourceFileName 
      SortTime  = $AuditData.CreationTime }
    $Report += $ReportLine }
  # Now that we have parsed the information for the link used audit records, let's track what happened to each link
  $RecNo = 0; CLS; $TotalRecs = $Report.Count
  ForEach ($R in $Report) {
     $RecNo++
     $ProgressBar = "Processing audit records for " + $R.FileName + " (" + $RecNo + " of " + $TotalRecs + ")" 
     Write-Progress -Activity "Checking Sharing Activity With Anonymous Links" -Status $ProgressBar -PercentComplete ($RecNo/$TotalRecs*100)
     $StartSearch = $R.TimeStamp; $EndSearch = (Get-Date $R.TimeStamp).AddDays(+7) # We'll search for any audit records in the 7 days after the link was used
     $AuditRecs = (Search-UnifiedAuditLog -StartDate $StartSearch -EndDate $EndSearch -IPAddresses $R.IPAddress -Operations FileAccessedExtended, FilePreviewed, FileModified, FileAccessed, FileDownloaded -ResultSize 100)
     Foreach ($AuditRec in $AuditRecs) {
       If ($AuditRec.UserIds -Like "*urn:spo:*") { # It's a continuation of anonymous access to a document
          $AuditData = ConvertFrom-Json $AuditRec.Auditdata
          $ReportLine = [PSCustomObject][Ordered]@{
            TimeStamp = Get-Date($AuditData.CreationTime) -format g
            User      = $AuditData.UserId
            Action    = $AuditData.Operation
            Object    = $AuditData.ObjectId
            IPAddress = $AuditData.ClientIP
            Workload  = $AuditData.Workload
            Site      = $AuditData.SiteUrl
            FileName  = $AuditData.SourceFileName 
            SortTime  = $AuditData.CreationTime }
          $Report += $ReportLine } # Add the line to the report only for anonymous access records
        }
}}
$Report | Sort FileName, IPAddress, User, SortTime | Export-CSV -NoTypeInformation "c:\Temp\AnonymousLinksUsed.CSV"
Write-Host "All done. Output file is available in c:\temp\AnonymousLinksUsed.Csv"
$Report | Sort FileName, IPAddress, User, SortTime -Unique | Select Timestamp, Action, Filename, IPaddress, Workload, Site | Out-Gridview

As usual, I don’t guarantee the code. All I can say is that it works for me.

Sharing is Caring

It’s great to be able to share so easily in so many ways with so many people outside your Office 365 tenant. It’s even better when you know how that sharing happens.

The post Tracking Anonymous Access to SharePoint and OneDrive Documents appeared first on Petri.

Amazon straightens up its IoT house, complete with virtual Alexa, ahead of Las Vegas shindig

The content below is taken from the original ( Amazon straightens up its IoT house, complete with virtual Alexa, ahead of Las Vegas shindig), to continue reading please visit the site. Remember to respect the Author & Copyright.

Coffee machines will listen to you if vendors implement it

AWS has unveiled a flurry of updates to its IoT platform, including secure tunnelling, fleet provisioning, Docker containers on edge devices, and Alexa voice support on devices with 50 per cent less power than was previously required.…

Getting Started with Azure Arc-Servers

The content below is taken from the original ( Getting Started with Azure Arc-Servers), to continue reading please visit the site. Remember to respect the Author & Copyright.

One of the biggest announcements (in my opinion) at Microsoft Ignite was the release of Azure Arc. This new Azure service stands to be a game-changer as it relates to managing any and all of your hybrid environments. In this article, we’ll set up Azure Arc to manage our on-prem workloads. Now, to be clear, Azure Arc was just released to public preview. With that being said, understand it has limited capabilities in its infancy. Be sure to keep an eye on added features as it matures. While at Ignite, I was speaking with one of the product team members and he said features will be coming fast and often.

What is Azure Arc?

If you don’t know what Azure Arc is, here’s a quick summary. Azure Arc provides the ability to manage your workloads, regardless of where they live, from a single dashboard. You can add servers from your own data center or any other cloud platform. Along with servers, Azure Arc can also be used for data services using Kubernetes. This article will focus on server management. Azure Arc brings Azure cloud services to these workloads, such as Role-Based Access Control, Azure Policy, and Azure Resource Manager, with more on the way.

Adding On-Premises Servers to Azure Arc

Getting your on-prem servers to appear in the Azure Arc portal is pretty straightforward. First, we need to make sure we have a few things checked off before we dive in.

Required Resource Providers

We need to register two resource providers to use Azure Arc for servers:
  • Microsoft.HybridCompute
  • Microsoft.GuestConfiguration

This can be done in the portal, through PowerShell or Azure CLI. We’ll be using PowerShell in this example.

Login-AzAccount
Register-AzResourceProvider -ProviderNamespace Microsoft.HybridCompute
Register-AzResourceProvider -ProviderNamespace Microsoft.GuestConfiguration

Once that’s taken care of, we can verify our settings using PowerShell as well.

get-azresourceprovider | where {$_.providernamespace -like '*hybridcompute*'} 
get-azresourceprovider | where {$_.providernamespace -like '*guestconfiguration*'}

And we can see whether or not the resource providers are registered.

We can also verify in the portal as well.

  1. Click on Subscriptions.
  2. Choose your subscription.
  3. Under settings, select Resource providers.
  4. Use the filter by name option to locate the GuestConfiguration and HybridCompute providers.

GuestConfiguration resource provider seen below.

HybridCompute resource provider seen below.

Adding Servers

Once we have that complete, we can add our servers by completing the following within the Azure portal.

  1. Type Azure Arc in the Search resources box at the top of the portal and press Enter (see below).
  2. Select Manage Servers while in the Azure Arc portal.
  3. In the Machines screen, click + Add to add a server.
  4. Choose Generate script under Add machines using interactive script.
  5. Select your subscription, resource group (or create new) and region.
  6. Choose the operating system your on-premises workload is running, either Windows or Linux. We’ll choose Windows.
  7. Click Review and generate script.
  8. At this point, you can either download or copy the script.

After you have either downloaded or copied the script, you’ll need to run these commands on the server you want managed with Azure Arc. The script downloads a lightweight agent and installs it on the server, which in turn associates the server with your subscription. Your server should appear in the Azure Arc portal after several minutes.

If you’re on-boarding a Linux server, you’ll have to copy the commands from the portal and execute on the server. They do not provide a script to download. The Linux commands perform the same process as the Windows commands by downloading and installing a lightweight agent on the server.

Once the server appears in Azure Arc, you have the ability to assign tags, apply Azure Policy as well as manage access through Role Based Access Control (RBAC) via Azure Active Directory.

Final Thoughts

Azure Arc is in public preview, which means we haven’t yet seen how far they intend to take this new cross-platform hybrid management service. The assurances from the product team lead me to believe we’ll be able to leverage most Azure services we use for our Azure VMs sooner rather than later. Let’s be honest, the companies that are 100% in the cloud and only one cloud are rare. Managing on-prem, AWS, and Azure instances is cumbersome. The idea of having a single platform-agnostic tool to manage them all is really a game changer. Truly, one ring to rule them all. For further information regarding Azure Arc for servers, check out the Microsoft Docs site here.

The post Getting Started with Azure Arc-Servers appeared first on Petri.

Over 50 products discontinued by Microsoft

The content below is taken from the original ( Over 50 products discontinued by Microsoft), to continue reading please visit the site. Remember to respect the Author & Copyright.


Microsoft sometimes discontinues or retires its products without providing enough information as to why they are doing it. There were some good products that were still in demand, but Microsoft refused to continue them. This post looks at discontinued as […]

This post Over 50 products discontinued by Microsoft is from TheWindowsClub.com.

AWS expands its IoT services, brings Alexa to devices with only 1MB of RAM

The content below is taken from the original ( AWS expands its IoT services, brings Alexa to devices with only 1MB of RAM), to continue reading please visit the site. Remember to respect the Author & Copyright.

AWS today announced a number of IoT-related updates that, for the most part, aim to make getting started with its IoT services easier, especially for companies that are trying to deploy a large fleet of devices. The marquee announcement, however, is about the Alexa Voice Service, which makes Amazon’s Alexa voice assistant available to hardware manufacturers who want to build it into their devices. These manufacturers can now create “Alexa built-in” devices with very low-powered chips and 1MB of RAM.

Until now, you needed at least 100MB of RAM and an ARM Cortex A-class processor. Now, the requirement for Alexa Voice Service integration for AWS IoT Core has come down to 1MB and a cheaper Cortex-M processor. With that, chances are you’ll see even more lightbulbs, light switches and other simple, single-purpose devices with Alexa functionality. You obviously can’t run a complex voice-recognition model and decision engine on a device like this, so all of the media retrieval, audio decoding, etc. is done in the cloud. All it needs to be able to do is detect the wake word to start the Alexa functionality, which is a comparably simple model.

“We now offload the vast majority of all of this to the cloud,” AWS IoT VP Dirk Didascalou told me. “So the device can be ultra dumb. The only thing that the device still needs to do is wake word detection. That still needs to be covered on the device.” Didascalou noted that with new, lower-powered processors from NXP and Qualcomm, OEMs can reduce their engineering bill of materials by up to 50 percent, which will only make this capability more attractive to many companies.

Didascalou believes we’ll see manufacturers in all kinds of areas use this new functionality, but most of it will likely be in the consumer space. “It just opens up the what we call the real ambient intelligence and ambient computing space,” he said. “Because now you don’t need to identify where’s my hub — you just speak to your environment and your environment can interact with you. I think that’s a massive step towards this ambient intelligence via Alexa.”

No cloud computing announcement these days would be complete without talking about containers. Today’s container announcement for AWS’ IoT services is that IoT Greengrass, the company’s main platform for extending AWS to edge devices, now offers support for Docker containers. The reason for this is pretty straightforward. The early idea of Greengrass was to have developers write Lambda functions for it. But as Didascalou told me, a lot of companies also wanted to bring legacy and third-party applications to Greengrass devices, as well as those written in languages that are not currently supported by Greengrass. Didascalou noted that this also means you can bring any container from the Docker Hub or any other Docker container registry to Greengrass now, too.

“The idea of Greengrass was, you build an application once. And whether you deploy it to the cloud or at the edge or hybrid, it doesn’t matter, because it’s the same programming model,” he explained. “But very many older applications use containers. And then, of course, you saying, okay, as a company, I don’t necessarily want to rewrite something that works.”

Another notable new feature is Stream Manager for Greengrass. Until now, developers had to cobble together their own solutions for managing data streams from edge devices, using Lambda functions. Now, with this new feature, they don’t have to reinvent the wheel every time they want to build a new solution for connection management and data retention policies, etc., but can instead rely on this new functionality to do that for them. It’s pre-integrated with AWS Kinesis and IoT Analytics, too.
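For a sense of what that looks like in practice, here is a hedged sketch of a Greengrass component or Lambda function using the Stream Manager SDK for Python (the stream_manager package); the class and method names reflect my recollection of that SDK and should be treated as assumptions, and the stream name and payload are placeholders.

from stream_manager import (
    MessageStreamDefinition,
    StrategyOnFull,
    StreamManagerClient,
)

# Connect to the local Stream Manager running on the Greengrass core device.
client = StreamManagerClient()

# Define a local stream; retention and rotation are handled by Stream Manager, and
# export to Kinesis or IoT Analytics is configured on the stream definition rather
# than hand-rolled in function code.
client.create_message_stream(
    MessageStreamDefinition(
        name="SensorReadings",
        strategy_on_full=StrategyOnFull.OverwriteOldestData,
    )
)

# Append a telemetry message to the stream.
client.append_message(stream_name="SensorReadings", data=b'{"temp": 21.5}')
client.close()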

Also new for AWS IoT Greengrass are fleet provisioning, which makes it easier for businesses to quickly set up lots of new devices automatically, as well as secure tunneling for AWS IoT Device Management, which makes it easier for developers to remote access into a device and troubleshoot them. In addition, AWS IoT Core now features configurable endpoints.

Free For Developers

The content below is taken from the original ( Free For Developers), to continue reading please visit the site. Remember to respect the Author & Copyright.

A Twitter follower tweeted this; it might be useful.

https://free-for.dev/#/

enjoy it!

submitted by /u/Michael_andreuzza to r/webdev

Introducing Flan Scan: Cloudflare’s Lightweight Network Vulnerability Scanner

The content below is taken from the original ( Introducing Flan Scan: Cloudflare’s Lightweight Network Vulnerability Scanner), to continue reading please visit the site. Remember to respect the Author & Copyright.


Today, we’re excited to open source Flan Scan, Cloudflare’s in-house lightweight network vulnerability scanner. Flan Scan is a thin wrapper around Nmap that converts this popular open source tool into a vulnerability scanner with the added benefit of easy deployment.

We created Flan Scan after two unsuccessful attempts at using “industry standard” scanners for our compliance scans. A little over a year ago, we were paying a big vendor for their scanner until we realized it was one of our highest security costs and many of its features were not relevant to our setup. It became clear we were not getting our money’s worth. Soon after, we switched to an open source scanner and took on the task of managing its complicated setup. That made it difficult to deploy to our entire fleet of more than 190 data centers.

We had a deadline at the end of Q3 to complete an internal scan for our compliance requirements but no tool that met our needs. Given our history with existing scanners, we decided to set off on our own and build a scanner that worked for our setup. To design Flan Scan, we worked closely with our auditors to understand the requirements of such a tool. We needed a scanner that could accurately detect the services on our network and then lookup those services in a database of CVEs to find vulnerabilities relevant to our services. Additionally, unlike other scanners we had tried, our tool had to be easy to deploy across our entire network.

We chose Nmap as our base scanner because, unlike other network scanners which sacrifice accuracy for speed, it prioritizes detecting services thereby reducing false positives. We also liked Nmap because of the Nmap Scripting Engine (NSE), which allows scripts to be run against the scan results. We found that the “vulners” script, available on NSE, mapped the detected services to relevant CVEs from a database, which is exactly what we needed.

The next step was to make the scanner easy to deploy while ensuring it outputted actionable and valuable results. We added three features to Flan Scan which helped package up Nmap into a user-friendly scanner that can be deployed across a large network.

  • Easy Deployment and Configuration – To create a lightweight scanner with easy configuration, we chose to run Flan Scan inside a Docker container. As a result, Flan Scan can be built and pushed to a Docker registry and maintains the flexibility to be configured at runtime. Flan Scan also includes sample Kubernetes configuration and deployment files with a few placeholders so you can get up and scanning quickly.
  • Pushing results to the Cloud – Flan Scan adds support for pushing results to a Google Cloud Storage Bucket or an S3 bucket. All you need to do is set a few environment variables and Flan Scan will do the rest. This makes it possible to run many scans across a large network and collect the results in one central location for processing.
  • Actionable Reports – Flan Scan generates actionable reports from Nmap’s output so you can quickly identify vulnerable services on your network, the applicable CVEs, and the IP addresses and ports where these services were found. The reports are useful for engineers following up on the results of the scan as well as auditors looking for evidence of compliance scans.
[Image: Sample run of Flan Scan from start to finish.]

How has Flan Scan improved Cloudflare’s network security?

By the end of Q3, not only had we completed our compliance scans, we also used Flan Scan to tangibly improve the security of our network. At Cloudflare, we pin the software version of some services in production because it allows us to prioritize upgrades by weighing the operational cost of upgrading against the improvements of the latest version. Flan Scan’s results revealed that our FreeIPA nodes, used to manage Linux users and hosts, were running an outdated version of Apache with several medium severity vulnerabilities. As a result, we prioritized their update. Flan Scan also found a vulnerable instance of PostgreSQL leftover from a performance dashboard that no longer exists.

Flan Scan is part of a larger effort to expand our vulnerability management program. We recently deployed osquery to our entire network to perform host-based vulnerability tracking. By complementing osquery’s findings with Flan Scan’s network scans we are working towards comprehensive visibility of the services running at our edge and their vulnerabilities. With two vulnerability trackers in place, we decided to build a tool to manage the increasing number of vulnerability  sources. Our tool sends alerts on new vulnerabilities, filters out false positives, and tracks remediated vulnerabilities. Flan Scan’s valuable security insights were a major impetus for creating this vulnerability tracking tool.

How does Flan Scan work?


The first step of Flan Scan is running an Nmap scan with service detection. Flan Scan’s default Nmap scan runs the following scans:

  1. ICMP ping scan – Nmap determines which of the IP addresses given are online.
  2. SYN scan – Nmap scans the 1000 most common ports of the IP addresses which responded to the ICMP ping. Nmap marks ports as open, closed, or filtered.
  3. Service detection scan – To detect which services are running on open ports Nmap performs TCP handshake and banner grabbing scans.

Other types of scanning such as UDP scanning and IPv6 addresses are also possible with Nmap. Flan Scan allows users to run these and any other extended features of Nmap by passing in Nmap flags at runtime.
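For reference, Flan Scan's default scan as described above corresponds roughly to the Nmap invocation below; this is a hedged approximation in Python rather than Flan Scan's actual source, and the target range is a placeholder (nmap must be installed, and -sS generally requires root).

import subprocess

targets = ["10.0.0.0/24"]           # Placeholder: the ranges you want to scan
extra_flags = []                    # e.g. ["-sU"] for UDP or ["-6"] for IPv6 targets

cmd = [
    "nmap",
    "-sS",                          # TCP SYN scan of the 1000 most common ports
    "-sV",                          # Service/version detection on open ports
    "--script", "vulners",          # Map detected services to known CVEs via vulners.com
    "-oX", "flan_results.xml",      # Structured XML output for the report-generation step
    *extra_flags,
    *targets,
]
subprocess.run(cmd, check=True)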

[Image: Sample Nmap output]

Flan Scan adds the “vulners” script tag in its default Nmap command to include in the output a list of vulnerabilities applicable to the services detected. The vulners script works by making API calls to a service run by vulners.com which returns any known vulnerabilities for the given service.

[Image: Sample Nmap output with Vulners script]

The next step of Flan Scan uses a Python script to convert the structured XML of Nmap’s output to an actionable report. The reports of the previous scanner we used listed each of the IP addresses scanned and presented the vulnerabilities applicable to that location. Since we had multiple IP addresses running the same service, the report would repeat the same list of vulnerabilities under each of these IP addresses. This meant scrolling back and forth on documents hundreds of pages long to obtain a list of all IP addresses with the same vulnerabilities. The results were impossible to digest.

Flan Scan’s results are structured around services. The report enumerates all vulnerable services, and beneath each one lists the relevant vulnerabilities and all IP addresses running that service. This structure makes the report shorter and more actionable, since the services that need to be remediated can be clearly identified. Flan Scan reports are made using LaTeX, because who doesn’t like nicely formatted reports that can be generated with a script? The raw LaTeX file that Flan Scan outputs can be converted to a beautiful PDF using tools like pdflatex or TeXShop.

Sample Flan Scan report
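To make that service-centric structure concrete, the following minimal Python sketch shows one way Nmap’s XML output could be regrouped by detected service before a report is rendered. It is not Flan Scan’s actual conversion script, and the input filename is a placeholder.

    # Minimal sketch: regroup Nmap XML results by detected service.
    # Not Flan Scan's actual script; "scan_results.xml" is a placeholder path.
    import xml.etree.ElementTree as ET
    from collections import defaultdict

    services = defaultdict(set)      # (product, version) -> set of "ip:port" locations

    root = ET.parse("scan_results.xml").getroot()
    for host in root.findall("host"):
        addr = host.find("address").get("addr")
        for port in host.findall("./ports/port"):
            state = port.find("state")
            svc = port.find("service")
            if state is None or state.get("state") != "open" or svc is None:
                continue
            # CVEs reported by the vulners script appear in <script id="vulners">
            # elements under each <port> and would be attached to the same entry.
            key = (svc.get("product", "unknown"), svc.get("version", ""))
            services[key].add(addr + ":" + port.get("portid"))

    for (product, version), locations in sorted(services.items()):
        print(product, version, "->", ", ".join(sorted(locations)))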

What’s next?

Cloudflare’s mission is to help build a better Internet for everyone, not just Internet giants who can afford to buy expensive tools. We’re open sourcing Flan Scan because we believe it shouldn’t cost tons of money to have strong network security.

You can get started running a vulnerability scan on your network in a few minutes by following the instructions on the README. We welcome contributions and suggestions from the community.

How to use multiple WhatsApp accounts on Windows using Altus

The content below is taken from the original ( How to use multiple WhatsApp accounts on Windows using Altus), to continue reading please visit the site. Remember to respect the Author & Copyright.

WhatsApp has become an integral part of everyday communication with friends, family, colleagues, etc. If you spend most of the time with your computer, and you have multiple WhatsApp accounts, you can check out this free software called Altus. Altus […]

This post How to use multiple WhatsApp accounts on Windows using Altus is from TheWindowsClub.com.

Introducing Azure Cost Management for partners

The content below is taken from the original ( Introducing Azure Cost Management for partners), to continue reading please visit the site. Remember to respect the Author & Copyright.

As a partner, you play a critical role in successfully planning and managing long-term cloud implementations for your customers. While the cloud grants the flexibility to scale infrastructure to changing needs, it can be challenging to control spend when cloud costs fluctuate dramatically with demand. This is where Azure Cost Management comes in: it helps you track and control cloud spend, prevent overspending, and make your cloud costs more predictable.

Today we are announcing general availability of Azure Cost Management for all Cloud Solution Provider (CSP) partners who have onboarded their customers to the new Microsoft Customer Agreement. With this update, partners and their customers can take advantage of the Azure Cost Management tools available to manage cloud spend, similar to the cost management capabilities available to pay-as-you-go (PAYG) and enterprise customers today.

This is the first in a series of periodic updates to cost management support for partners, enabling them to understand, analyze, dissect, and manage costs across all their customers and invoices.

With this update, CSP partners can use Azure Cost Management to:

  • Understand invoiced costs and associate the costs to the customer, subscriptions, resource groups, and services.
  • Get an intuitive view of Azure costs in cost analysis with capabilities to analyze costs by customer, subscription, resource group, resource, meter, service, and many other dimensions.
  • View resource costs that have Partner Earned Credit (PEC) applied in Cost Analysis.
  • Set up notifications and automation using programmatic budgets and alerts when costs exceed budgets.
  • Enable the Azure Resource Manager policy that provides customer access to Cost Management data. Customers can then view consumption cost data for their subscriptions using pay-as-you-go rates.

For more information, see Get Started with Azure Cost Management as a Partner.

Analyze costs by customer, subscription, tags, resource group or resource using cost analysis

Using cost analysis, partners can group and filter costs by customer, subscription, tags, resource group, resource, and reseller Microsoft Partner Network identifier (MPN ID), giving them increased visibility into costs for better cost control. Partners can also view and manage costs in the billing currency and in US dollars for billing scopes.

An image showing how you can group and filter costs in cost analysis.
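For partners who prefer to script this analysis, the same group-by view is exposed through the Cost Management Query REST API. The Python sketch below is illustrative only: the billing-account scope, api-version, and grouping dimension name are assumptions to verify against the current API reference, and acquiring the Azure AD bearer token is not shown.

    # Illustrative sketch of a Cost Management query grouped by subscription.
    # The scope, api-version, and dimension name are assumptions to check against
    # the current API reference; AZURE_TOKEN is assumed to hold a valid bearer token.
    import os
    import requests

    scope = "providers/Microsoft.Billing/billingAccounts/{billingAccountId}"  # placeholder
    url = ("https://management.azure.com/" + scope +
           "/providers/Microsoft.CostManagement/query?api-version=2019-11-01")

    body = {
        "type": "ActualCost",
        "timeframe": "MonthToDate",
        "dataset": {
            "granularity": "None",
            "aggregation": {"totalCost": {"name": "Cost", "function": "Sum"}},
            "grouping": [{"type": "Dimension", "name": "SubscriptionId"}],
        },
    }

    resp = requests.post(url, json=body,
                         headers={"Authorization": "Bearer " + os.environ["AZURE_TOKEN"]})
    resp.raise_for_status()
    print(resp.json())               # rows of cost totals, one per subscription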

Reconcile cost to an invoice

Partners can reconcile costs by invoice across their customers and their subscriptions to understand the pre-tax costs that contributed to the invoice.

An image showing how cost analysis can help analyze Azure spend to reconcile cost.

You can analyze Azure spend for the customers you support and their subscriptions and resources. With this enhanced visibility into your customers’ costs, you can use spending patterns to enforce cost control mechanisms, like budgets and alerts, to manage costs with continued and increased accountability.

Enable cost management at retail rates for your customers

With this update, a partner can also enable cost management features, initially at pay-as-you-go rates, for customers and resellers who have access to subscriptions in the customer’s tenant. As a partner, if you decide to enable cost management for users with access to a subscription, they will have the same capabilities to analyze the services they consume and to set budgets that control costs computed at pay-as-you-go prices for Azure consumed services. This is just the first of these updates; we have features planned for the first half of 2020 that will enable cost management for customers at prices the partner can set by applying a markup on the pay-as-you-go prices.

Partners can set a policy to enable cost management for users with access to an Azure subscription to view costs at retail rates for a specific customer.

An image showing how partners can set a policy to view costs at retail rates for a specific customer.

If the policy is enabled for subscriptions in the customer’s tenant, users with role-based access control (RBAC) access to the subscription can now manage Azure consumption costs at retail prices.

An image showing how customers with RBAC access can manage Azure consumption at retail prices.

Set up programmatic budgets and alerts to automate and notify when costs exceed a threshold

As a partner, you can set up budgets and alerts to send notifications to specified email recipients when the cost threshold is exceeded. In the partner tenant, you can set up budgets for costs as invoiced to the partner. You can also set up monthly, quarterly, or annual budgets across all your customers, or for a specific customer, and filter by subscription, resource, reseller MPN ID, or resource group.

An image showing how you can set up budgets and alerts.

Any user with RBAC access to a subscription or resource group can also set up budgets and alerts for Azure consumption costs at retail rates in the customer tenant if the policy for cost visibility has been enabled for the customer.

An image showing how users can create budgets.

When a budget is created for a subscription or resource group in the customer tenant, you can also configure it to call an action group. The action group can perform a variety of different actions when your budget threshold is met. For more information about action groups, see Create and manage action groups in the Azure portal. For more information about using budget-based automation with action groups, see Manage costs with Azure budgets.

All the experiences that we provide natively in Azure Cost Management are also available as REST APIs for building automated cost management workflows.
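As a hedged example of that automation path, the Python sketch below creates a monthly budget with an 80% email alert through the Budgets REST API. The scope, api-version, property names, dates, and recipient address are assumptions or placeholders to confirm against the current API reference.

    # Illustrative sketch of creating a budget with an alert; the scope, api-version,
    # and property names should be verified against the current Budgets API reference.
    import os
    import requests

    scope = "subscriptions/{subscriptionId}"  # placeholder subscription scope
    url = ("https://management.azure.com/" + scope +
           "/providers/Microsoft.Consumption/budgets/monthly-budget"
           "?api-version=2019-10-01")

    budget = {
        "properties": {
            "category": "Cost",
            "amount": 1000,                   # placeholder limit in the billing currency
            "timeGrain": "Monthly",
            "timePeriod": {"startDate": "2020-01-01T00:00:00Z",
                           "endDate": "2020-12-31T00:00:00Z"},
            "notifications": {
                "Actual_GreaterThan_80_Percent": {
                    "enabled": True,
                    "operator": "GreaterThan",
                    "threshold": 80,          # percent of the budget amount
                    "contactEmails": ["finance@example.com"],  # placeholder recipient
                    # "contactGroups": [...]  # optionally, action group resource IDs
                }
            },
        }
    }

    resp = requests.put(url, json=budget,
                        headers={"Authorization": "Bearer " + os.environ["AZURE_TOKEN"]})
    resp.raise_for_status()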

Coming soon

  • We will be enabling cost recommendations and optimization suggestions for better savings and efficiency in managing Azure costs.
  • We will launch Azure Cost Management at retail rates for customers who are not on the Microsoft Customer Agreement and are supported by CSP partners.
  • Showback features that enable partners to charge a markup on consumption costs are also being planned for 2020.

Try Azure Cost Management for partners today! It is natively available in the Azure portal for all partners who have onboarded customers to the new Microsoft Customer Agreement.

Become a certified IoT business leader

The content below is taken from the original ( Become a certified IoT business leader), to continue reading please visit the site. Remember to respect the Author & Copyright.

IDG Insider Pro and CertNexus are partnering to offer business leaders a course and associated credential to boost collaboration and drive informed IoT business decisions.

Smart Compose is coming to Google Docs

The content below is taken from the original ( Smart Compose is coming to Google Docs), to continue reading please visit the site. Remember to respect the Author & Copyright.

At its Cloud Next event in London, Google Cloud CEO Thomas Kurian today announced that Smart Compose, the AI-powered feature that currently tries to complete phrases and sentences for you in Gmail, is also coming to G Suite’s Google Docs soon. For now, though, your G Suite admin has to sign up for the beta to try it and it’s only available in English.

Google says that, in total, Smart Compose in Gmail already saves people from typing about 2 billion characters per week. At least in my own experience, it also works surprisingly well and has only gotten better since launch (as one would expect from a product that learns from the individual and collective behavior of its users). It remains to be seen how well this same technique works for longer texts, but even longer documents are often quite formulaic, so the algorithm should still work quite well there, too.

Google first announced Smart Compose in May 2018, as part of its I/O developer conference. It builds upon the same machine learning technology Google developed for its Smart Reply feature. The company then rolled out Smart Compose to all G Suite and private Gmail users, starting in July 2018, and later added support for mobile, too.