I build a browser extension for AWS

The content below is taken from the original ( I build a browser extension for AWS), to continue reading please visit the site. Remember to respect the Author & Copyright.

Hey there AWS users,

I read this blog post the other day: [Blog Post](https://www.expeditedssl.com/aws-in-plain-english). While reading it I thought, how awesome would it be to have an extension that replaces those confusing AWS names with the plain-English names written up in that post? So I decided I’d build it.

The extension is publicly available for Firefox, and it may come to the Chrome Web Store later as well.

You can check it out here:

Github: https://github.com/nigelbarink/AWS-in-plain-english

Firefox Add-ons: https://addons.mozilla.org/en-US/firefox/addon/aws-in-plain-english/

submitted by /u/Niigel_cyborking to r/aws

Fund Your Favorite Open-Source Projects Using Github’s New Sponsorship Program

The content below is taken from the original ( Fund Your Favorite Open-Source Projects Using Github’s New Sponsorship Program), to continue reading please visit the site. Remember to respect the Author & Copyright.

We’re big fans of open source software and the ethos of freedom, security, and transparency that often drives such projects. But software development and upkeep are not cheap, and even the most capable open source developers need help now and again. Luckily, Github just made it a lot easier to directly support…

Read more…

Informal meeting in London – 20th May

The content below is taken from the original ( Informal meeting in London – 20th May), to continue reading please visit the site. Remember to respect the Author & Copyright.

The next meeting of the RISC OS User Group of London (ROUGOL) will take place on Monday, 20th May – tomorrow evening – and this month rather than a formal speaker, the meeting will take the form of an open […]

SharePoint Page Templates Have Finally Arrived, Here’s How to Use Them

The content below is taken from the original ( SharePoint Page Templates Have Finally Arrived, Here’s How to Use Them), to continue reading please visit the site. Remember to respect the Author & Copyright.


Since the introduction of SharePoint news for the modern SharePoint experience, our customers have been asking for the ability to create templates. Unfortunately, this option wasn’t available. There was a workaround, though.

You could create a copy of an existing news post:

The major downside of this workaround is usability. Business users have to browse to an existing news post, which isn’t always easy to find, just to create a duplicate. What they want is to create a news post based on a template straight from the add-a-news-post experience. You also have to realize that it isn’t easy for less technical folks to build a news post with all the available web parts and sections. Although the experience can be user-friendly, that doesn’t mean everyone is able to create beautiful news pages.

Templates will finally solve this problem, and they are starting to become available. This new SharePoint feature is, for now, only available for Targeted Release tenants. Let’s take a closer look at the new feature.

Three templates

The create a news post menu has a new look & feel:

 

Microsoft provides us with three templates:

  1. Blank
  2. Visual
  3. Basic text

You can create a news post based on one of these three templates. Let’s use the visual template:

 

Imagine you want to provide your sales department with a template to quickly and easily announce sales-related items. You start with the canvas and build your template; this is easily accomplished using the out-of-the-box SharePoint web parts. For example:

Are you done? Click the arrow next to Save as draft and then click Save as template:

Confirm your new template and you are ready to go! Let’s go back to the home page and create a news post:

 

After completing these steps, the template is now available on the SharePoint site where we created it. The template is saved within a brand new folder in the Site Pages library. Guess what it’s called? You guessed right: it’s called Templates.

Conclusion

I really like the new template feature. That said, there is one important thing you have to be aware of: templates are created at the site level. There is, at least for now, no SharePoint hub integration available. You would have to create a page template for each site. I am very positive this is high on the priority list for Microsoft. Let’s keep our eyes open with the SharePoint Conference coming up near the end of May.

The post SharePoint Page Templates Have Finally Arrived, Here’s How to Use Them appeared first on Petri.

Arduino Unveils New Nano Family of Boards

The content below is taken from the original ( Arduino Unveils New Nano Family of Boards), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s Maker Faire Bay Area and that means new flavors of Arduino! The Italian boardmakers announced today a new family of affordable, low-power “Nano” sized microcontroller boards based on powerful Arm Cortex processors that may give you pause before buying another Chinese knockoff — you get Arduino quality starting at […]

Read more on MAKE

The post Arduino Unveils New Nano Family of Boards appeared first on Make: DIY Projects and Ideas for Makers.

Best free Automation software for Windows 10

The content below is taken from the original ( Best free Automation software for Windows 10), to continue reading please visit the site. Remember to respect the Author & Copyright.

Computers have changed the way we live our lives. They have found a place for themselves in every walk of our life. In the recent past, artificial intelligence and machine learning have given way to increased automation. Despite the development, […]

This post Best free Automation software for Windows 10 is from TheWindowsClub.com.

Bash script to create CSV file of all media in a folder (recursively)

The content below is taken from the original ( Bash script to create CSV file of all media in a folder (recursively)), to continue reading please visit the site. Remember to respect the Author & Copyright.

Realizing that some of my media (mostly older files) are in unacceptably poor quality, I was looking for something to scan my media to find said files, so I could know what to attempt to replace going forward. Couldn’t find anything, so I rolled my own. Thought it might be useful to some.

Save the script as mi2csv (or whatever you want) somewhere in your executable path, and make it executable: chmod +x mi2csv

Requires mediainfo to be installed: sudo apt install mediainfo, or the equivalent on your system.

Usage:

mi2csv "/path/to/media" "mycsvfile.csv"

It will scan the provided media path for all video and audio files and create the given CSV file with the following layout, suitable for importing into your favorite spreadsheet application:

filename, ext, size, duration, width, height, fps, aspect
"/plex/movies/Catwoman/Catwoman.avi", avi, 1466207024, 01:39:57.680, 720, 304, 25, 2.35:1

Modifications to these fields should be fairly simple.

Hope someone finds it useful. Comments or recommendations are welcomed.

#!/bin/bash

FormatFile="/tmp/inform.txt"
FolderRoot="$1"
CSVFile="$2"
ExtList=".*\.\(mp4\|m4v\|m4p\|mkv\|avi\|asf\|wmv\|flv\|vob\|ts\|mpg\|mpeg\|mts\|m2ts\|webm\|ogv\|mov\|3gp\|3g2\|mp3\|m4a\|ogg\|flac\)"

function cleanup () {
    rm "$FormatFile"
}
trap "cleanup" EXIT

mi=$(which mediainfo)
if [ $? -eq 1 ]; then
    echo "Failed to locate mediainfo on this system. This script won't work without it."
    exit
fi

# Create the Format file used by mediainfo
echo 'General;""%CompleteName%"", %FileExtension%, %FileSize%, %Duration/String3%, ' > "$FormatFile"
echo 'Video;%Width%, %Height%, %FrameRate%, %DisplayAspectRatio/String%' >> "$FormatFile"

# Create CSV file with headers
echo "filename, ext, size, duration, width, height, fps, aspect" > "$CSVFile"

find "$FolderRoot" -type f -iregex "$ExtList" -exec "$mi" --Inform="file://$FormatFile" {} \; >> "$CSVFile"

exit

submitted by /u/haz3lnut to r/PleX

How to use GoPro as a Security Camera

The content below is taken from the original ( How to use GoPro as a Security Camera), to continue reading please visit the site. Remember to respect the Author & Copyright.

GoPro as a Security Camera

GoPro cameras are widely used pocket-sized cameras, popular among adventurers, surfers, athletes, travelers and bloggers for action photography. They are great for rugged use and ideal for capturing video when you are at the beach, in the mountains, in the snow, diving into […]

This post How to use GoPro as a Security Camera is from TheWindowsClub.com.

Windows 10 Release information details, Versions, Known & resolved issues and more

The content below is taken from the original ( Windows 10 Release information details, Versions, Known & resolved issues and more), to continue reading please visit the site. Remember to respect the Author & Copyright.

Windows 10 release information

Over the past few years, Microsoft has been more transparent than ever. The Redmond giant has published a page which lists all known issues, information on when they were resolved, and the support time left […]

This post Windows 10 Release information details, Versions, Known & resolved issues and more is from TheWindowsClub.com.

How SunPower is using Google Cloud to create a sustainable business

The content below is taken from the original ( How SunPower is using Google Cloud to create a sustainable business), to continue reading please visit the site. Remember to respect the Author & Copyright.

At Google, we have spent the past 20 years building and expanding our infrastructure to support billions of users and sustainability has been a key consideration throughout this journey. As our cloud business has taken off, we have continued to scale our operations to be the most sustainable cloud in the world. In 2017, we became the first company of our size to match 100% of our annual electricity consumption with purchases of new renewable energy. In fact, we have purchased over 3 gigawatts (GW) of renewables to-date, making us the largest corporate purchaser of renewable energy on the planet.

Our commitment to be the most sustainable cloud provider makes our work with SunPower even more impactful. Working together, we want to make it easy for homeowners and businesses to positively impact our planet.

SunPower makes the world’s most efficient solar panels, which are distributed worldwide to residential and commercial customers. Since its beginning in 1985, the company has installed over 10 GW of solar panels, which have cumulatively offset about 40 million metric tons of carbon dioxide. To put that into perspective, that is the same amount of carbon dioxide nine million cars produce in a year.

Even with this impressive progress, rooftop solar design can still be a complicated process:

  • Potential solar buyers spend a significant amount of time online researching solar panels, and understanding potential savings is challenging.

  • Once engaged with a provider, the design is a manual, time-intensive process and relies on the identification and understanding of factors unique to each home. These include chimneys or vents, legally-mandated access walkways, and the amount of sunlight exposure for every part of the roof.

At their current pace, SunPower’s solar designers would need over a century to create optimized systems and calculate cost savings for the 100 million future solar homes in the United States. By partnering with Google Cloud, SunPower significantly changed this timeline by developing Instant Design, a technology that allows homeowners and businesses to create their own design in seconds. This technology leverages Google Cloud in three important ways.

  • First, Instant Design uses Google Project Sunroof for access to both satellite and digital surface (DSM) data. By using the 1 petabyte of Sunroof data and imagery around the world, along with SunPower’s database of manually generated designs as a base, Instant Design can easily develop a model through a quick process of training, validation, and analyzing test sets.

  • Second, once SunPower built a satisfactory proof of concept, they leveraged Google Cloud’s AI Platform to iterate and improve upon their machine learning models and  quickly integrate them with their web application.

  • Third, Google Cloud allows the SunPower team to choose the processing power that best fits their needs, and can easily combine technologies for optimal performance. SunPower is using a combination of CPUs, GPUs, and Cloud TPUs to put the “instant” in Instant Design.

Our goal is to help SunPower empower their customers to make the transition to solar panels seamless. With the help of Google Cloud, homeowners can create their own design in seconds, which improves their buying experience, reduces barriers to going solar, and increases solar adoption on a larger scale.

At our Google Cloud Next ‘19 conference last month, Jacob Wachman, vice president of Digital Product and Engineering at SunPower, explained how Instant Design’s use of Google Cloud reflects the best of machine learning by providing applications that can improve the human condition and the health of our environment (see video here). We’re honored that SunPower has partnered with us to develop a technology that can advance our larger goal of a more sustainable future. Instant Design rolls out this summer and we’re excited to continue our work with the SunPower team.

More information on how SunPower is leveraging Google Cloud Platform can be found here. If you’re interested in how we are working with SunPower and other organizations across the globe to build a more sustainable future, check out cloud.google.com/sustainability.

API design: Why you should use links, not keys, to represent relationships in APIs

The content below is taken from the original ( API design: Why you should use links, not keys, to represent relationships in APIs), to continue reading please visit the site. Remember to respect the Author & Copyright.

When it comes to information modeling, how to denote the relation or relationship between two entities is a key question. Describing the patterns that we see in the real world in terms of entities and their relationships is a fundamental idea that goes back at least as far as the ancient Greeks, and is also the foundation of how we think about information in IT systems today.

For example, relational database technology represents relationships using a foreign key, a value stored in one row in a database table that identifies another row, either in a different table or the same table.

 Expressing relationships is very important in APIs too. For example, in a retailer’s API, the entities of the information model might be customers, orders, catalog items, carts, and so on. The API expresses which customer an order is for, or which catalog items are in a cart. A banking API expresses which customer an account belongs to or which account each credit or debit applies to.

The most common way that API developers express relationships is to expose database keys, or proxies for them, in the fields of the entities they expose. However, at least for web APIs, that approach has several disadvantages over the alternative: the web link.

Standardized by the Internet Engineering Task Force (IETF), you can think of a web link as a way of representing relationships on the web. The best-known web links are of course those that appear in HTML web pages expressed using the link or anchor elements, or in HTTP headers. But links can appear in API resources too, and using them instead of foreign keys significantly reduces the amount of information that has to be separately documented by the API provider and learned by the user.

A link is an element in one web resource that includes a reference to another resource along with the name of the relationship between the two resources. The reference to the other entity is written using a special format called Uniform Resource Identifier (URI), for which there is an IETF standard. The standard uses the word ‘resource’ to mean any entity that is referenced by a URI. The relationship name in a link can be thought of as being analogous to the column name of a relational database foreign key column, and the URI in the link is analogous to the foreign key value. By far the most useful URIs are the ones that can be used to get information about the referenced resource using a standard web protocol—such URIs are called Uniform Resource Locators (URLs)—and by far the most important kind of URL for APIs is the HTTP URL.

While links aren’t widely used in APIs, some very prominent web APIs use links based on HTTP URLs to represent relationships, for example the Google Drive API and the GitHub API. Why is that? In this post, I’ll show what using API foreign keys looks like in practice, explain its disadvantages compared to the use of links, and show you how to convert that design to one that uses links.

Representing relationships using foreign keys
Consider the popular pedagogic “pet store” application. The application stores electronic records to track pets and their owners. Pets have attributes like name, species and breed. Owners have names and addresses. Each pet has a relationship to its owner—the inverse of the relationship defines the pets owned by a particular owner.

In a typical key-based design the pet store application’s API makes two resources available that look like this:

Representing relationships using foreign keys.png
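In outline, the two key-based resources might look something like this (a sketch; the field names and values beyond those discussed in the text are illustrative):

{ "id": "12345", "kind": "Pet", "name": "Lassie", "owner": "98765" }

{ "id": "98765", "kind": "Person", "name": "Joe" }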

The relationship between Lassie and Joe is expressed in the representation of Lassie using the “owner” name/value pair. The inverse of the relationship is not expressed. The “owner” value, “98765,” is a foreign key. It is likely that it really is a database foreign key—that is, it is the value of the primary key of some row in some database table—but even if the API implementation has transformed the key values a bit, it still has the general characteristics of a foreign key.

The value “98765” is of limited direct use to a client. For the most common uses, the client needs to compose a URL using this value, and the API documentation needs to describe a formula for performing this transformation. This is most commonly done by defining a URI template, like this:

/people/{person_id}

The inverse of the relationship—the pets belonging to an owner—can also be exposed in the API by implementing and documenting one of the following URI templates (the difference between the two is a question of style, not substance):
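For example, the two templates might look like this (the exact paths are illustrative, not prescribed):

/people/{person_id}/pets
/pets?owner={person_id}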

APIs that are designed in this way usually require many URI templates to be defined and documented. The most popular language for documenting these templates for APIs isn’t the one defined in the IETF specification—it’s OpenAPI (formerly known as Swagger). Unfortunately, OpenAPI and similar offerings do not provide a way to specify which field values can be plugged into which templates, so some amount of natural language documentation from the provider or guesswork by the client is also required.

In summary, although this style is common, it requires the provider to document, and the client to learn and use, a significant number of URI templates whose usage is not perfectly described by current API specification languages. Fortunately there’s a better option.

Representing relationships using links
Imagine the resources above were modified to look like this:

Representing relationships using links.png
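In outline, the link-based versions of the same two resources might look like this (again a sketch with illustrative values; the query URL used for "pets" is just one plausible form):

{ "self": "/pets/12345", "kind": "Pet", "name": "Lassie", "owner": "/people/98765" }

{ "self": "/people/98765", "kind": "Person", "name": "Joe", "pets": "/pets?owner=/people/98765" }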

The primary difference is that the relationships are expressed using links, rather than foreign key values. In these examples, the links are expressed using simple JSON name/value pairs (see the section below for a discussion of other approaches to writing links in JSON).

Note also that the inverse relationship of the pet to its owner has been made explicit by adding the “pets” field to the representation of Joe.

Changing “id” to “self” isn’t really necessary or significant, but it’s a common convention to use “self” to identify the resource whose attributes and relationships are specified by the other name/value pairs in the same JSON object. “self” is the name registered at IANA for this purpose.

Viewed from an implementation point of view, replacing all the database keys with links is a fairly simple change—the server converted the database foreign keys into URLs so the client didn’t have to—but it significantly simplifies the API and reduces the coupling of the client and the server. Many URI templates that were essential for the first design are no longer required and can be removed from the API specification and documentation.

The server is now free to change the format of new URLs at any time without affecting clients (of course, the server must continue to honor all previously-issued URLs). The URL passed out to the client by the server will have to include the primary key of the entity in a database plus some routing information, but because the client just echoes the URL back to the server and the client is never required to parse the URL, clients do not have to know the format of the URL. This reduces coupling between the client and server. Servers can even obfuscate their URLs with base64 or similar encoding if they want to emphasize to clients that they should not make assumptions about URL formats or infer meaning from them.

In the example above, I used a relative form of the URIs in the links, for example /people/98765. It might have been slightly more convenient for the client (although less convenient for the formatting of this blog post), if I had expressed the URIs in absolute form, e.g., https://petstore.example.com/people/98765. Clients only need to know the standard rules of URIs defined in the IETF specifications to convert between these two URI forms, so which form you choose to use is not as important as you might at first assume. Contrast this with the conversion from foreign key to URL described previously, which requires knowledge that is specific to the pet store API. Relative URLs have some advantages for server implementers, as described below, but absolute URLs are probably more convenient for most clients, which is perhaps why Google Drive and GitHub APIs use absolute URLs.

In short, using links instead of foreign keys to express relationships in APIs reduces the amount of information a client needs to know to use an API, and reduces the ways in which clients and servers are coupled to each other.

Caveats
Here are some things you should think about before using links.

Many API implementations have reverse proxies in front of them for security, load-balancing, and other reasons. Some proxies like to rewrite URLs. When an API uses foreign keys to represent relationships, the only URL that has to be rewritten by a proxy is the main URL of the request. In HTTP, that URL is split between the address line (the first header line) and the host header.

In an API that uses links to express relationships, there will be other URLs in the headers and bodies of both the request and the response that would also need to be rewritten. There are a few different ways of dealing with this:

  1. Don’t rewrite URLs in proxies. I try to avoid URL rewriting, but this may not be possible in your environment.

  2. In the proxy, be careful to find and map all URLs wherever they appear in the header or body of the request and response. I have never done this, because it seems to me to be difficult, error-prone, and inefficient, but others may have done it.

  3. Write all links relatively. In addition to allowing proxies some ability to rewrite URLs, relative URLs may make it easier to use the same code in test and production, because the code does not have to be configured with knowledge of its own host name. Writing links using relative URLs with a single leading slash, as I showed in the example above, has few downsides for the server or the client, but it only allows the proxy to change the host name (more precisely, the parts of the URL called the scheme and authority), not the path. Depending on the design of your URLs, you could allow proxies some ability to rewrite paths if you are willing to write links using relative URLs with no leading slashes, but I have never done this because I think it would be complicated for servers to write those URLs reliably. Relative URLs without leading slashes are also more difficult for clients to use—they need to use a standards-compliant library rather than simple string concatenation to handle those URLs, and they need to be careful to understand and preserve the base URL. Using a standards-compliant library to handle URLs is good practice for clients anyway, but many don’t.

Using links may also cause you to re-examine how you do API versioning. Many APIs like to put version numbers in URLs, like this:
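For example, matching the /v1/people/98765 form discussed later in this section:

/v1/people/98765
/v2/people/98765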

This is the kind of versioning where the data for a single resource can be viewed in more than one “format” at the same time—these are not the sort of versions that replace each other in time sequence as edits are made.

This is closely analogous to being able to see the same web document in different natural languages, for which there is a web standard; it is a pity there isn’t a similar one for versions. By giving each version its own URL, you raise each version to the status of a full web resource. There is nothing wrong with “version URLs” like these, but they are not suitable for expressing links. If a client asks for Lassie in the version 2 format, it does not mean that they also want the version 2 format of Lassie’s owner, Joe, so the server can’t pick which version number to put in a link. There may not even be a version 2 format for owners. It also doesn’t make conceptual sense to use the URL of a particular version in links—Lassie is not owned by a specific version of Joe, she is owned by Joe himself. So, even if you expose a URL of the form /v1/people/98765 to identify a specific version of Joe, you should also expose the URL /people/98765 to identify Joe himself and use the latter in links. Another option is to define only the URL /people/98765 and allow clients to request a specific version by including a request header. There is no standard for this header, but calling it Accept-Version would fit well with the naming of the standard headers. I personally prefer the approach of using a header for versioning and avoiding URLs with version numbers, but URLs with version numbers are popular, and I often implement both a header and “version URLs” because it’s easier to do both than argue about it. For more on API versioning, check out this blog post.

You might still need to document some URL templates
In most web APIs, the URL of a new resource is allocated by the server when the resource is created using POST. If you use this method for creation and you are using links for relationships, you do not need to publish a URI template for the URIs of these resources. However, some APIs allow the client to control the URL of a new resource. Letting clients control the URL of new resources makes many patterns of API scripting much easier for client programmers, and it also supports scenarios where an API is used to synchronize an information model with an external information source. HTTP has a special method for this purpose: PUT. PUT means “create the resource at this URL if it does not already exist, otherwise update it”1. If your API allows clients to create new entities using PUT, you have to document the rules for composing new URLs, probably by including a URI template in the API specification. You can also allow clients partial control of URLs by including a primary key-like value in the body or headers of a POST. This doesn’t require a URI template for the POST itself, but the client will still need to learn a URI template to take advantage of the resulting predictability of URIs.

The other place where it makes sense to document URL templates is when the API allows clients to encode queries in URLs. Not every API lets you query its resources, but this can be a very useful feature for clients, and it is natural to let clients encode queries in URLs and use GET to retrieve the result. The following example shows why.

In the example above we included the following name/value pair in the representation of Joe:
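In the sketch above this was written as follows (the query syntax is illustrative):

"pets": "/pets?owner=/people/98765"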

The client doesn’t have to know anything about the structure of this URL, beyond what’s written in standard specifications, to use it. This means that a client can get the list of Joe’s pets from this link without learning any query language, and without the API having to document its URL formats—but only if the client first does a GET on /people/98765. If, in addition, the pet store API documents a query capability, the client can compose the same or equivalent query URL to retrieve the pets for an owner without first retrieving the owner—it is sufficient to know the owner’s URI. Perhaps more importantly, the client can also form queries like the following ones that would otherwise not be possible:

The URI specification describes a portion of the HTTP URL for this purpose called the query component—the portion of the URL after the first “?” and before the first “#”. The style of query URI that I prefer always puts client-specified queries in the query component of the URI, but it’s also permissible to express client queries in the path portion of a URL. In either case, you need to describe to clients how to compose these URLs—you are effectively designing and documenting a query language specific to your API. Of course, you can also allow clients to put queries in the request body rather than the URL and use the POST method instead of GET. Since there are practical limits on the size of a URL—anything over 4k bytes is tempting fate—it is a good practice to support POST for queries even if you also support GET.

Because query is such a useful feature in APIs, and because designing and implementing query languages is not easy, technologies like GraphQL have emerged. I have never used GraphQL, so I can’t endorse it, but you may want to evaluate it as an alternative to designing and implementing your own API query capability. API query capabilities, including GraphQL, are best used as a complement to a standard HTTP API for reading and writing resources, not an alternative.

And another thing… What’s the best way to write links in JSON?
Unlike HTML, JSON has no built-in mechanism for expressing links. Many people have opinions on how links should be expressed in JSON and some have published their opinions in more or less official-looking documents, but there is no standard ratified by a recognized standards organization at the time of writing. In the examples above, I used simple JSON name/value pairs to express links—this is my preferred style, and is also the style used by Google Drive and GitHub. Another style that you will likely encounter looks like this:
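One widely seen variant of this kind of style (shown here purely as an illustration; HAL is a well-known example) groups links into a separate structure:

{
  "self": "/pets/12345",
  "kind": "Pet",
  "name": "Lassie",
  "_links": {
    "owner": { "href": "/people/98765" }
  }
}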

I personally don’t see the merits of this style, but several variants of it have achieved some level of popularity.

There is another style for links in JSON that I do like, which looks like this:
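A sketch of a style along these lines wraps the URL in a small JSON object (field names are illustrative):

{
  "self": "/pets/12345",
  "kind": "Pet",
  "name": "Lassie",
  "owner": { "self": "/people/98765" }
}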

The benefit of this style is that it makes it explicit that “/people/98765” is a URL and not just a string. I learned this pattern from RDF/JSON. One reason to adopt this pattern is that you will probably have to use it anyway whenever you have to show information about one resource nested inside another, as in the following example, and using it everywhere gives a nice uniformity:
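For instance, a sketch in which some of the owner’s information is nested inside the pet (again with illustrative fields):

{
  "self": "/pets/12345",
  "kind": "Pet",
  "name": "Lassie",
  "owner": {
    "self": "/people/98765",
    "name": "Joe"
  }
}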

For further ideas on how best to use JSON to represent data, see Terrifically Simple JSON.

Finally, what’s the difference between an attribute and a relationship?
I think most people would agree with the statement that JSON does not have a built-in mechanism for expressing links, but there is a way of thinking about JSON that says otherwise. Consider this JSON:
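Presumably something like this (the exact resource is not important):

{ "shoeSize": 10 }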

A common view is that shoeSize is an attribute, not a relationship, and 10 is a value, not an entity. However, it is also reasonable to say that the string ‘10’ is in fact a reference, written in a special notation for writing references to numbers, to the eleventh whole number, which itself is an entity. If the eleventh whole number is a perfectly good entity, and the string ‘10’ is just a reference to it, then the name/value pair ‘"shoeSize": 10’ is conceptually a link, even though it doesn’t use URIs.

The same argument can be made for booleans and strings, so all JSON name/value pairs can be viewed as links. If you think this way of looking at JSON makes sense, then it’s natural to use simple JSON name/value pairs for links to entities that are referenced using URLs in addition to those that are referenced using JSON’s built-in reference notations for numbers, strings, booleans and null.

This argument says more generally that there is no fundamental difference between attributes and relationships; attributes are just relationships between an entity and an abstract or conceptual entity like a number or color that has historically been treated specially. Admittedly, this is a rather abstract way of looking at the world—if you show most people a black cat, and ask them how many objects they see, they will say one. Not many would say they see two objects—a cat, and the color black—and a relationship between them.

Links are simply better
Web APIs that pass out database keys rather than links are harder to learn and harder to use for clients. They also couple clients and servers more tightly together by requiring more shared knowledge and so they require more documentation to be written and read. Their only advantage is that because they are so common, programmers have become familiar with them and know how to produce them and consume them. If you strive to offer your clients high-quality APIs that don’t require a ton of documentation and maximize independence of clients from servers, think about exposing URLs rather than database keys in your web APIs.

For more on API design, read the eBook “Web API Design: The Missing Link.”

1. This meaning can be refined using the If-Match and If-None-Match headers

Play the original ‘Minecraft’ in your browser, for free

The content below is taken from the original ( Play the original ‘Minecraft’ in your browser, for free), to continue reading please visit the site. Remember to respect the Author & Copyright.

Minecraft is celebrating its 10th birthday by making its Classic version easily playable on web browsers. You don't need to download any files to make it work, and you don't have to pay a cent for access. Since Classic was only the second phase in th…

Creating and deploying a model with Azure Machine Learning Service

The content below is taken from the original ( Creating and deploying a model with Azure Machine Learning Service), to continue reading please visit the site. Remember to respect the Author & Copyright.

In this post, we will take a look at creating a simple machine learning model for text classification and deploying it as a container with Azure Machine Learning service. This post is not intended to discuss the finer details of creating a text classification model. In fact, we will use the Keras library and its Reuters newswire dataset to create a simple dense neural network. You can find many online examples based on this dataset. For further information, be sure to check out and buy 👍 Deep Learning with Python by François Chollet, the creator of Keras and now at Google. It contains a section that explains using this dataset in much more detail!

Machine Learning service workspace

To get started, you need an Azure subscription. Once you have the subscription, create a Machine Learning service workspace. Below, you see such a workspace:

My Machine Learning service workspace (gebaml)

Together with the workspace, you also get a storage account, a key vault, application insights and a container registry. In later steps, we will create a container and store it in this registry. That all happens behind the scenes though. You will just write a few simple lines of code to make that happen!

Note the Authoring (Preview) section! These were added just before Build 2019 started. For now, we will not use them.

Azure Notebooks

To create the model and interact with the workspace, we will use a free Jupyter notebook in Azure Notebooks. At this point in time (8 May 2019), Azure Notebooks is still in preview. To get started, find the link below in the Overview section of the Machine Learning service workspace:

Getting Started with Notebooks

To quickly get the notebook, you can clone my public project: ⏩⏩⏩ https://notebooks.azure.com/geba/projects/textclassificationblog.

Creating the model

When you open the notebook, you will see the following first four cells:

Getting the dataset

It’s always simple if a prepared dataset is handed to you like in the above example. Above, you simply use the reuters class of keras.datasets and use the load_data method to get the data and directly assign it to variables to hold the train and test data plus labels.

In this case, the data consists of newswires with a corresponding label that indicates the category of the newswire (e.g. an earnings call newswire). There are 46 categories in this dataset. In the real world, you would have the newswire in text format. In this case, the newswire has already been converted (preprocessed) for you in an array of integers, with each integer corresponding to a word in a dictionary.
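A minimal sketch of this step with Keras (variable names are illustrative):

from keras.datasets import reuters

# Keep only the 10,000 most frequent words, as specified in the first cell of the notebook
(train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words=10000)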

A bit further in the notebook, you will find a Vectorization section:

Vectorization

In this section, the train and test data is vectorized using a one-hot encoding method. Because we specified, in the very first cell of the notebook, to only use the 10,000 most important words, each article can be converted to a vector with 10,000 values. Each value is either 1 or 0, indicating whether the word is in the text or not.

This bag-of-words approach is one of the ways to represent text in a data structure that can be used in a machine learning model. Besides vectorizing the training and test samples, the categories are also one-hot encoded.
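A sketch of that vectorization, following the approach popularized in Deep Learning with Python (names are illustrative):

import numpy as np
from keras.utils import to_categorical

def vectorize_sequences(sequences, dimension=10000):
    # Each article becomes a vector of 10,000 values; a 1 at position i
    # means the word with index i occurs in the article.
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results

x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)

# The 46 category labels are one-hot encoded as well
one_hot_train_labels = to_categorical(train_labels)
one_hot_test_labels = to_categorical(test_labels)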

Now the dense neural network model can be created:

Dense neural net with Keras

The above code defines a very simple dense neural network. A dense neural network is not necessarily the best type but that’s ok for this post. The specifics are not that important. Just note that the nn variable is our model. We will use this variable later when we convert the model to the ONNX format.
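A sketch of such a network (layer sizes are illustrative; the important part is that the model ends up in the nn variable):

from keras import models, layers

nn = models.Sequential()
nn.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
nn.add(layers.Dense(64, activation='relu'))
nn.add(layers.Dense(46, activation='softmax'))  # one output per newswire category

nn.compile(optimizer='rmsprop',
           loss='categorical_crossentropy',
           metrics=['accuracy'])
# The last cell then trains it, e.g. nn.fit(x_train, one_hot_train_labels, epochs=9, batch_size=512)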

The last cell (16 above) does the actual training in 9 epochs. Training will be fast because the dataset is relatively small and the neural network is simple. Using the Azure Notebooks compute is sufficient. After 9 epochs, this is the result:

Training result

Not exactly earth-shattering: 78% accuracy on the test set!

Saving the model in ONNX format

ONNX is an open format to store deep learning models. When your model is in that format, you can use the ONNX runtime for inference.

Converting the Keras model to ONNX is easy with the onnxmltools:
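A sketch of what that conversion cell can look like (exact options may vary by library version):

import onnxmltools

onnx_model = onnxmltools.convert_keras(nn)                 # nn is the trained Keras model
onnxmltools.utils.save_model(onnx_model, 'reuters.onnx')   # writes reuters.onnx to the project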

Converting the Keras model to ONNX

The result of the above code is a file called reuters.onnx in your notebook project.

Predict with the ONNX model

Let’s try to predict the category of the first newswire in the test set. Its real label is 3, which means it’s a newswire about an earnings call (earn class):

Inferencing with the ONNX model

We will use similar code later in score.py, a file that will be used in a container we will create to expose the model as an API. The code is pretty simple: start an inference session based on the reuters.onnx file, grab the input and output and use run to predict. The resulting array is the output of the softmax layer and we use argmax to extract the category with the highest probability.
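A sketch of that inference code with the ONNX runtime (variable names are illustrative):

import numpy as np
import onnxruntime as rt

session = rt.InferenceSession('reuters.onnx')
input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name

# Score the first one-hot encoded newswire of the test set
sample = x_test[0:1].astype(np.float32)
probabilities = session.run([output_name], {input_name: sample})[0]
print(np.argmax(probabilities))   # index of the highest-probability category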

Saving the model to the workspace

With the model in reuters.onnx, we can add it to the workspace:

Saving the model in the workspace

You will need a file in your Azure Notebook project called config.json with the following contents:

{
     "subscription_id": "<subscription-id>",
     "resource_group": "<resource-group>",
     "workspace_name": "<workspace-name>" 
} 

With that file in place, when you run cell 27 (see above), you will need to authenticate to Azure to be able to interact with the workspace. The code is pretty self-explanatory: the reuters.onnx model will be added to the workspace:
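That cell typically boils down to something like this with the Azure ML SDK (a sketch; the model name is an example):

from azureml.core import Workspace
from azureml.core.model import Model

ws = Workspace.from_config()   # reads config.json and prompts you to authenticate

model = Model.register(workspace=ws,
                       model_path='reuters.onnx',   # local file in the notebook project
                       model_name='reuters',        # versions are tracked under this name
                       description='Keras/ONNX Reuters newswire classifier')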

Models added to the workspace

As you can see, you can save multiple versions of the model. This happens automatically when you save a model with the same name.

Creating the scoring container image

The scoring (or inference) container image is used to expose an API to predict categories of newswires. Obviously, you will need to give some instructions how scoring needs to be done. This is done via score.py:

score.py

The code is similar to the code we wrote earlier to test the ONNX model. score.py needs an init() and run() function. The other functions are helper functions. In init(), we need to grab a reference to the ONNX model. The ONNX model file will be placed in the container during the build process. Next, we start an InferenceSession via the ONNX runtime. In run(), the code is similar to our earlier example. It predicts via session.run and returns the result as JSON. We do not have to worry about the rest of the code that runs the API. That is handled by Machine Learning service.
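A sketch of what such a score.py could look like (the 'data' key in the JSON payload is an assumption; you define whatever input shape suits you):

import json
import numpy as np
import onnxruntime as rt
from azureml.core.model import Model

def init():
    global session, input_name, output_name
    # Locate the ONNX file that was packaged into the container
    model_path = Model.get_model_path('reuters')
    session = rt.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name
    output_name = session.get_outputs()[0].name

def run(raw_data):
    # Expects a JSON payload containing a one-hot encoded article under 'data'
    data = np.array(json.loads(raw_data)['data']).astype(np.float32)
    probabilities = session.run([output_name], {input_name: data})[0]
    return json.dumps({'category': int(np.argmax(probabilities))})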

Note: using ONNX is not a requirement; we could have persisted and used the native Keras model for instance

In this post, we only need score.py since we do not train our model via Azure Machine learning service. If you train a model with the service, you would create a train.py file to instruct how training should be done based on data in a storage account for instance. You would also provision compute resources for training. In our case, that is not required so we train, save and export the model directly from the notebook.

Training and scoring with Machine Learning service

Now we need to create an environment file to indicate the required Python packages and start the image build process:

Create an environment yml file via the API and build the container

The build process is handled by the service and makes sure the model file is in the container, in addition to score.py and myenv.yml. The result is a fully functional container that exposes an API that takes an input (a newswire) and outputs an array of probabilities. Of course, it is up to you to define what the input and output should be. In this case, you are expected to provide a one-hot encoded article as input.
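With the SDK of that era, those two steps look roughly like this (a sketch; the package list and image name are examples):

from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.image import ContainerImage

# Environment file listing the packages score.py needs
myenv = CondaDependencies.create(pip_packages=['numpy', 'onnxruntime', 'azureml-core'])
with open('myenv.yml', 'w') as f:
    f.write(myenv.serialize_to_string())

# Build the scoring image; the registered model is baked into the container
image_config = ContainerImage.image_configuration(execution_script='score.py',
                                                  runtime='python',
                                                  conda_file='myenv.yml')
image = ContainerImage.create(name='reuters-image',
                              models=[model],
                              image_config=image_config,
                              workspace=ws)
image.wait_for_creation(show_output=True)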

The container image will be listed in the workspace, potentially multiple versions of it:

Container images for the reuters ONNX model

Deploy to Azure Container Instances

When the image is ready, you can deploy it via the Machine Learning service to Azure Container Instances (ACI) or Azure Kubernetes Service (AKS). To deploy to ACI:

Deploying to ACI
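A sketch of such an ACI deployment (service name and sizing are examples):

from azureml.core.webservice import AciWebservice, Webservice

aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

aci_service = Webservice.deploy_from_image(workspace=ws,
                                           name='reuters-svc',
                                           image=image,
                                           deployment_config=aci_config)
aci_service.wait_for_deployment(show_output=True)
print(aci_service.scoring_uri)   # e.g. http://IPADDRESS:80/score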

When the deployment is finished, the deployment will be listed:

Deployment (ACI)

When you click on the deployment, the scoring URI will be shown (e.g. http://IPADDRESS:80/score). You can now use Postman or any other method to score an article. To quickly test the service from the notebook:

Testing the service

The helper method run of aci_service will post the JSON in test_sample to the service. It knows the scoring URI from the deployment earlier.
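A sketch of that quick test (the 'data' key has to match whatever score.py expects):

import json

# One-hot encoded article from the test set, wrapped the way score.py expects it
test_sample = json.dumps({'data': x_test[0:1].tolist()})

result = aci_service.run(input_data=test_sample)
print(result)   # e.g. {"category": 3}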

Conclusion

Containerizing a machine learning model and exposing it as an API is made surprisingly simple with Azure Machine learning service. It saves time so you can focus on the hard work of creating a model that performs well in the field. In this post, we used a sample dataset and a simple dense neural network to illustrate how you can build such a model, convert it to ONNX format and use the ONNX runtime for scoring.

Now generally available: Android phone’s built-in security key

The content below is taken from the original ( Now generally available: Android phone’s built-in security key), to continue reading please visit the site. Remember to respect the Author & Copyright.

Phishing—when an attacker tries to trick you into turning over your online credentials—is one of the most common causes of security breaches. At Google Cloud Next ‘19, we enabled you to help your users defend against phishing with a security key built into their Android phone, bringing the benefits of a phishing-resistant two-factor authentication (2FA) to more than a billion users worldwide. This capability is now generally available.

While Google automatically blocks the overwhelming majority of malicious sign-in attempts (even if an attacker has a username or password), 2FA, also known as 2-Step Verification (2SV), considerably improves user security. At the same time, sophisticated attacks can skirt around some 2FA methods to compromise user accounts. We consider security keys based on FIDO standards, including Titan Security Key and Android phone’s built-in security key, to be the strongest, most phishing-resistant methods of 2FA. FIDO leverages public key cryptography to verify a user’s identity and URL of the login page, so that an attacker can’t access users’ accounts even if users are tricked into providing their username and password.

User experience on Pixel 3 .gif

Security keys are now available built-in on phones running Android 7.0+ (Nougat) at no additional cost. That way, your users can use their phones as their primary 2FA method for work (G Suite, Cloud Identity, and GCP) and personal Google Accounts to sign in on Bluetooth-enabled Chrome OS, macOS, or Windows 10 devices with a Chrome browser. This gives them the strongest 2FA method with the convenience of a phone that’s always in their pocket.

As the Google Cloud administrator, start by activating Android phone’s built-in security key to protect your own work or personal Google Account following these simple steps:

  1. Add your work or personal Google Account to your Android phone.
  2. Make sure you’re enrolled in 2-Step Verification (2SV).
  3. On your computer, visit the 2SV settings and click “Add security key”.
  4. Choose your Android phone from the list of available devices—and you’re done!
2-Step Verification (2SV) settings page.png

When signing in, make sure Bluetooth is turned on on both your phone and the device you are signing in on. You can find more detailed instructions here.

To help ensure the highest levels of account protection, you can also require the use of security keys for your users in G Suite, Cloud Identity, and GCP, letting them choose between using a physical security key, their Android phone, or both. We recommend that users register a backup security key to their account and keep it in a safe place, so that they can gain access to their account if they lose their phone. Hardware security keys are available from a number of vendors, including Google with our Titan Security Key.

How to Spot an AI-Generated Photo

The content below is taken from the original ( How to Spot an AI-Generated Photo), to continue reading please visit the site. Remember to respect the Author & Copyright.

The first time I explained to my son, as a preschooler, that cartoon characters were not really real but drawn and animated on a computer, it blew his mind. How could something that looked so realistic actually be fake? I think I now know how he felt; lately, the more I learn about AI-generated photos, the more my…

Read more…

Traffic-free days have begun in Edinburgh city centre

The content below is taken from the original ( Traffic-free days have begun in Edinburgh city centre), to continue reading please visit the site. Remember to respect the Author & Copyright.

(Photo by Stewart Kirby/SOPA Images/LightRocket via Getty Images)

The scheme is part of a plan to reduce air pollution in the city

How to Tell if a News Site Is Reliable

The content below is taken from the original ( How to Tell if a News Site Is Reliable), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s happened before, and with another presidential election looming next year, it’s going to happen again and again. The spread of “fake news,” the incorrect labeling of real news as “fake,” and overall confusion as to how to tell the difference.

Read more…

Steve Fryatt talks lesser known software in Wakefield on 1st May

The content below is taken from the original ( Steve Fryatt talks lesser known software in Wakefield on 1st May), to continue reading please visit the site. Remember to respect the Author & Copyright.

Even though the show they organise took place only yesterday, the Wakefield RISC OS Computer Club (WROCC) will be holding their next meeting this week, on Wednesday 1st May. The guest speaker will be the group’s own Steve Fryatt. Steve […]

An easier way to integrate Chrome devices with Active Directory infrastructure

The content below is taken from the original ( An easier way to integrate Chrome devices with Active Directory infrastructure), to continue reading please visit the site. Remember to respect the Author & Copyright.

In 2017, when we launched Active Directory integration as part of our Chrome Enterprise announcement, we aimed to help customers with on-premise infrastructure leverage the benefits of Chrome devices in their organizations. This integration allowed for use of Active Directory credentials to authenticate across devices, support for Android applications through managed Google Play, and management of user and device policies for IT admins via GPO. All of this can be done without additional infrastructure, minimizing disruption for users and IT alike.

With the release of Chrome Enterprise version 74, we have made Active Directory integration available to existing Chrome Enterprise customers who are already managing Chrome devices with cloud management on their domain. Administrators can now configure their Chrome devices to be managed by Active Directory or cloud management, without the need to set up a separate domain. We have also made it easy to switch management methods based on what is most appropriate for your organization at any given time. This can be completed with a simple administration policy.

microsoft active drive chrome integration.png

In recent months, we’ve also have made other features available that offer IT admins greater control and access. These features include support for native Samba (SMB) file shares with kerberos authentication and app configuration via ADMX templates for Chrome apps and extensions that support policy for configuration.

Native integration with Active Directory is a good option for customers who wish to move incrementally towards a cloud-native solution while continuing to leverage their existing Active Directory environment. Use cases include:

  • Quick pilots: Deploy Chrome Enterprise quickly by integrating with existing identity, infrastructure, and management systems to pilot and test with minimal friction.

  • Supporting Kerberos: Integrate easily with your existing infrastructure and applications that require Kerberos authentication.

  • Handling on-prem: Support environments where an on-premises solution is required or preferred for managing devices, identity, and policy.

  • Centralizing management: Support mixed device deployments to manage all your devices from a single, Active Directory-based management solution.

Current users of Active Directory integration will be automatically upgraded to the new version. This means all your existing devices will continue to function in the same way and administrators now have added flexibility to enable or disable Active Directory management based on your organization’s needs—no manual changes necessary.

To learn more, read our help center article.

How to Create a Windows Virtual Desktop Tenant with Windows Virtual Desktop

The content below is taken from the original ( How to Create a Windows Virtual Desktop Tenant with Windows Virtual Desktop), to continue reading please visit the site. Remember to respect the Author & Copyright.


In the first part of this series, I described what Microsoft’s Windows Virtual Desktop (WVD) service is and the basic requirements. If you haven’t already read that article, I suggest you do before continuing with WVD because there are some important prerequisites that need to be in place.

Before you can create a host pool in the Azure management portal, you need to create a Windows Virtual Desktop tenant. There are several steps to this process:

  1. Give Azure Active Directory permissions to the Windows Virtual Desktop enterprise app.
  2. Assign an AAD user the Windows Virtual Desktop TenantCreator application role.
  3. Create a Windows Virtual Desktop tenant.

Please note that everything in this article is subject to change because Windows Virtual Desktop is in preview. Additionally, when using an AAD user account, make sure that it is a work or school account and not a Microsoft Account (MSA). I’ll remind you about this again.

Grant Azure Active Directory Permissions to Windows Virtual Desktop Service

Giving AAD permissions to the WVD service lets it query the directory for administrative and end-user actions. All you need to do is click here to open the Windows Virtual Desktop consent page in a browser.

  • There are two consent options: Server App and Client App. Make sure that Server App is selected.
  • In the AAD Tenant GUID or Name box, type the name or GUID of your AAD and click Submit. If you are not sure what your AAD name is, open the Azure AD management portal here and click Azure Active Directory on the left of the portal.
Creating a Windows Virtual Desktop Tenant (Image Credit: Russell Smith)
  • You’ll be prompted to sign in to AAD. Use a Global Administrator account that is a work or school account, i.e. not a Microsoft Account (MSA). If you are not sure which AAD users are work and school accounts, open the Azure AD management portal here, click Users on the left of the portal, and you’ll see the user type listed in the Source column for each user account. Work and school accounts will be listed as Azure Active Directory under Source.
  • Once signed in, you’ll be asked to accept a series of permissions for the Windows Virtual Desktop app. Click Accept. You’ll be redirected to a confirmation page.
Creating a Windows Virtual Desktop Tenant (Image Credit: Russell Smith)
  • Wait one minute for the Server App permissions to register in AAD and then repeat this process for Client App.

Assign TenantCreator Role to AAD User

Now you need to assign the TenantCreator application role to an AAD user.

  • Open the Azure AD management portal here.
  • Sign in to AAD with a global administrator account.
  • Click Enterprise Applications on the left of the portal.
  • In the list of apps, you should see Windows Virtual Desktop and Windows Virtual Desktop Client. Click Windows Virtual Desktop.
  • Click Users and groups on the left of the portal window.
  • Click + Add user.
Creating a Windows Virtual Desktop Tenant (Image Credit: Russell Smith)
  • Under Add Assignment, click Users.
  • Select a Global Administrator work or school account from the list, i.e. not an MSA account, and then click Select.
  • Click Assign under Add Assignment.
  • Close the AAD management portal.

Create a Windows Virtual Desktop Tenant

The last step is to create the tenant itself.

  • Open a PowerShell prompt in Windows 10.
  • Download and import the Windows Virtual Desktop PowerShell module.

Install-Module -Name Microsoft.RDInfra.RDPowerShell
Import-Module -Name Microsoft.RDInfra.RDPowerShell

  • Sign in to Windows Virtual Desktop using the AAD account to which you assigned the TenantCreator application role above.

Add-RdsAccount -DeploymentUrl "https://rdbroker.wvd.microsoft.com"

  • Create a new tenant using the New-RdsTenant cmdlet as shown here, replacing the AadTenantID with your Azure AD directory ID and the AzureSubscriptionId with your subscription’s ID. You can find your Azure subscription ID in the Subscriptions section of the Azure management portal. Similarly, you can find your Azure AD directory ID in the Azure AD portal under Azure Active Directory > Properties.

New-RdsTenant -Name PetriWVD -AadTenantId xxxx-xxxx-xxxxx-xxxxx -AzureSubscriptionId xxxx-xxxx-xxxxx-xxxxx

Creating a Windows Virtual Desktop Tenant (Image Credit: Russell Smith)
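If you want to double-check that the tenant was created before moving on, the same PowerShell module includes a Get-RdsTenant cmdlet you can run from the current session:

# List the Windows Virtual Desktop tenants your account can see
Get-RdsTenant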

And that is it! Now you are ready to create a hosting pool in the Azure management portal. As you can see, the process of creating a tenant isn’t exactly intuitive or straightforward, which is a shame, because creating a hosting pool is much easier. But this is just the preview stage, and Microsoft will hopefully simplify the process and integrate it with the Azure management portal before general availability.

In the third part of this series, I’ll show you how to create a hosting pool in the Azure management portal.

 

The post How to Create a Windows Virtual Desktop Tenant with Windows Virtual Desktop appeared first on Petri.

How to Introduce Your Kid to Coding Without a Computer

The content below is taken from the original ( How to Introduce Your Kid to Coding Without a Computer), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you want to teach your kid how to code, there’s certainly no shortage of apps, iPad-connected toys, motorized kits and programmable pets that you can buy for your future Google employee. Some are great, no doubt, but many focus on isolated skills, which may or may not be relevant in the decades ahead. For young…

Read more…

How to Secure Hybrid Office 365 Authentication

The content below is taken from the original ( How to Secure Hybrid Office 365 Authentication), to continue reading please visit the site. Remember to respect the Author & Copyright.


Office 365 hybrid authentication lets organizations manage and control authentication to Office 365 using on-premise Windows Server Active Directory. The advantage is that there is a single set of user identities that can be centrally managed, as opposed to using cloud-only identities for Office 365 and Active Directory accounts for access to on-premise resources.

Traditional wisdom has it that Active Directory Federation Services (ADFS), or a third-party identity provider, is the most secure way to extend Windows Server Active Directory (AD) to Office 365. There are definitely some advantages to this approach, including:

  • Single sign-on for browser apps and Outlook.
  • No synchronization of password hashes to the cloud.
  • Advanced security features like IP address filtering.
  • Supports other SAML-based cloud services.
  • Supports SmartLinks.
  • Smartcard-based authentication.
  • Supports third-party multifactor authentication.

But the costs are high. Not only do you need a two-server farm, preferably split across separate sites for redundancy, but another couple of servers should also be placed in your DMZ to securely publish ADFS to the Internet. This means additional infrastructure and cost, and it adds extra points of failure: if ADFS, AD, or the DMZ servers go down, users won’t be able to access Office 365. It is, however, possible to combine ADFS with Password Hash Synchronization (PHS) so that users can still log in to Office 365 in the event of a problem.

Who’s Afraid of Password Hash Synchronization?

The idea of synchronizing password hashes to the cloud seems scary to some organizations. But is it really that bad? If you don’t trust Azure AD to be the guardian of your Office 365 data, then you’ve already got a problem. You need to decide whether Azure AD is up to the job of securing sensitive data in the cloud. Azure AD can be vulnerable to brute-force and password spray attacks through remote PowerShell, but this can be mitigated by enabling multifactor authentication (MFA) and disabling access to remote PowerShell for some or all users.
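As a rough illustration of the second mitigation, remote PowerShell access to Exchange Online can be turned off for users who don’t need it with the Set-User cmdlet. The user name below is a placeholder, and how you scope the bulk version will depend on your environment; treat this as a sketch rather than a recommended script:

# Connect to Exchange Online PowerShell first, then disable remote PowerShell for a user who doesn't need it
Set-User -Identity "jane.doe@yourtenant.com" -RemotePowerShellEnabled $false

# Or disable it in bulk for all user mailboxes (take care not to lock out accounts that still need it)
Get-User -RecipientTypeDetails UserMailbox -ResultSize Unlimited | Set-User -RemotePowerShellEnabled $false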

When AD Connect is configured to synchronize password hashes from AD to the cloud, SHA256 password data is stored in Azure AD, which is a hash of the MD4 hash stored in on-premise Active Directory. So the password hashes in Azure AD are more secure hashes of your on-premise AD password hashes. Or in other words, hashes of hashes. Furthermore, the SHA256 hashes in Azure AD cannot be used in Pass-the-Hash (PtH) attacks against your on-premise AD.
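To make the “hashes of hashes” idea concrete, here is a deliberately simplified PowerShell sketch. It only shows SHA256 being applied to an existing hash value; the real AD Connect sync process also adds per-user salting and key stretching before anything leaves your network, so treat this as an illustration of the principle rather than the actual algorithm:

# Pretend this is the MD4 hash stored in on-premise AD (illustrative value only)
$onPremHash = "0cb6948805f797bf2a82807973b89537"

# Hash the hash with SHA256 - conceptually, a value derived like this is what ends up in Azure AD
$sha256 = [System.Security.Cryptography.SHA256]::Create()
$bytes = [System.Text.Encoding]::UTF8.GetBytes($onPremHash)
$cloudHash = ($sha256.ComputeHash($bytes) | ForEach-Object { $_.ToString("x2") }) -join ""

# The original MD4 hash cannot be recovered from the result, so it is useless for Pass-the-Hash
$cloudHash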

PHS gives users seamless single sign-on access to Office 365 regardless of whether ADFS and on-premise AD are accessible, which can be handy in an outage. It is also simpler to set up than federated authentication or Pass-Through Authentication (PTA), both of which require some onsite infrastructure. PHS fully supports Microsoft’s Azure AD defense technologies, like Azure Password Protection and Smart Lockout.

For more information on custom banned password lists (Azure Password Protection) and Smart Lockout, see Azure AD Password Protection to Prevent Password Spraying Attacks on Petri. You can find more details on Azure AD Conditional Access here.

Federated Authentication versus Password Hash Synchronization

If you are planning an Office 365 deployment or reviewing existing security strategies, I recommend looking at Password Hash Synchronization first. It could simplify your infrastructure, reduce costs, improve availability, and even make you more secure in the long run. But that’s not to say that federated authentication and Pass-Through Authentication don’t have their places. Just don’t rule out PHS until you’ve evaluated it properly.

 

 

The post How to Secure Hybrid Office 365 Authentication appeared first on Petri.

Game Backup Monitor lets you backup games automatically

The content below is taken from the original ( Game Backup Monitor lets you backup games automatically), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you often play games on your computer, you should check out Game Backup Monitor. It will help you automatically backup the configuration files of your games. It is a free and open-source software that is available for multiple computer […]

This post Game Backup Monitor lets you backup games automatically is from TheWindowsClub.com.

How to stay on top of Azure best practices

The content below is taken from the original ( How to stay on top of Azure best practices), to continue reading please visit the site. Remember to respect the Author & Copyright.

Optimizing your cloud workloads can seem like a complex and daunting task. We created Azure Advisor, a personalized guide to Azure best practices, to make it easier to get the most out of Azure. Azure Advisor helps you optimize your Azure resources for high availability, security, performance, and cost by providing free, personalized recommendations based on your usage and configurations.

We’ve posted a new video series to help you learn how to use Advisor to optimize your Azure workloads. You’ll find out how to:

Watch one of the first videos in the series now:

Once you’re comfortable with Advisor, you can begin reviewing and remediating your recommendations. Visit Advisor in the Azure portal to get started, and for more in-depth guidance see the Advisor documentation. Let us know if you have a suggestion for Advisor by submitting an idea via our tool, or send us an email at [email protected].
