
Microsoft made its AI work on a $10 Raspberry Pi

The content below is taken from the original (Microsoft made its AI work on a $10 Raspberry Pi), to continue reading please visit the site. Remember to respect the Author & Copyright.

When you’re far from a cell tower and need to figure out if that bluebird is Sialia sialis or Sialia mexicana, no cloud server is going to help you. That’s why companies are squeezing AI onto portable devices, and Microsoft has just taken that to a new extreme by putting deep learning algorithms onto a Raspberry Pi. The goal is to get AI onto "dumb" devices like sprinklers, medical implants and soil sensors to make them more useful, even if there’s no supercomputer or internet connection in sight.

The idea came from Microsoft research teams in Redmond and Bangalore, India. Ofer Dekel, who manages an AI optimization group at the Redmond lab, was trying to figure out a way to stop squirrels from eating flower bulbs and seeds from his bird feeder. As one does, he trained a computer vision system to spot squirrels and installed the code on a $35 Raspberry Pi 3. Now it triggers the sprinkler system whenever the rodents pop up, chasing them away.

"Every hobbyist who owns a Raspberry Pi should be able to do that," Dekel said in Microsoft’s blog. "Today, very few of them can." The problems is that it’s too expensive and impractical to install high-powered chips or connected cloud-computing devices on things like squirrel sensors. However, it’s feasible to equip sensors and other devices with a $10 Raspberry Zero or the pepper-flake-sized Cortex M0 chip pictured above.

All the squirrel-spotting power you need (Matt Brian/AOL)

To make it work on systems that often have just a few kilobytes of RAM, the team compressed neural network parameters down to just a few bits instead of the usual 32. Another technique is "sparsification" of algorithms, a way of pruning them down to remove redundancies. By doing that, they were able to make an image detection system run about 20 times faster on a Raspberry Pi 3 without any loss of accuracy.
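As a toy illustration of the quantization idea (this is not Microsoft's actual code; the uniform scheme and the 3-bit choice below are assumptions for demonstration), squeezing 32-bit floating-point weights into a few bits can look like this:

// Toy uniform quantizer: represent each 32-bit weight with a small integer
// code, cutting storage from 32 bits per weight to `bits` bits per weight.
function quantize(weights, bits) {
  const levels = (1 << bits) - 1;          // e.g. 3 bits -> 8 levels, 7 steps
  const min = Math.min(...weights);
  const max = Math.max(...weights);
  const step = (max - min) / levels || 1;  // guard against all-equal weights
  const codes = weights.map(w => Math.round((w - min) / step)); // store these
  const approx = codes.map(c => min + c * step);                // use at inference
  return { codes, approx };
}

console.log(quantize([0.12, -0.5, 0.33, 0.9], 3));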

However, taking it to the next level won’t be quite as easy. "There is just no way to take a deep neural network, have it stay as accurate as it is today, and consume 10,000 times less resources. You can’t do it," said Dekel. For that, they’ll need to invent new types of AI tech tailored for low-powered devices, and that’s tricky, considering researchers still don’t know exactly how deep learning tools work.

Microsoft’s researchers are working on a few projects for folks with impairments, like a walking stick that can detect falls and issue a call for help, and "smart gloves" that can interpret sign language. To get some new ideas and help, they’ve made some of their early training tools and algorithms available to Raspberry Pi hobbyists and other researchers on GitHub. "Giving these powerful machine-learning tools to everyday people is the democratization of AI," says researcher Saleema Amershi.

Via: Mashable

Source: Microsoft


New Power Bundle for Amazon WorkSpaces – More vCPUs, Memory, and Storage

The content below is taken from the original (New Power Bundle for Amazon WorkSpaces – More vCPUs, Memory, and Storage), to continue reading please visit the site. Remember to respect the Author & Copyright.

Are you tired of hearing me talk about Amazon WorkSpaces yet? I hope not, because we have a lot of customer-driven additions on the roadmap! Our customers in the developer and analyst community have been asking for a workstation-class machine that will allow them to take advantage of the low cost and flexibility of WorkSpaces. Developers want to run Visual Studio, IntelliJ, Eclipse, and other IDEs. Analysts want to run complex simulations and statistical analysis using MATLAB, GNU Octave, R, and Stata.

New Power Bundle
Today we are extending the current set of WorkSpaces bundles with a new Power bundle. With four vCPUs, 16 GiB of memory, and 275 GB of storage (175 GB on the system volume and another 100 GB on the user volume), this bundle is designed to make developers, analysts, (and me) smile. You can launch them in all of the usual ways: Console, CLI (create-workspaces), or API (CreateWorkSpaces):

One really interesting benefit of using a cloud-based virtual desktop for simulations and statistical analysis is the ease of access to data that’s already stored in the cloud. Analysts can mine and analyze petabytes of data stored in S3 that is effectively local (with respect to access time) to the WorkSpace. This low-latency access boosts productivity and also simplifies the use of other AWS data analysis tools such as Amazon Redshift, Amazon Redshift Spectrum, Amazon QuickSight, and Amazon Athena.

Like the existing bundles, the new Power bundle can be used in either billing configuration, AlwaysOn or AutoStop (read Amazon WorkSpaces Update – Hourly Usage and Expanded Root Volume to learn more). The bundle is available in all AWS Regions where WorkSpaces is available and you can launch one today! Visit the WorkSpaces Pricing page for pricing in your region.

— Jeff;


Not sure where to store your bikes? How about a fake skip

The content below is taken from the original (Not sure where to store your bikes? How about a fake skip), to continue reading please visit the site. Remember to respect the Author & Copyright.

Dummy skip provides accommodation for several bicycles and – the theory goes – will not be looked at by thieves

Biskiple



Ireland the best place to set up a data center in the EU

The content below is taken from the original (Ireland the best place to set up a data center in the EU), to continue reading please visit the site. Remember to respect the Author & Copyright.

A report from data center consulting group BroadGroup says Ireland is the best place, at least in Europe, to set up a data center. It cites connectivity, taxes and active government support among the reasons.

BroadGroup’s report argued that Ireland’s status in the EU, as well as its “low corporate tax environment,” makes it an attractive location. It also cites connectivity, as Ireland will get a direct submarine cable system to France—bypassing the U.K.—in 2019. The country also has a high installed base of fibre and dark fibre, with further deployment planned.

The report also notes active government support for inward investment from companies such as Amazon and Microsoft has resulted in the construction of massive facilities around Dublin.

“Even now, authorities are seeking to identify potential land banks for new large-scale data centre facilities in Ireland, which indicates that the supply of more space will continue to enter the market,” the report says.

U.S. companies with data centers in Ireland

Amazon and Microsoft both have facilities in Dublin, with Microsoft’s being one of the largest in Europe. Now, Apple is looking to build an €850 million data center in Athenry, in County Galway. It announced the plans two years ago, along with a sister location in Denmark.

Two years later, the Danish site is up and running, while Athenry hasn’t even broken ground because of legal challenges raised by three objectors. The decision has since been held up because there aren’t enough judges to make a ruling, though it is expected to go in Apple’s favor.

Ireland has also benefited from investment by U.S. firms in the gaming, pharmaceutical and content sectors, many of which have made the country their European headquarters. Data center investment there spans a wide range of business models, making Ireland the region’s main hub for webscale operators.

Renewable energy is also one reason for Ireland’s shine. EirGrid says potential data center power capacity could increase to 1,000 MW after 2019. Renewable energy—primarily wind—is a key government priority, with a target of 40 percent renewable generation by 2020, well beyond the EU’s mandatory benchmark of 16 percent. The proposed Apple data center would be powered entirely by renewable energy.

Of course, Ireland isn’t alone with its data center ambitions. Scotland recently saw the opening of a 60,000-sq.-ft. data center that can be expanded to 500,000 square feet.



New troubleshooting and diagnostics for Azure Files Storage mounting errors on Windows

The content below is taken from the original (New troubleshooting and diagnostics for Azure Files Storage mounting errors on Windows), to continue reading please visit the site. Remember to respect the Author & Copyright.

Azure File Storage offers fully managed file shares in the cloud using the Server Message Block (SMB) protocol, the predominant file share protocol for on-premises Windows use cases. Azure Files can be mounted from any client OS that implements the SMB versions supported by Azure Files. Today, we are introducing AzFileDiagnostics to help first-time Azure Files users ensure that the Windows client environment has the correct prerequisites. AzFileDiagnostics automates detection of most of the symptoms mentioned in the troubleshooting Azure Files article and helps you set up your environment to get optimal performance.

In general, mounting a file share on Windows is as simple as running a standard “net use” command. When you create a share, the Azure Portal automatically generates a “net use” command and makes it available for copying and pasting: click the “Connect” button, copy the command for mounting the file share on your client, paste it, and you have a drive mapped to the file share. What could go wrong? Well, as it turns out, differences in clients, SMB versions, firewall rules, ISPs, or IT policies can all affect connectivity to Azure Files. The good news is that AzFileDiagnostics isolates and examines each source of possible issues and in turn provides you with advice or workarounds to correct the problem.
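For reference, the portal-generated command follows this general shape, where the storage account name, share name, and account key below are placeholders:

net use Z: \\mystorageacct.file.core.windows.net\myshare /u:AZURE\mystorageacct <storage-account-key>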

As an example, Azure Files supports SMB protocol versions 2.1 and 3.0. To ensure secure connectivity, Azure Files requires that communication from another region or from on-premises be encrypted, which means SMB 3.0 with channel encryption for those use cases. AzFileDiagnostics detects the SMB version on the client and automatically determines whether the client meets the encryption requirement.
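If you want to check this yourself on Windows 8/Server 2012 or later, the negotiated SMB dialect of an existing connection is visible from PowerShell (shown here as a quick manual sanity check, separate from what the script automates):

PS C:\> Get-SmbConnection | Select-Object ServerName, ShareName, Dialect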

How to use AzFileDiagnostics

You can download AzFileDiagnostics from Script center today and simply run:

PowerShell Command:

AzFileDiagnostics.ps1 [-StorageAccountName <storage-account-name>] [-FileShareName <share-name>] [-EnvironmentName <AzureCloud|AzureChinaCloud|AzureGermanCloud|AzureUSGovernment>]

Usage Examples:

AzFileDiagnostics.ps1
AzFileDiagnostics.ps1 -UncPath \\storageaccountname.file.core.windows.net\sharename
AzFileDiagnostics.ps1 -StorageAccountName storageaccountname -FileShareName sharename -EnvironmentName AzureCloud

In addition to diagnosing issues, it will present you with an option to mount the file share when the checks have successfully completed.

Learn more about Azure Files

Feedback

We hope that AzFileDiagnostics will make your getting-started experience smoother. We’d love to hear your feedback. If there are additional troubleshooting topics for Azure Files that you would like to see, please leave a comment below. In addition, if you have any feature requests, we are always listening on our User Voice. Thanks!


VMware prepping NSX-as-a-service running from the public cloud

The content below is taken from the original (VMware prepping NSX-as-a-service running from the public cloud), to continue reading please visit the site. Remember to respect the Author & Copyright.

The content catalog for VMworld 2017 has appeared and as usual offers a few hints about announcements at the show and the company’s future plans.

Perhaps most interesting are the sessions pertaining to VMware’s partnership with Amazon Web Services. One is titled “VMware NSXaaS – Secure Native Workloads in AWS”. The session description says “VMWare NSXaaS provides you the ability to manage Networking and Security policies in Public Cloud environments such as AWS.”

Once we saw that “NSXaaS” reference we quickly spotted job ads that say “VMware NSX Team is building an elite team of Devops/SRE engineers to run our crown jewel project “NSXaaS” on Public Cloud.” Whoever gets the gig will be “… responsible to run NSX as a Service Reliably with no down time. This will include proactively finding service reliability issues & resolving them as well as responding to customer tickets as a line of defense before involving development engineering.”

Suffice it to say, it looks like VMware is going to offer NSX as a service, which is interesting!

Another session, “VMware Cloud on AWS – Getting Started Workshop” offers the chance to “Be among the first to see the new VMware on AWS solution. You will interact with the VMware Cloud interface to perform basic tasks and manage your public cloud capacity.” That description is similar to other AWS-related sessions in that it offers demos of actual services, which suggests to The Register‘s virtualization desk that come VMworld USA in late August VMware-on-AWS will either have been launched or be very close to a debut.

Session titles like “VMware Cross Cloud Services – Getting Started” suggest Cross Cloud will also debut at or before the show.

A session titled “VMware Integrated OpenStack 4.0: What’s New” suggests a new release is in the works, given that we’re currently on version 3.1.

“VMware Cloud Foundation Futures” promises to show off “exciting new work being done using VCF as a platform in the areas of edge computing clusters, network function virtualisation, predictive analytics, and compliance.”

“Storage at Memory Speed: Finally, Nonvolatile Memory Is Here” looks like it will be VMware’s explanation of how it will put byte-addressable non-volatile memory, which it calls “PMEM” and the rest of us call Optane and pals, to work. The session promises “an overview of VMware virtualization for PMEM that is now running on real PMEM products.” Speed improvements from PMEM aren’t automatic, so it will be interesting to see what Virtzilla’s come up with.

VMware’s meat and potatoes – vSphere, vCenter and the like – don’t look to have a lot new to discuss other than enhancements to PowerCLI and the vSphere HTML 5 client.

Desktop hypervisors usually get a VMworld refresh and the catalog mentions “innovations being added to VMware Fusion, VMware Workstation, and VMware Horizon FLEX” in a session titled “What’s New with …” the above-mentioned products.

There’s no session description we could find that mentions VMware App Defence, the long-awaited security product The Register believes will emerge in Q3, but the catalog is sprinkled with mentions of endpoint security and VMware’s willingness to make it better with virtualization.

VMworld Europe is in September this year, so it also fits the Q3 timeframe if VMware wanted to keep the announcement of its new security offering as the big news for its continental followers.

If you spot another session that hints at new products or directions, hit the comments! ®


Tanks for the memories: Building a post-Microsoft Office cloud suite

The content below is taken from the original (Tanks for the memories: Building a post-Microsoft Office cloud suite), to continue reading please visit the site. Remember to respect the Author & Copyright.

Analysis Microsoft for decades not only defined personal productivity and team collaboration using Office, Outlook and Exchange – it kept the competition at arm’s length.

Today, however, there’s a large community of businesses that don’t look to Microsoft for collaboration or productivity solutions at all. In fact, Microsoft doesn’t even appear on their radar when they think of the cloud, the successor to on-premises software such as Office.

If you don’t have the complexity of legacy systems to integrate with or a need for complicated macros in Excel, why would you look to the ageing software giant?

Doesn’t it make sense to go with cool, simple Software-as-a-Service solutions like G Suite, Dropbox or Slack?

Google is the single largest alternative force to Microsoft and Office 365 out there, but if you were to begin building an alternative stack, what might it look like exactly?

Mail and Calendar

Google smashed the consumer email market with Gmail and Google Calendar, quickly becoming more favoured than Outlook.com. You’ll find both products in the business-grade G Suite plan, along with eDiscovery and archiving capabilities. They have mobile apps (iOS and Android) and web browser access from the desktop.

Not surprisingly, Google Chrome is the preferred browser for full functionality, including offline access. Yes, you can use Chrome to view and send emails when you’re disconnected (if enabled by your admin), if you’ve installed the Chrome plugin and synced it first. By default, it only syncs seven days’ worth of emails and tweaking a setting will give you one month of data, max.

There’s also an offline limitation of only being able to send attachments that are less than 5MB in size. In the Google world, stars replace email flags and labels kind of replace folders. Messages can have multiple labels so they show up in multiple places, including staying in your Inbox and also displaying a personal label. Remove the Inbox label (via Move To) and your email is now only in one folder (which is actually a label view).

Calendar is pretty standard, easily sharable, and it integrates with Google Hangouts (like Skype integrates with Exchange Online) for scheduling and displaying online meetings. Third-party vendors jumped on G Suite integration quickly, but the gap is closing. It’s getting harder to find apps that integrate with G Suite and don’t also talk to Office 365, but they do exist. Pipedrive, Freshdesk and Mavenlink all prefer to talk to Google products.

Add-in Rapportive adds Microsoft-owned LinkedIn information to Gmail, but not to Office 365 (you’ll need something like Full Contact for that). Hopefully LinkedIn integration is something that Microsoft will nail, but we’re still waiting.

Documents, spreadsheets and presentations

For this Cloud-based discussion, we’ll leave LibreOffice out of it. Google’s Docs, Sheets and Slides allow standard word processing, number crunching and presentation templates in your browser. Want to email them? You’ll be sending a link to the file’s web location in Google Drive and the recipient will need a Google Account to even view them.

Google will argue that everyone has a Google account and if you do, it’s a very simple process to share and co-edit your files, including with people outside your organisation. Offline access is achieved through a Google Chrome plugin and you get a pseudo-form of the file, not something you can actually copy across to a USB stick. Crossing the streams here is not as effective as Google will have you believe. Yes, you can use Docs to open a Word doc and Sheets to open an Excel spreadsheet, but the formatting can be compromised depending on the complexity of the contents.

The worst “feature” is the way Google Drive handles editing of Office files. It creates a separate, Google-compatible file every time you edit a Microsoft Office file. Because they are separate files, you have no version history tracking, and you get multiple files with the same filename in your Drive folder. It also doesn’t lock the original Office file or support co-authoring, so your colleagues can make their own changes and save their own versions at the same time. This isn’t a problem if you live in a purely Google world, but I’ve seen finance departments of Googlefied companies cling to Excel.


London is the second city to get free gigabit WiFi kiosks

The content below is taken from the original (London is the second city to get free gigabit WiFi kiosks), to continue reading please visit the site. Remember to respect the Author & Copyright.

London’s countless telephone boxes become more redundant with every new mobile contract signed and throwaway tourist SIM purchased. Having a mind to update these payphones for the modern age, BT — which owns the majority of them — announced last year it had teamed up with the same crew behind New York’s LinkNYC free gigabit WiFi kiosks to make that happen. The first of these, installed along London’s Camden High Street, have been switched on today, offering the fastest public WiFi around, free phone calls, USB charging, maps, directions and other local info like weather forecasts, Tube service updates and community messages.

While the London kiosks have a slightly different name (InLinks as opposed to just Links), they are identical in what they offer, and are also funded entirely by advertising revenue generated from the large screens on either side of the monoliths. Intersection — the affiliate of Alphabet’s Sidewalk Labs that leads the Link projects — decided not to enable free internet access through the kiosks’ in-built tablets in its second city, though. This feature had to be disabled in New York, you might remember, due to a public porn problem.

Like the LinkNYC program, later plans for the UK’s next-gen phone boxes include temperature, traffic, air and noise pollution sensors. The idea being the environmental monitoring aspect will create the data streams needed for future smart city projects. New York City now hosts almost 900 free gigabit booths, with "thousands more" to be installed over the next few years. By comparison, London’s starting small with only a handful of cabinets along one major street, but many more are expected to spring up around the capital and in other large UK cities before the year’s out.

Source: BT, InLinkUK


TEMPEST In A Software Defined Radio

The content below is taken from the original (TEMPEST In A Software Defined Radio), to continue reading please visit the site. Remember to respect the Author & Copyright.

In 1985, [Wim van Eck] published several technical reports on obtaining information from the electromagnetic emissions of computer systems. In one analysis, [van Eck] reliably obtained data from a computer system over hundreds of meters using just a handful of components and a TV set. There were obvious security implications, and now computer systems handling highly classified data are TEMPEST shielded – an NSA specification for protection from this van Eck phreaking.

Methods of van Eck phreaking are as numerous as they are awesome. [Craig Ramsay] at Fox-IT has demonstrated a new method of this interesting side-channel analysis using readily available hardware (PDF warning) that includes the ubiquitous RTL-SDR USB dongle.

The experimental setup for this research involved implementing AES encryption on two FPGA boards, a SmartFusion 2 SOC and a Xilinx Pynq board. After signaling the board to run its encryption routine, analog measurement was performed on various SDRs, recorded, processed, and each byte of the key recovered.

The results from different tests show the AES key can be extracted reliably in any environment, provided the antenna is in direct contact with the device under test. Using an improvised Faraday cage constructed out of mylar space blankets, the key can be reliably extracted at a distance of 30 centimeters. In an anechoic chamber, the key can be extracted over a distance of one meter. While this is a proof of concept, if this attack requires direct, physical access to the device, the attacker is an idiot for using this method; physical access is root access.

However, this is a novel use of software defined radio. As far as the experiment itself is concerned, the same result could be obtained much more quickly with a more relevant side-channel analysis device. The ChipWhisperer, for example, can extract AES keys using power signal analysis. The ChipWhisperer does require direct, physical access to a device, but if the alternative doesn’t work beyond one meter, that shouldn’t be a problem.


Shark Week: 6 Tips to Secure Your IT Tackle Box

The content below is taken from the original (Shark Week: 6 Tips to Secure Your IT Tackle Box), to continue reading please visit the site. Remember to respect the Author & Copyright.


Article Written by Erik Brown, CTO at GigaTrust

Scientists recently dispelled the myth that sharks attack humans because they mistake them for other prey. In fact, sharks can see clearly below the murky waters. But it’s not as easy for victims of phishing attacks to see what’s lurking behind an attached document or link within an email.

Email is the lifeblood of communications for organizations around the world. Among the 296 billion emails sent daily, there are dangerous emails lurking within. A successful email attack can cost companies as much as $4 million per incident. In honor of Discovery Channel’s upcoming Shark Week, let’s look at what these dangerous and misunderstood creatures can teach us about email and document security.

Beware of Phishing Attacks: Phishing attacks use “bait” to catch their victims and can cause significant damage. The 2016 DNC hack, for example, was a pretty large bite: a leak of 19,252 emails and 8,034 attachments. Like a good fisherman, organizations should test their lines in advance by training their employees and conducting mock attacks. To minimize the damage of a leak, consider a security system that enables encrypted email and secure document collaboration.

Know the Landscape: There are over 400 species of sharks worldwide, and 2016 had a record number of shark attacks and bites (107). Just as most beaches are safe, emails are a common part of business and are generally benign. As vacationers flock to beaches this summer, they should swim with confidence yet be aware of their surroundings. Don’t venture into deep water alone, and use the buddy system to keep track of your family and friends. Employees should send and read their emails with confidence as well, and have the ability to secure critical (deep-water) emails sent both inside and outside the company. A secure collaboration system that provides anyone-to-anyone secure document sharing can ensure that critical content is protected from harmful attacks.

Confidential Documents are Blood in the Water: Sharks have a very acute sense of smell and can detect injured creatures from miles away. They prey on a variety of sea life, and their attacks can be swift and vicious. Hackers send phishing attacks across an entire organization, and when they detect an entry point, they pounce. When employees email confidential documents, the sensitive information can fall prey to these attacks and cause massive damage. Enterprises can further improve security by encrypting confidential information on disk (at rest), during communication (in transit), and while viewing and editing (in use).

Just Keep Swimming: Some species of sharks have to move constantly to survive. Hackers are constantly growing new teeth in the form of ever more sophisticated attacks, so IT administrators should stay on top of the latest security news and threats. Applying security updates and evolving enterprise systems will help stay ahead of possible attacks.

Analyze the Depths: A shark’s body is supported by cartilage rather than bones, which helps it swim comfortably at multiple depths of water. Security professionals can get comfortable with the information they track, but hackers are swimming at multiple depths. Look for ways to gather and analyze new types of data to help detect malicious activities. Tracking the movement of and interaction with confidential email and documents is one way to gain insight into behavior across an organization. This and other behavior analytics can alert administrators to suspicious activity when an attack is in progress or before it really begins.

Layers of Personalities: Recent studies have indicated that sharks can have distinct personalities. Good fishermen know this: they ensure their bait and tackle are ready; they know which type of bait will lure different fish or sharks; they understand the strength of their lines and tackle. Enterprises also need to be prepared to protect their employees and information, especially as corporate data is increasingly accessed by remote employees and contractors on mobile devices. It’s virtually impossible for an enterprise to oversee the security and usage of every access point into the enterprise, and breaches can happen when individual files are viewed or shared. Adopting a layered security approach that considers different entry points and scenarios provides broad protection for the organization. While preventing attacks is the best option, be prepared to detect and respond to possible attacks that your prevention systems might miss. If a hacker gains access to critical internal systems, is the organization prepared? Is data secure and access restricted within the corporate network?

IT professionals navigate a sea of potential threats, and they never know when a shark may be lurking just out of sight. The ideas presented here will help enterprises prepare for the hackers (sharks) that may be swimming in your part of the Internet.

##

About the Author


Erik Brown joined GigaTrust in 2017 as Chief Technology Officer where he is responsible for the IT, engineering, and customer service functions.  He has over 25 years’ experience working with new and emerging technologies, most recently with mobile development. Erik’s career includes technology positions in successful start-ups and Fortune 500 companies. He has worked as a developer, architect, and leader in mobile development, digital imaging, Internet search, and healthcare. He also brings his experience with patent development, and as a technical author and conference speaker to the company.

Prior to joining GigaTrust, Erik served as an Associate Vice President, Innovation and Delivery Services in Molina Healthcare’s IT department where he oversaw a team of 40 people focused on improving and standardizing the use of new technology. He spearheaded the development and deployment of Molina’s first mobile application for home-based assessments, and created an internal Incubator program for identifying and funding new ideas within the IT department. Erik also worked as Program Manager and Architect in Unisys Corporation’s Federal Systems group as well as at several successful start-up companies, including Transarc Corporation (purchased by IBM in 1994) and PictureVision, Inc. (purchased by Eastman Kodak in 2000).

Erik is the author of two well-received books on Windows Forms programming and has spoken at numerous conferences, including the 2014 mHealth Summit. He is a graduate of the Society for Information Management’s Regional Leadership Forum, and is a certified project manager and scrum master (PMP, PMI-RMP, CSM, and ITIL). Erik holds BS and MS degrees in Mathematics from Carnegie Mellon University.


Adding custom intelligence to Gmail with serverless on GCP

The content below is taken from the original (Adding custom intelligence to Gmail with serverless on GCP), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you are using G Suite at work, you probably have to keep track of tons of data spread across Gmail, Drive, Docs, Sheets and Calendar. If only there was a simple, but scalable way to tap this data and have it nudge you based on signals of your choice. In this blog post, we’ll show you how to build powerful Gmail extensions using G Suite’s REST APIs, Cloud Functions and other fully managed services on Google Cloud Platform (GCP).


There are many interesting use cases for GCP and G Suite. For example, you could mirror your Google Drive files into Cloud Storage and run it through the Cloud Data Loss Prevention API. You could train your custom machine learning model with Cloud AutoML. Or you might want to export your Sheets data into Google BigQuery to merge it with other datasets and run analytics at scale. In this post, we’ll use Cloud Functions to specifically talk to Gmail via its REST APIs and extend it with various GCP services. Since email remains at the heart of how most companies operate today, it’s a good place to start and demonstrate the potential of these services working in concert.

Architecture of a custom Gmail extension

High email volumes can be hard to manage. Many email users have some sort of system in place, whether it’s embracing the “inbox zero,” setting up an elaborate system of folders and flags, or simply flushing that inbox and declaring “email bankruptcy” once in a while.

Some of us take it one step further and ask senders to help us prioritize our emails: consider an auto-response like “I am away from my desk, please resend with URGENT123 if you need me to get back to you right away.” In the same vein, you might think about prioritizing incoming email messages from professional networks such as LinkedIn by leaving a note inside your profile such as “I pay special attention to emails with pictures of birds.” That way, you can auto-prioritize emails from senders who (ostensibly) read your entire profile.

The email with a picture of an eagle is starred, but the one with a picture of a ferret is not.

Our sample app does exactly this, and we’re going to fully describe what this app is, how we built it, and walk through some code snippets.


There are three basic steps to building our Gmail extension:

1. Authorize access to G Suite data
2. Initialize a “watch” for Gmail changes
3. Process and take action on incoming emails

Without further ado, let’s dive in!

How we built our app

Building an intelligent Gmail filter involves three major steps, which we’ll walk through in detail below. Note that these steps are not specific to Gmail—they can be applied to all kinds of different G Suite-based apps.

Step 1. Authorize access to G Suite data

The first step is establishing initial authentication between G Suite and GCP. This is a universal step that applies to all G Suite products, including Docs, Slides, and Gmail.


In order to authorize access to G Suite data (without storing the user’s password), we need to get an authorization token from Google servers. To do this, we can use an HTTP function to generate a consent form URL using the OAuth2Client:
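The original snippet is not embedded in this copy of the post; a minimal sketch of that HTTP function, assuming google-auth-library and environment-supplied client credentials (the variable names here are illustrative), might look like:

const { OAuth2Client } = require('google-auth-library');

// Client ID, secret, and redirect URL are assumed to come from the
// function's environment; the names here are placeholders.
const oauth2Client = new OAuth2Client(
  process.env.CLIENT_ID,
  process.env.CLIENT_SECRET,
  process.env.CALLBACK_URL  // the deployed URL of oauth2callback
);

// HTTP-triggered function: send the user to Google's consent form.
exports.oauth2init = (req, res) => {
  const authUrl = oauth2Client.generateAuthUrl({
    access_type: 'offline',  // also request a refresh token
    scope: ['https://www.googleapis.com/auth/gmail.modify'],
  });
  res.redirect(authUrl);
};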

This function redirects the user to a generated URL that presents a consent form. The form then redirects to another “callback” URL of our choosing (in this case, oauth2callback) with an authorization code once the user provides consent. We save the auth token to Cloud Datastore, Google Cloud’s NoSQL database. We’ll use that token to fetch email on the user’s behalf later. Storing the token outside of the user’s browser is necessary because Gmail can publish message updates to a Cloud Pub/Sub topic, which triggers functions such as onNewMessage. (For those of you who aren’t familiar with Cloud Pub/Sub, it’s a GCP distributed notification and messaging system that guarantees at-least-once delivery.)

Let’s take a look at the oauth2callback function:
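The embedded snippet is likewise missing from this copy; a hedged sketch of its shape, reusing the oauth2Client from above and the helper functions listed just below, could be:

// HTTP-triggered function: trade the authorization code for tokens,
// persist them, then hand off to Gmail watch initialization.
exports.oauth2callback = async (req, res) => {
  try {
    // Exchange the one-time authorization code for OAuth 2 tokens.
    const { tokens } = await oauth2Client.getToken(req.query.code);
    const email = await getEmailAddress(tokens);  // helper described below
    await saveToken(email, tokens);               // helper described below

    // Continue to watch initialization for this address.
    res.redirect(`/initWatch?emailAddress=${encodeURIComponent(email)}`);
  } catch (err) {
    res.status(500).send(`Something went wrong: ${err}`);
  }
};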

This code snippet uses the following helper functions:

  • getEmailAddress: gets a user’s email address from an OAuth 2 token

  • fetchToken: fetches OAuth 2 tokens from Cloud Datastore (and auto-refreshes them if they are expired)

  • saveToken: saves OAuth 2 tokens to Cloud Datastore

Now, let’s move on to subscribing to Gmail changes by initializing a Gmail watch.

Step 2: Initialize a ‘watch’ for Gmail changes


Gmail provides the watch mechanism for publishing new email notifications to a Cloud Pub/Sub topic. When a user receives a new email, a message will be published to the specified Pub/Sub topic. We can use this message to invoke a Pub/Sub-triggered Cloud Function that processes the incoming message.

In order to start listening to incoming messages, we must use the OAuth 2 token that we obtained in Step 1 to first initialize a Gmail watch on a specific email address, as shown below. One important thing to note is that the G Suite API client libraries don’t support promises at the time of writing, so we use pify to handle the conversion of method signatures.
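The watch-initialization snippet is also absent from this copy; under the same assumptions (googleapis client library, pify for promisification, a fetchToken helper as described earlier, and a placeholder Pub/Sub topic name), a sketch might be:

const { google } = require('googleapis');
const pify = require('pify');

const gmail = google.gmail('v1');

// HTTP-triggered function: start (or renew) the Gmail watch for one address.
exports.initWatch = async (req, res) => {
  const email = req.query.emailAddress;
  const auth = await fetchToken(email);  // loads stored OAuth 2 credentials

  // pify converts the callback-style client method into a promise.
  await pify(gmail.users.watch)({
    auth,
    userId: 'me',
    resource: {
      labelIds: ['INBOX'],
      topicName: 'projects/YOUR_PROJECT/topics/YOUR_TOPIC',  // placeholder
    },
  });
  res.send(`Watching inbox of ${email}.`);
};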

Now that we have initialized the Gmail watch, there is an important caveat to consider: the Gmail watch only lasts for seven days and must be re-initialized by sending an HTTP request containing the target email address to initWatch. This can be done either manually or via a scheduled job. For brevity, we used a manual refresh system in this example, but an automated system may be more suitable for production environments. Users can initialize this implementation of Gmail watch functionality by visiting the oauth2init function in their browser. Cloud Functions will automatically redirect them (first to oauth2callback, and then initWatch) as necessary.

We’re ready to take action on the emails that this watch surfaces!  

Step 3: Process and take action on emails

Now, let’s talk about how to process incoming emails. The basic workflow is as follows:

1. Fetch new message IDs as notifications arrive
2. Retrieve each message’s contents and extract any images
3. Analyze the images with the Cloud Vision API
4. Star the message if a target object is detected

As discussed earlier, Gmail watches publish notifications to a Cloud Pub/Sub topic whenever new email messages arrive. Once those notifications arrive, we can fetch the new message IDs using the gmail.users.messages.list API call, and their contents using the gmail.users.messages.get API call. Since we’re only processing email content once, we don’t need to store any data externally.

Once our function extracts the images within an email, we can use the Cloud Vision API to analyze these images and check if they contain a specific object (such as birds or food). The API returns a list of object labels (as strings) describing the provided image. If that object label list contains bird, we can use the gmail.users.messages.modify API call to mark the message as “Starred”.
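The processing snippet itself is absent from this copy; a sketch of the glue, with the Pub/Sub payload handling simplified and the helpers listed below assumed, might read:

// Pub/Sub-triggered function: fires whenever the Gmail watch publishes
// a notification about the inbox.
exports.onNewMessage = async (pubsubMessage) => {
  // pubsubMessage.data is base64-encoded JSON naming the affected inbox.
  const { emailAddress } = JSON.parse(
    Buffer.from(pubsubMessage.data, 'base64').toString());

  const auth = await fetchToken(emailAddress);
  const [newest] = await listMessageIds(auth);            // helper below
  const message = await getMessageById(auth, newest.id);  // helper below

  // Label each attached image with the Cloud Vision API...
  // (message.images is an illustrative field name for the extracted images)
  const labels = await getLabelsForImages(message.images); // helper below
  // ...and star the message if a bird shows up.
  if (labels.includes('bird')) {
    await labelMessage(auth, message.id, 'STARRED');       // helper below
  }
};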

This code snippet uses the following helper functions:

  • listMessageIds: fetches the most recent message IDs using gmail.users.messages.list

  • getMessageById: gets the most recent message given its ID using gmail.users.messages.get

  • getLabelsForImages: detects object labels in each image using the Cloud Vision API

  • labelMessage: adds a Gmail label to the message using gmail.users.messages.modify

Please note that we’ve abstracted most of the boilerplate code into the helper functions. If you want to take a deeper dive, the full code can be seen here along with the deployment instructions.

Wrangle your G Suite data with Cloud Functions and GCP

To recap, in this tutorial we built a scalable application that processes new email messages as they arrive to a user’s Gmail inbox and flags them as important if they contain a picture of a bird. This is of course only one example. Cloud Functions makes it easy to extend Gmail and other G Suite products with GCP’s powerful data analytics and machine learning capabilities—without worrying about servers or backend implementation issues. With only a few lines of code, we built an application that automatically scales up and down with the volume of email messages in your users’ accounts, and you will only pay when your code is running. (Hint: for small-volume applications, it will likely be very cheap—or even free—thanks to Google Cloud Platform’s generous Free Tier.)

To learn more about how to programmatically augment your G Suite environment with GCP, check out our Google Cloud Next ‘18 session G Suite Plus GCP: Building Serverless Applications with All of Google Cloud, for which we developed this demo. You can watch that entire talk here:

G Suite Plus GCP: Building Serverless Applications with All of Google Cloud (Cloud Next '18)

You can find the source code for this project here.

Happy building!