Your Phone App: Mirror content from Android phone to Windows 10 PC


Windows 10 v1809 came with a lot of improvements and additions. There is some good news for Android users too. You can easily connect your Android phone with Windows 10 using the all-new Your Phone app. This feature was highly […]

This post Your Phone App: Mirror content from Android phone to Windows 10 PC is from TheWindowsClub.com.

Stephen Hawking’s last paper on black holes is now online


Stephen Hawking never stopped trying to unravel the mysteries surrounding black holes — in fact, he was still working to solve one of them shortly before his death. Now, his last research paper on the subject is finally available online through pre-…

Looking back at Google+


Google+ is shutting down at last. Google announced today it’s sunsetting its consumer-facing social network due to lack of user and developer adoption, low usage and engagement. Oh, and a data leak. It even revealed how poorly the network is performing today, noting that 90 percent of Google+ user sessions are less than five seconds long. Yikes.

But things weren’t always like this. Google+ was once heralded as a serious attempt to topple Facebook’s stranglehold on social networking, and was even met with excitement in its first days.

2011

June: The unveiling

The company originally revealed its new idea for social networking in June 2011. It wasn’t Google’s first foray into social, however. Google had made numerous attempts to offer a social networking service of some sort, with Orkut, launched in 2004 and shuttered in fall 2014; Google Friend Connect in 2008 (retired in 2012); and Google Buzz in 2010 (it closed the next year).

But Google+ was the most significant attempt the company had made, proclaiming at the time: “we believe online sharing is broken.”

The once top-secret project was the subject of several leaks ahead of its launch, allowing consumer interest in the project to build.

Led by Vic Gundotra and Bradley Horowitz, Google’s big idea to fix social was to get users to create groups of contacts, called “Circles,” in order to have more control over social sharing. That is, there are things that are appropriate for sharing with family or close friends, and other things that make more sense to share with co-workers, classmates, or those who share a similar interest like biking or cooking, for example.

But getting users to create groups is difficult because the process can be tedious. Google, instead, cleverly designed a user interface that made organizing contacts feel simpler, even fun, some argued. It was also better than the system for contact organization that Facebook was offering at the time.

Next thing you know, everyone was setting up their Circles by dragging-and-dropping little profile icons into these groups, and posting updates and photos to their newly created micro-networks.

Another key feature, “Sparks,” helped users find news and content related to a user’s particular interests. This way, Google could understand what people liked and wanted to track, without having an established base of topical pages for users to “Like,” as on Facebook. But it also paved the way for a new type of search. Instead of just returning a list of blue links, a search on Google+ could return the profiles of people relevant to the topic at hand, matching pages, and other content.

Google+ also introduced Hangouts, a way to video chat with up to 10 people in one of your Circles at once.

At the time, the implementation was described as almost magical. This was due to a number of innovative features, like the way the software focused in on the person talking, for example, and the way everyone could share content within a chat.

Early growth looked promising

Within two weeks, it seemed Google had a hit on its hands, as the network had reached 10 million users. Just over a month after launch, it had grown to 25 million. By October 2011, it reached 40 million. And by year-end, 90 million. Even if Google was only tracking sign-up numbers, it still appeared like a massive threat to Facebook.

Facebook CEO Mark Zuckerberg’s first comment about Google+, however, smartly pointed out that any Facebook competitor would have to build up a social graph to be relevant. Facebook, which had 750 million users at the time, had already done this. Google+ was getting the sign-ups, but whether users would remain active over time was still in question.

There were also early signs that Google+’s embrace of non-friends could be challenging. It had to roll out blocking mechanisms months after launch, as the network became too spammy with unwanted notifications. Over the years that followed, its inability to control the spam became a major issue.

Even as late as 2017, people were still complaining that spam made Google+ unusable.

July: Backlashes over brands and Real Names policy

In an effort to compete with Facebook, Google+ also enforced a “real names” policy. This angered many users who wanted to use pseudonyms or nicknames, especially when Google began deleting their accounts for non-compliance. This was a larger issue than merely losing social networking access, because losing a Google account meant losing Gmail, Documents, Calendar and access to other Google products, too.

The company also flubbed its handling of brands’ pages, banning all Google business profiles in an ill-conceived fashion, something it later admitted was a mistake.

It wouldn’t fix some of these problems for years, in fact. Eric Schmidt even reportedly once suggested finding another social network if you didn’t want to use your real name, a comment that came across as condescending.

August: Social search

Google+ came to Google Search in August. The company announced Google+ posts would begin appearing in “social search” results that showed when users were signed in. Google called this new toggle “search plus your world.” But its slice of “your world” was pretty limited, since it couldn’t see into the posts shared among your friends and followers on Facebook and Twitter.

2012

January: Forced Google+ account creation

If you can’t beat ’em, force ’em! Google began to require users to have a Google+ account in order to sign up for Gmail. It was not a user-friendly change, and was the start of a number of forced integrations to come.

March: Criticism mounts

TechCrunch’s Devin Coldewey argued that Google failed to play the long game in social, and was too ambitious in its attempt with Google+. All the network really should have started with was its “+1” button: the clicks would generate piles of data tied to users that could then be searchable, private by default, and shareable elsewhere.

June: Event spam goes viral

Spam remained an issue on Google+. This time, event spam had emerged, thanks to all the nifty integrations between Google+ and mission-critical products like Calendar.

Users were not thrilled that other people were able to “invite” them to events, and these automatically showed up on their Calendars even if they had not yet confirmed they would be attending. It made using Google+ feel like a big mistake.

November: Hangouts evolves

The year following Google+’s launch, there was already a lot of activity around Hangouts, which, interestingly, has since become one of the big products that will outlive its original Google+ home.

Video was a tough space to get right, which is why businesses like Skype were still thriving. And while Hangouts was designed for friends and family to use in Google+, Google was already seeing companies adopt the technology for meetings, and brands like the NBA use it for connecting with fans.

December: Google+ adds Communities

The focus on user interests in Google+ also continued to evolve this year with the launch of Communities, a way for people to set up topic-based forums on the site. The move was made in hopes of attracting more consumer interest, as growth had slowed.

2013

It’s not a destination; it’s a “social layer!” 

Google+ wasn’t working out as a “Facebook killer.” Engagement was low, distribution was mixed, and it seemed it was only being used by tech early adopters, not the mainstream. So the new plan was to double down on Google+ not being a destination website, like Facebook, but rather to make it a social layer across Google products.

It had already integrated Google+ with Gmail and Google Contacts, shortly after its launch. In June 2013, it offered a way for people to follow brands’ pages in Gmail.

It then decided to unify Google Talk (aka Gchat) with Google+ Messenger into Hangouts.

It launched a Google+ commenting system for Blogger.

It replaced Google sign-ins on third-party sites with Google+ logins.

It was all a bit much.

September: Google+ infiltrates YouTube

Then, most controversially, it took over YouTube comments. Now, if you wanted to comment on YouTube, you needed a Google+ account.

In other words, if Gmail’s then 200+ million users could juice up Google+, then maybe YouTube’s millions of commenters could, too, Google hoped.

People were not happy, to say the least.

It was a notable indication of how little love people had for Google+. YouTubers were downright pissed. One girl even crafted a profane music video in response, with lyrics like “You ruined our site and called it integration / I’m writing this song just to vent our frustration / Fuck you, Google Plusssssss!”

Google also started talking about Google+ as an “identity layer” with 500 million users to make it sound big.

2014

April: Vic Gundotra, Father of Google+, leaves Google

Google+ lost its founder. In April 2014, it was announced that Vic Gundotra, the father of Google+, was leaving the company. Google CEO Larry Page said at the time that the social network would still see investment, but it was a signal that a shift was coming in terms of Google’s approach.

Former TechCrunch co-editor Alexia Bonatsos (née Tsotsis) and editor Matthew Panzarino wrote at the time that Google+ was “walking dead,” having heard that Google+ was no longer going to be considered a product, but a platform.

The forced integrations of the past would be walked back, like those in Gmail and YouTube, and teams would be reshuffled.

July: Hangouts breaks free

Perhaps one of the most notable changes was letting Hangouts go free. Hangouts was a compelling product, too important to require a tie to Google+. In July 2014, Hangouts began to work without a Google+ account, rolled out to businesses, and got itself an SLA.

July: Google+ drops its “real name” rule and apologizes

Another signal that Google+ was shifting following Gundotra’s exit was when it abandoned its “real name” policy, three years after the user outrage.

While Google had started rolling back the real name policy in January 2012 by opening rules to include maiden names and select nicknames, it still displayed your real name alongside your chosen name. It was nowhere near what people wanted.

Now, Google straight-up apologized for its decision around real names and hoped the change would bring users back. It did not. It was too late.

2015

May: Google Photos breaks free

Following Hangouts, Google realized that Google+’s photo-sharing features also deserved to become their own, standalone product.

At Google I/O 2015, the company announced its Google Photos revamp. The new product took advantage of AI and machine learning capabilities that originated on Google+. This included allowing users to search photos for people, places and things, as well as an update to Google+’s “auto awesome” feature, which turned into the more robust Google Photos Assistant.

Later that year, Google Photos had scaled to 100 million monthly active users, after Google shut down Google+ Photos in August 2015.

July: Google+ pulled from YouTube

In July 2015, Google reversed course on YouTube integrations with Google+, so YouTube comments stayed on YouTube and not on Google+.

People were happy about this. But not happy enough to go back to Google+.

November: An all-new Google+ unveiled

Google+ got a big revamp in November 2015.

Bradley Horowitz, Google’s VP of Photos and Streams, and Google product director Luke Wroblewski had teamed up to redesign Google+ around what Google’s data indicated was working: Communities and Collections. Essentially, the new Google+ was focused on users and their interests. It let people network around topics, but not necessarily their personal connections.

Google also rolled out “About Me” pages as an alternative to sites like About.me.

The new site got a colorful coat of paint, too, but it never regained traction.

2016

January: Google+ pulled from Android Gaming service

Google decoupled Google+ from another core product by dropping the requirement to have an account with the social network in order to use the Google Play Games services.

August: Google+ pulled from Play Store

The unbundling continued, as Google’s Play Store stopped requiring users to have a Google+ account to write reviews.

Horowitz explained at the time that Google had heard from users “that it doesn’t make sense for your Google+ profile to be your identity in all the other Google products you use,” and it was responding accordingly.

August: Hangouts on Air moved to YouTube Live

One of the social network’s last exclusive features, Hangouts on Air, a way to broadcast a Hangout, moved to YouTube Live in 2016, as well.

2017

Google+ went fairly quiet. The site was still there, but the communities were filling with spam. Community moderators said they couldn’t keep up. Google’s inattention to the problem was a signal in and of itself that the grand Google+ experiment may be coming to a close.

January: Classic design phased out

Google+ forced the change over to the new design first previewed in late 2015.

In January 2017, it no longer allowed users to switch back to the old look. It also took the time to highlight groups that were popular on Google+ to counteract the narrative that the site was “dead.” (Even though it was.)

August: Google+ removed share count from +1 button

The once ubiquitous “+1” button, launched in spring 2012, was getting a revamp. It would no longer display the number of shares. Google said this was to make the button load more quickly. But it was really because the share counts were not worth touting anymore.

2018

October 2018: Google+ got its Cambridge Analytica moment

A security bug had allowed third-party developers to access Google+ user profile data since 2015; Google discovered it in March but decided not to inform users. In total, 496,951 users’ full names, email addresses, birth dates, gender, profile photos, places lived, occupation and relationship status were potentially exposed. Google says it doesn’t have evidence the data was misused, but it decided to shut down the consumer-facing Google+ site anyway, given its lack of use.

Data misuse scandals like Cambridge Analytica have damaged Facebook and Twitter’s reputations, but Google+ wasn’t similarly impacted. After all, Google was no longer claiming Google+ to be a social network. And, as its own data shows, the network that remained was largely abandoned.

But the company still had piles of user profile data on hand, which were put at risk. That may lead Google to face a fate similar to that of the more active social networks, in terms of being questioned by Congress or brought up in lawmakers’ discussions about regulations.

In hindsight, then, maybe it would have been better if Google had shut down Google+ years ago.

Alexa can now reserve conference rooms


Amazon is debuting a new feature that will allow businesses to use Alexa for booking conference rooms. The addition is part of the Alexa for Business platform, and works with linked calendars from either Google’s G Suite or Microsoft Exchange, as well as through an API that is arriving soon.

The feature is part of Amazon’s broader plan to put Alexa to work outside the home. At last year’s AWS re:Invent conference, Amazon first launched its Alexa for Business platform to allow companies to build out their own skills and integrations for practical business use cases. Amazon also spoke of integrations that would allow Alexa to support productivity tools and enterprise services, including those from Microsoft, Concur, Splunk, and others.

Shortly after, early partner WeWork integrated Echo devices in some of its own meeting rooms to test out how the smart assistant could be useful for things like managing meeting room reservations, or shutting off or turning on lights.

Now, Amazon wants to make it possible to book a room just by asking Alexa.

As the company explains, it’s common in workplaces for people to walk from room to room to grab a space for an ad-hoc meeting, or to find a space for a meeting that’s running over. But to reserve the room, they often have to pull out their laptop, run an application, do a search, and then look through the search results to find an available room. The Room Booking skill will allow them to ask Alexa for help instead.

The feature requires read/write permission to users’ calendar provider to enable, but can then be used to check the availability of the conference room you’re in, by asking “Alexa, is this room free?”

Users can then schedule the room on the fly by saying, “Alexa, book this room for half an hour,” or whatever time you choose.

Alexa will also be able to confirm if the room is booked, when asked “Alexa, who booked this room?”

Amazon is making this functionality available by way of a Room Booking API, too, which is soon arriving in beta. This will allow businesses to integrate the booking feature with their own in-house or third-party booking solutions. Some providers, including Joan and Robin, are already building a skill to add voice support to their own offerings, Amazon noted.

The feature is now one of several on the Alexa for Business platform, specifically focused on better managing meetings with Alexa’s assistance. Another popular feature is using Alexa to control conference room equipment, so you can start meetings by saying “Alexa, join the meeting.”

A handful of large companies have since adopted Alexa in their own workplaces, following the launch of the Alexa for Business platform, including Condé Nast, Valence, Capital One, and Brooks Brothers. And the platform itself is one of the many ways Amazon is exploring how Alexa can be used outside the home. It has also launched Alexa for Hospitality and worked with colleges on putting Echo Dots in student dorms. Last month it also introduced its first Alexa device for vehicles.

 

New features in Notepad in Windows 10 v1809


Microsoft has updated the good old Notepad app in Windows 10 v1809. The humble Notepad in Windows is a very basic text editor you can use for simple documents. Let us take a look at the new features in […]

This post New features in Notepad in Windows 10 v1809 is from TheWindowsClub.com.

RISC OS changes hands


And unlike the post on 1st April, this one is NOT a joke! It has just been announced that RISC OS Developments Ltd, the company formed last year by Andrew Rawnsley and Richard Brown, has acquired Castle Technology Ltd, and with it RISC OS itself. The operating system was originally developed by Acorn Computers for […]

BlackBerry races ahead of security curve with quantum-resistant solution


Quantum computing represents tremendous promise to completely alter technology as we’ve known it, allowing operations that weren’t previously possible with traditional computing. The downside of these powerful machines is that they could be strong enough to break conventional cryptography schemes. Today, BlackBerry announced a new quantum-resistant code signing service to help battle that possibility.

The service is meant to anticipate a problem that doesn’t exist yet. Perhaps that’s why BlackBerry hedged its bets in the announcement, saying, “The new solution will allow software to be digitally signed using a scheme that will be hard to break with a quantum computer.” Until we have fully functioning quantum computers capable of breaking current encryption, we probably won’t know for sure if this works.

But give BlackBerry credit for getting ahead of the curve and trying to solve a problem that has concerned technologists as quantum computers begin to evolve. The solution, which will be available next month, is actually the product of a partnership between BlackBerry and Isara Corporation, a company whose mission is to build quantum-safe security solutions. BlackBerry is using Isara’s cryptographic libraries to help sign and protect code as security evolves.
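BlackBerry and Isara haven’t published the scheme’s internals in this announcement, but quantum-resistant signing generally builds on hash-based signature families such as LMS, XMSS and SPHINCS+. As a toy illustration of the underlying idea only, and emphatically not BlackBerry’s actual implementation, here is a minimal Lamport one-time signature sketch in Python:

import hashlib
import os

def keygen(bits=256):
    # private key: two random 32-byte secrets per message-digest bit
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(bits)]
    # public key: the SHA-256 hash of each secret
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
    return sk, pk

def _digest_bits(message):
    digest = hashlib.sha256(message).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(message, sk):
    # reveal one secret from each pair, chosen by the corresponding digest bit;
    # a Lamport key must only ever sign a single message
    return [sk[i][bit] for i, bit in enumerate(_digest_bits(message))]

def verify(message, signature, pk):
    return all(hashlib.sha256(s).digest() == pk[i][bit]
               for i, (s, bit) in enumerate(zip(signature, _digest_bits(message))))

sk, pk = keygen()
firmware = b"example firmware image"
sig = sign(firmware, sk)
print(verify(firmware, sig, pk))           # True
print(verify(b"tampered image", sig, pk))  # False

Production schemes layer Merkle trees and state management on top of this so a key can sign more than one message; the point is simply that security rests on hash preimage resistance, which Shor’s algorithm does not break the way it breaks RSA and elliptic-curve signatures.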

“By adding the quantum-resistant code signing server to our cybersecurity tools, we will be able to address a major security concern for industries that rely on assets that will be in use for a long time. If your product, whether it’s a car or critical piece of infrastructure, needs to be functional 10-15 years from now, you need to be concerned about quantum computing attacks,” Charles Eagan, BlackBerry’s chief technology officer, said in a statement.

While experts argue over how long it could take to build a fully functioning quantum computer, most agree that it will take machines of between 50 and 100 qubits to begin realizing that vision. IBM released a 20-qubit computer last year and introduced a 50-qubit prototype. A qubit represents a single unit of quantum information.

At TechCrunch Disrupt last month, Dario Gil, IBM’s vice president of artificial intelligence and quantum computing, and Chad Rigetti, a former IBM researcher who is founder and CEO at Rigetti Computing, predicted we could be just three years away from the point where a quantum computer surpasses traditional computing.

IBM Quantum Computer. Photo: IBM

Whether it happens that quickly or not remains to be seen, but experts have been expressing security concerns around quantum computers as these machines grow more powerful, and BlackBerry is addressing that concern by coming up with a solution today, arguing that if you are creating critical infrastructure you need to future-proof your security.

BlackBerry, once known for highly secure phones, and one of the earliest popular business smartphones, has pivoted to be more of a security company in recent years. This announcement, made at the BlackBerry Security Summit, is part of the company’s focus on keeping enterprises secure.

How the data collected by dockless bikes can be useful for cities (and hackers)


In the 18 months or so since dockless bike-share arrived in the US, the service has spread to at least 88 American cities. (On the provider side, at least 10 companies have jumped into the business; Lime is one of the largest.) Some of those cities now have more than a year of data related to the programs, and they’ve started gleaning insights and catering to the increased number of cyclists on their streets.

Technology Review writer Elizabeth Woyke looks at how city planners in Seattle, WA, and South Bend, IN, use the immense stream of user-generated location data from dockless bike-sharing programs to improve urban mobility — and how hackers could potentially access and abuse this (supposedly anonymous) information. “In theory, the fact that people can park dockless bikes outside their exact destinations could make it easier for someone who hacked into the data to decode the anonymous identities that companies assign their users,” Woyke writes.

California bans default passwords on any internet-connected device


In less than two years, anything that can connect to the internet will come with a unique password — that is, if it's produced or sold in California. The "Information Privacy: Connected Devices" bill that comes into effect on January 1, 2020, e…

D-Wave offers the first public access to a quantum computer


Outside the crop of construction cranes that now dot Vancouver’s bright, downtown greenways, in a suburban business park that reminds you more of dentists and tax preparers, is a small office building belonging to D-Wave. This office — squat, angular and sun-dappled one recent cool Autumn morning — is unique in that it contains an infinite collection of parallel universes.

Founded in 1999 by Geordie Rose, D-Wave worked in relative obscurity on esoteric problems associated with quantum computing. When Rose was a PhD student at the University of British Columbia, he turned in an assignment that outlined a quantum computing company. His entrepreneurship teacher at the time, Haig Farris, found the young physicist’s ideas compelling enough to give him $1,000 to buy a computer and a printer to type up a business plan.

The company consulted with academics until 2005, when Rose and his team decided to focus on building usable quantum computers. The result, the Orion, launched in 2007 and was used to classify drug molecules and play Sudoku. The business now sells computers for up to $10 million to clients like Google, Microsoft and Northrop Grumman.

“We’ve been focused on making quantum computing practical since day one. In 2010 we started offering remote cloud access to customers and today, we have 100 early applications running on our computers (70 percent of which were built in the cloud),” said CEO Vern Brownell. “Through this work, our customers have told us it takes more than just access to real quantum hardware to benefit from quantum computing. In order to build a true quantum ecosystem, millions of developers need the access and tools to get started with quantum.”

Now their computers are simulating weather patterns and tsunamis, optimizing hotel ad displays, solving complex network problems and, thanks to a new, open-source platform, could help you ride the quantum wave of computer programming.

Inside the box

When I went to visit D-Wave, they gave us unprecedented access to the inside of one of their quantum machines. The computers, which are about the size of a garden shed, have a control unit on the front that manages the temperature as well as a queuing system to translate and communicate the problems sent in by users.

Inside the machine is a tube that, when fully operational, contains a small chip super-cooled to 0.015 Kelvin, or -459.643 degrees Fahrenheit or -273.135 degrees Celsius. The entire system looks like something out of the Death Star — a cylinder of pure data that the heroes must access by walking through a little door in the side of a jet-black cube.

It’s quite thrilling to see this odd little chip inside its super-cooled home. As the computer revolution maintained its predilection toward room-temperature chips, these odd and unique machines are a connection to an alternate timeline where physics is wrestled into submission in order to do some truly remarkable things.

And now anyone — from kids to PhDs to everyone in-between — can try it.

Into the ocean

Learning to program a quantum computer takes time. Because the processor doesn’t work like a classic universal computer, you have to train the chip to perform simple functions that your own cellphone can do in seconds. However, in some cases, researchers have found the chips can outperform classic computers by 3,600 times. This trade-off — the movement from the known to the unknown — is why D-Wave exposed their product to the world.

“We built Leap to give millions of developers access to quantum computing. We built the first quantum application environment so any software developer interested in quantum computing can start writing and running applications — you don’t need deep quantum knowledge to get started. If you know Python, you can build applications on Leap,” said Brownell.

To get started on the road to quantum computing, D-Wave built the Leap platform. Leap is an open-source toolkit for developers. When you sign up, you receive one minute’s worth of quantum processing unit time which, given that most problems run in milliseconds, is more than enough to begin experimenting. A queue manager lines up your code, runs it in the order received, and the answers are spit out almost instantly.

You can code for the QPU in Python or via Jupyter notebooks, and the platform lets you connect to the QPU with an API token. After writing your code, you can send commands directly to the QPU and then output the results. The programs are currently pretty esoteric and require a basic knowledge of quantum programming but, it should be remembered, classical computer programming was once daunting to the average user.
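For orientation, a minimal sketch using D-Wave’s open-source Ocean SDK (dwave-ocean-sdk), assuming a Leap account and an API token configured with dwave config create, looks roughly like this:

# a tiny QUBO: reward choosing either variable, penalize choosing both
from dwave.system import DWaveSampler, EmbeddingComposite

qubo = {("a", "a"): -1, ("b", "b"): -1, ("a", "b"): 2}

sampler = EmbeddingComposite(DWaveSampler())   # routes the problem to an available QPU
result = sampler.sample_qubo(qubo, num_reads=100)

print(result.first.sample)   # lowest-energy assignment found, e.g. {'a': 1, 'b': 0}
print(result.first.energy)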

I downloaded and ran most of the demonstrations without a hitch. These demonstrations — factoring programs, network generators and the like — essentially turned the concepts of classical programming into quantum questions. Instead of iterating through a list of factors, for example, the quantum computer creates a “parallel universe” of answers and then collapses each one until it finds the right answer. If this sounds odd, it’s because it is. The researchers at D-Wave argue all the time about how to imagine a quantum computer’s various processes. One camp sees the physical implementation of a quantum computer as simply a faster methodology for rendering answers. The other camp, itself aligned with Professor David Deutsch’s ideas presented in The Beginning of Infinity, sees the sheer number of possible permutations a quantum computer can traverse as evidence of parallel universes.

What does the code look like? It’s hard to read without understanding the basics, a fact that D-Wave engineers have accounted for by offering online documentation. For example, below is most of the factoring code for one of their demo programs, a bit of code that can be reduced to about five lines on a classical computer. However, when this function uses a quantum processor, the entire process takes milliseconds versus minutes or hours.

Classical

# Python program to find the factors of a number

# define a function
def print_factors(x):
    # This function takes a number and prints the factors
    print("The factors of", x, "are:")
    for i in range(1, x + 1):
        if x % i == 0:
            print(i)

# change this value for a different result
num = 320

# uncomment the following line to take input from the user
# num = int(input("Enter a number: "))

print_factors(num)

Quantum

@qpu_ha
def factor(P, use_saved_embedding=True):

    ####################################################################################################
    # get circuit
    ####################################################################################################

    construction_start_time = time.time()

    validate_input(P, range(2 ** 6))

    # get constraint satisfaction problem
    csp = dbc.factories.multiplication_circuit(3)

    # get binary quadratic model
    bqm = dbc.stitch(csp, min_classical_gap=.1)

    # we know that multiplication_circuit() has created these variables
    p_vars = ['p0', 'p1', 'p2', 'p3', 'p4', 'p5']

    # convert P from decimal to binary
    fixed_variables = dict(zip(reversed(p_vars), "{:06b}".format(P)))
    fixed_variables = {var: int(x) for (var, x) in fixed_variables.items()}

    # fix product qubits
    for var, value in fixed_variables.items():
        bqm.fix_variable(var, value)

    log.debug('bqm construction time: %s', time.time() - construction_start_time)

    ####################################################################################################
    # run problem
    ####################################################################################################

    sample_time = time.time()

    # get QPU sampler
    sampler = DWaveSampler(solver_features=dict(online=True, name='DW_2000Q.*'))
    _, target_edgelist, target_adjacency = sampler.structure

    if use_saved_embedding:
        # load a pre-calculated embedding
        from factoring.embedding import embeddings
        embedding = embeddings[sampler.solver.id]
    else:
        # get the embedding
        embedding = minorminer.find_embedding(bqm.quadratic, target_edgelist)
        if bqm and not embedding:
            raise ValueError("no embedding found")

    # apply the embedding to the given problem to map it to the sampler
    bqm_embedded = dimod.embed_bqm(bqm, embedding, target_adjacency, 3.0)

    # draw samples from the QPU
    kwargs = {}
    if 'num_reads' in sampler.parameters:
        kwargs['num_reads'] = 50
    if 'answer_mode' in sampler.parameters:
        kwargs['answer_mode'] = 'histogram'
    response = sampler.sample(bqm_embedded, **kwargs)

    # convert back to the original problem space
    response = dimod.unembed_response(response, embedding, source_bqm=bqm)

    sampler.client.close()

    log.debug('embedding and sampling time: %s', time.time() - sample_time)

 

“The industry is at an inflection point and we’ve moved beyond the theoretical, and into the practical era of quantum applications. It’s time to open this up to more smart, curious developers so they can build the first quantum killer app. Leap’s combination of immediate access to live quantum computers, along with tools, resources, and a community, will fuel that,” said Brownell. “For Leap’s future, we see millions of developers using this to share ideas, learn from each other and contribute open-source code. It’s that kind of collaborative developer community that we think will lead us to the first quantum killer app.”

The folks at D-Wave created a number of tutorials as well as a forum where users can learn and ask questions. The entire project is truly the first of its kind and promises unprecedented access to what amounts to the foreseeable future of computing. I’ve seen lots of technology over the years, and nothing has quite replicated the strange frisson associated with plugging into a quantum computer. Like the teletype and green-screen terminals used by early hackers like Bill Gates and Steve Wozniak, D-Wave has opened up a strange new world. How we explore it is up to us.

Perhaps The Ultimate Raspberry Pi Case: Your PC


One of the great joys of owning a 3D printer is being able to print custom cases for boards like the Raspberry Pi. What’s more, if you are using a desktop PC, you probably don’t have as many PCI cards in it as you used to. Everything’s moved to the motherboard. [Sneekystick] was using a Pi with a PC and decided the PC itself would make a great Pi case. He designed a bracket and it looks handy.

The bracket just holds the board in place. It doesn’t connect to the PC. The audio, HDMI, and power jacks face out for access. It would be tempting and possible to power the board from the PC supply, but to do that you have to be careful. Connecting the GPIO pins to 5V will work, but bypasses the input protection circuitry. We’ve read that you can find solder points near the USB plug and connect there, but if you do, you should block out the USB port. It might be nice to fill in that hole in the bracket if you planned to do that.

Of course, it isn’t hard to sequester a Pi inside a hard drive bay or some other nook or cranny, but the bracket preserves at least some of the output connectors. If you really want to bury a Pi in a piece of gear, you can always design a custom board and fit a “compute module” in it. These are made to be embedded, which means they have a row of pins instead of any I/O connectors. Of course, that also means more real work if you need any of those connectors.

We’ve seen cases that aim to turn the Pi into a desktop PC before. We’ve also seen those compute modules jammed into Game Boy cases more than once.

Southampton meeting – 9th October


The Southampton RISC OS User Group (SROUG) will next meet up on Tuesday, 9th October. Running from 7:00pm until 9:00pm, the meeting will take place at: Itchen College Sports Centre, Itchen College, Deacon Road, Southampton. There is no entry fee, and anyone with an interest in RISC OS is welcome to attend. There will be at least one […]

A rough guide to your next (or first) fog computing deployment


As with any kind of large-scale computing system deployment, the short answer to the question “what should my fog compute deployment look like?” is going to be “it varies.” But since that’s not a particularly useful piece of information, Cisco principal engineer and systems architect Chuck Byers gave an overview on Wednesday at the 2018 Fog World Congress of the many variables, both technical and organizational, that go into the design, care and feeding of a fog computing setup.

Byers offered both general tips about the architecture of fog computing systems, as well as slightly deeper dives into the specific areas that all fog computing deployments will have to address, including different types of hardware, networking protocols, and security.


Disney’s spray-painting drone could end the need for scaffolding


We’ve seen some pretty interesting work come out of Disney Research in the past, like techniques for digitally recreating teeth, makeup-projecting lamps, a group AR experience and a stick-like robot that can perform backflips. One of its latest proje…

In a quest to centralize all your media, Plex now includes web series


At CES, media software maker Plex said it would this year add support for podcasts, web series and other digital media to its platform. It then rolled out podcasts this May, and now it’s introducing its own curated collection of web series. The company today is launching Plex Web Shows in beta, which will offer unlimited, on-demand streaming of online shows from brands like GQ, Saveur, Epicurious, Pitchfork, Condé Nast, The New Yorker, Fandor, Vanity Fair, and others.

The shows will span a range of interests, including food, home and garden, science, technology, entertainment and pop culture, says Plex. In addition to shows from the big-name brand partners, which also include Bonnier, TWiT, Ovation and more, there will be a handful of shows from indie creators, like Epic Meal Time, ASAPscience, Household Hacker, People are Awesome, and The Pet Collective.

Plex tells us that there will be over 19,000 episodes across 67 shows at launch, with more on the way.

Some partners and Plex have revenue share agreements in place, the company also says, based on ad sales that Plex manages. The details are undisclosed.

Plex got its start as software for organizing users’ home media collections of video, music, and photos, but has in recent months been shifting its focus to address the needs of cord cutters instead. It launched tools for watching live TV through an antenna, and recording shows and movies to a DVR.

It’s also recently said it’s shutting down other features, like support for streaming content from Plex Cloud as well as Plex’s directory of plugins, in order to better focus on its new ambitions.

Along the way, Plex has also rolled out other features designed for media consumption, including not only podcasts but also a news hub within its app, thanks to its acquisition of streaming news startup Watchup.

With the launch of Web Shows, Plex is again finding a way to give its users something to watch without having to make the sort of difficult content deals that other live TV streaming services today do – like Sling TV, PlayStation Vue, or YouTube TV, for example.

“We really care about each user’s personal media experience, and want to be ‘the’ platform for the media that matters most to them,” Plex CEO Keith Valory tells TechCrunch. “This started with helping people with their own personal media libraries, and then we added over-the-air television, news, podcasts, and now web shows. Sources for quality digital content continues to explode, but the user experience for discovering and accessing it all has never been worse. It’s chaos,” he continues.

“This is the problem we are solving. We’re working hard to make tons of great content available in one beautiful app, and giving our users the tools to customize their own experience to include only the content that matters to them,” Valory adds.

Access to Web Shows is available across devices through the Plex app, where there’s now a new icon for “Web Shows.” From here, you’ll see the familiar “On Deck” section – those you’re following – as well as personalized recommendations, trending episodes, and links to view all the available web shows and a list of those you’re subscribed to.

You can also browse shows by category – like “Arts & Entertainment,” “Computers & Electronics,” “Science,” etc. Or find those that were “Recently Played” or are “New” to Plex.

Each episode will display information like the show’s length, synopsis, publish date and title, and will let you play, pause, mark as watched/unwatched, and add to your queue.

The launch of Web Shows is another step Plex is making towards its new goal of becoming a platform for all your media – not just your home collection, but everything that streams, too – like podcasts, web series and TV. (All it’s missing now is a Roku-like interface for jumping into your favorite on-demand streaming apps. That’s been on its long-term roadmap, however.)

Web Shows will be free to all of Plex’s now 19 million registered users, not just Plex Pass subscribers. The feature is live on iOS, Android, Android TV, and Apple TV.

Update This Classic Children’s Toy With a Raspberry Pi


Everyone under the age of 60 remembers the frustration of trying to generate a work of art on an Etch A Sketch. The mechanical drawing toy, introduced in 1960 by the Ohio Art […]

The post Update This Classic Children’s Toy With a Raspberry Pi appeared first on Geek.com.

Microsoft Ignite – New Windows 10 Features Coming to Intune


Intune plays an important part in Microsoft’s modern desktop strategy, allowing organizations to deploy and manage Windows 10 without an on-premises Active Directory domain. Microsoft announced several new features that will make it easier to manage Windows 10 using Intune.

Deploy Win32 Apps with Intune

Microsoft announced the ability, currently in preview, to deploy ‘most’ legacy Win32 apps using the Intune Management Extension, including MSI, setup.exe, and MSP files. System administrators will also be able to use Intune to remove these apps. Intune already had the ability to install line-of-business (LOB) and Microsoft Store apps, but this new capability will enable businesses to manage more legacy business apps using Intune. LOB applications are those that rely on a single MSI file with no external dependencies.

Microsoft says that this new feature was built by the same team that created the Windows app deployment capabilities in System Center Configuration Manager (SCCM) and that Intune will be able to evaluate requirement rules before an app starts to download and install, notifying users via the Action Center of install status and if a reboot is required. Legacy Win32 apps are packaged using the Intune Win32 application packaging tool, which converts installers into .intunewin files.

Security Baselines

Microsoft publishes security baselines for supported client and server versions of Windows as part of the Security Compliance Toolkit (SCT), which replaced the Security Compliance Manager. But the baselines are provided as Group Policy Object backups, which can’t be used with Intune because it relies on Mobile Device Management (MDM) rather than Group Policy.

For more information on SCT, see Microsoft Launches the Security Compliance Toolkit 1.0 on Petri.

To help organizations meet security requirements when using Intune to manage Windows 10, Microsoft will be making security baselines available in the Intune portal over the next couple of weeks. The baselines will be updated and maintained in the cloud and have been developed in coordination with the same team that develops the Group Policy security baselines.

Organizations will be able to deploy the baselines as provided or modify them to suit their own needs. But the best news is that Intune will validate whether devices are compliant and report if any devices aren’t meeting the required standards.

Third-Party Certification Authority Support

Finally, Microsoft announced that third-party certification authority (CA) support is coming to Intune. Third-party CAs, including Entrust Datacard and GlobalSign, have already signed up to deliver certificates to mobile devices running Windows, iOS, Android, and macOS using the Simple Certificate Enrollment Protocol (SCEP).

Microsoft is planning to add many new features to Intune, including a public preview for Android Enterprise fully managed devices, machine risk-based conditional access with threat protection for Microsoft 365 users, and deeper integration with Outlook mobile controls. For a complete list of the new features and improvements coming to Intune, Configuration Manager, and Microsoft 365, check out Microsoft’s blog post here.

The post Microsoft Ignite – New Windows 10 Features Coming to Intune appeared first on Petri.

HPE Simplifies Hybrid Cloud Data Protection With New Solutions for HPE Nimble Storage and HPE 3PAR


Hewlett Packard Enterprise (HPE) today announced new hybrid cloud data protection and copy data management solutions for its intelligent storage… Read more at VMblog.com.

The drywall-installing robot you’ve always dreamed about is finally here


The HRP-5P is a humanoid robot from Japan’s Advanced Industrial Science and Technology institute that can perform common construction tasks including — as we see above — installing drywall.

HRP-5P — maybe we can call it Herb? — uses environmental measurement, object detection and motion planning to perform various tasks.

Ever had to install large sections of drywall and wondered if there wasn’t a machine available that could do that for you while you take care of a bowl of nachos? Well, now there is: Japanese researchers have developed a humanoid worker robot, HRP-5P, which appears to be capable of performing the backbreaking task over and over again without breaking a sweat.

Sure, the robot still needs to pick up the pace a bit to meet construction deadlines, but it’s a start, and the machine could—maybe, one day—become a helpful tool in Japan’s rapidly aging society where skilled workers become increasingly rare.

Ooof, these are heavy!

Let’s see…this one goes here I guess.

Done and done! Now let’s see what my buddies from Skynet are up to this afternoon.

All images via 産総研広報’s video on YouTube.

How to Optimize Amazon S3 Performance


Amazon S3 is one of the most common storage options for many organizations. Being object storage, it is used for a wide variety of data types, from the smallest objects to huge datasets. All in all, Amazon S3 is a great service for storing a wide scope of data types in a highly available and resilient environment. Your S3 objects are likely being read and accessed by your applications, other AWS services, and end users, but are they optimized for the best performance? This post will discuss some of the mechanisms and techniques that you can apply to ensure you are getting the most optimal performance when using Amazon S3.

How to optimize Amazon S3 performance: Four best practices

 

Four best practices when working with S3

1. TCP Window Scaling

This is a method that enables you to enhance your network throughput performance by modifying the TCP packet header with a window scale option, which allows you to send data in a single window larger than the default 64KB. This isn’t something specific to Amazon S3; it operates at the protocol level, so you can perform window scaling on your client when connecting to any other server using TCP. More information on this can be found in RFC 1323.

When TCP establishes a connection between a source and destination, a 3-way handshake takes place, originating from the source (client). Looking at this from an S3 perspective, your client might need to upload an object to S3; before this can happen, a connection to the S3 servers needs to be created. The client sends a TCP packet with a specified TCP window scale factor in the header. This initial TCP request is known as a SYN request, part 1 of the 3-way handshake. S3 receives this request and responds with a SYN/ACK message back to the client with its supported window scale factor; this is part 2. Part 3 is an ACK message back to the S3 server acknowledging the response. On completion of this 3-way handshake, a connection is established and data can be sent between the client and S3.

Increasing the window size with a scale factor (window scaling) allows you to send larger quantities of data in a single window, and therefore to send more data at a quicker rate.

Window Scaling
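In practice, modern operating systems negotiate window scaling for you; what application code controls is the socket buffer size, which caps how large the advertised window can grow. A rough Python sketch (the 4 MB buffer size is illustrative, not a recommendation):

import socket

# larger socket buffers let the OS advertise (and use) a bigger scaled window;
# the window scale option itself is negotiated during the SYN/SYN-ACK handshake
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4 * 1024 * 1024)  # 4 MB send buffer
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)  # 4 MB receive buffer
sock.connect(("s3.amazonaws.com", 443))
print("negotiated send buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
sock.close()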

2. TCP Selective Acknowledgement (SACK)

Sometimes multiple packets are lost when using TCP, and which packets have been lost can be difficult to ascertain within a TCP window. As a result, all of the packets may be resent, even though some of them have already been received, which is inefficient. TCP selective acknowledgement (SACK) helps performance by notifying the sender of only the failed packets within that window, allowing the sender to simply resend those.

Again, the request for using SACK has to be initiated by the sender (the source client) within the connection establishment during the SYN phase of the handshake. This option is known as SACK-permitted. More information on how to use and implement SACK can be found within RFC-2018.
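Like window scaling, SACK is negotiated by the operating system rather than by your application code; on Linux it is normally enabled by default and exposed as a kernel setting. A quick, Linux-specific way to check (shown as an assumption about your client platform):

# check whether TCP selective acknowledgement is enabled on a Linux client
with open("/proc/sys/net/ipv4/tcp_sack") as f:
    print("TCP SACK enabled:", f.read().strip() == "1")
# equivalent shell check: sysctl net.ipv4.tcp_sack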

3. Scaling S3 Request Rates

On top of TCP window scaling and TCP SACK, S3 itself is already highly optimized for very high request throughput. In July 2018, AWS made a significant change to these request rates, as per the following AWS S3 announcement. Prior to this announcement, AWS recommended that you randomize prefixes within your bucket to help optimize performance; this is no longer required. You can now grow request rate performance by using multiple prefixes within your bucket, since each prefix gets its own request-rate allowance.

You are now able to achieve 3,500 PUT/POST/DELETE requests per second along with 5,500 GET requests per second. These limits apply per prefix; however, there is no limit on the number of prefixes that can be used within an S3 bucket. As a result, if you had 20 prefixes you could reach 70,000 PUT/POST/DELETE and 110,000 GET requests per second within the same bucket.

S3 storage operates across a flat structure, meaning that there is no hierarchy of folders: you simply have a bucket, and ALL objects are stored in a flat address space within that bucket. You are able to create folders and store objects within them, but these are not hierarchical; they are simply prefixes to the object key which help make the object unique. For example, if you have the following 3 data objects within a single bucket:
Presentation/Meeting.ppt
Project/Plan.pdf
Stuart.jpg

The ‘Presentation’ folder acts as a prefix to identify the object, and this full pathname is known as the object key. Similarly with the ‘Project’ folder; again, this is the prefix to the object. ‘Stuart.jpg’ does not have a prefix and so can be found within the root of the bucket itself.
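As a small boto3 sketch of the idea (the bucket and prefix names are made up), spreading objects across several prefixes gives each prefix its own slice of the per-prefix request-rate limits:

import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # assumed bucket name

# spread writes across distinct prefixes; each prefix gets its own request-rate allowance
for prefix in ("logs/", "images/", "reports/"):
    s3.put_object(Bucket=bucket, Key=prefix + "sample.txt", Body=b"hello")

# listings (and GET traffic) can likewise be scoped per prefix
response = s3.list_objects_v2(Bucket=bucket, Prefix="logs/")
for obj in response.get("Contents", []):
    print(obj["Key"])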

Learn how to create your first Amazon S3 bucket in this Hands-on Lab.

4. Integration of Amazon CloudFront

Another method used to help optimization is to incorporate Amazon CloudFront with Amazon S3. This works particularly well if the main request to your S3 data is a GET request. Amazon CloudFront is AWS’s content delivery network, which speeds up the distribution of your static and dynamic content through its worldwide network of edge locations.

Normally, when a user requests content from S3 (a GET request), the request is routed to the S3 service and the corresponding servers to return that content. However, if you’re using CloudFront in front of S3, then CloudFront can cache commonly requested objects. The GET request from the user is then routed to the closest edge location, which provides the lowest latency to deliver the best performance and return the cached object. This also helps to reduce your AWS S3 costs by reducing the number of GET requests to your buckets.
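As a minimal illustration, assuming you have already created a CloudFront distribution with the bucket as its origin (both domain names below are placeholders), clients simply fetch through the distribution’s domain instead of the S3 endpoint:

import urllib.request

# direct-to-S3 URL: every GET hits the bucket
s3_url = "https://my-example-bucket.s3.amazonaws.com/images/logo.png"

# the same object served through a CloudFront distribution fronting the bucket;
# cache hits are answered at the edge and never reach S3
cdn_url = "https://d1234example.cloudfront.net/images/logo.png"

with urllib.request.urlopen(cdn_url) as resp:
    data = resp.read()
print(len(data), "bytes fetched via CloudFront")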

This post has explained a number of different options that are available to help you identify ways to optimize the performance when working with S3 objects.

For further information on some of the topics mentioned in this post please take a look at our library content.

The post How to Optimize Amazon S3 Performance appeared first on Cloud Academy.

Adding custom intelligence to Gmail with serverless on GCP


If you are using G Suite at work, you probably have to keep track of tons of data spread across Gmail, Drive, Docs, Sheets and Calendar. If only there were a simple but scalable way to tap this data and have it nudge you based on signals of your choice. In this blog post, we’ll show you how to build powerful Gmail extensions using G Suite’s REST APIs, Cloud Functions and other fully managed services on Google Cloud Platform (GCP).

There are many interesting use cases for GCP and G Suite. For example, you could mirror your Google Drive files into Cloud Storage and run them through the Cloud Data Loss Prevention API. You could train your custom machine learning model with Cloud AutoML. Or you might want to export your Sheets data into Google BigQuery to merge it with other datasets and run analytics at scale. In this post, we’ll use Cloud Functions to specifically talk to Gmail via its REST APIs and extend it with various GCP services. Since email remains at the heart of how most companies operate today, it’s a good place to start and demonstrate the potential of these services working in concert.

Architecture of a custom Gmail extension

High email volumes can be hard to manage. Many email users have some sort of system in place, whether it’s embracing the “inbox zero,” setting up an elaborate system of folders and flags, or simply flushing that inbox and declaring “email bankruptcy” once in a while.

Some of us take it one step further and ask senders to help us prioritize our emails: consider an auto-response like “I am away from my desk, please resend with URGENT123 if you need me to get back to you right away.” In the same vein, you might think about prioritizing incoming email messages from professional networks such as LinkedIn by leaving a note inside your profile such as “I pay special attention to emails with pictures of birds.” That way, you can auto-prioritize emails from senders who (ostensibly) read your entire profile.

[Image] The email with a picture of an eagle is starred, but the one with a picture of a ferret is not.

Our sample app does exactly this, and we’re going to fully describe what this app is, how we built it, and walk through some code snippets.

Here is the architectural diagram of our sample app:

[Image: architectural diagram of the sample app]

There are three basic steps to building our Gmail extension:

1. Authorize access to G Suite data
2. Initialize a ‘watch’ for Gmail changes
3. Process and take action on emails

Without further ado, let’s dive in!

How we built our app

Building an intelligent Gmail filter involves three major steps, which we’ll walk through in detail below. Note that these steps are not specific to Gmail—they can be applied to all kinds of different G Suite-based apps.

Step 1: Authorize access to G Suite data

The first step is establishing initial authentication between G Suite and GCP. This is a universal step that applies to all G Suite products, including Docs, Slides, and Gmail.

In order to authorize access to G Suite data (without storing the user’s password), we need to get an authorization token from Google servers. To do this, we can use an HTTP function to generate a consent form URL using the OAuth2Client:
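
The post’s original snippet is not reproduced here, but a minimal sketch of such an HTTP function might look like the following; the client ID, secret, scope, and redirect URL are illustrative placeholders.

import { OAuth2Client } from "google-auth-library";

const oauth2Client = new OAuth2Client(
  process.env.GOOGLE_CLIENT_ID,
  process.env.GOOGLE_CLIENT_SECRET,
  "https://<region>-<project>.cloudfunctions.net/oauth2callback" // placeholder callback URL
);

export const oauth2init = (req: any, res: any) => {
  // Request offline access so we receive a refresh token we can store.
  const authUrl = oauth2Client.generateAuthUrl({
    access_type: "offline",
    scope: ["https://www.googleapis.com/auth/gmail.modify"],
  });
  res.redirect(authUrl);
};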

This function redirects a user to a generated URL that presents a form to the user. That form then redirects to another “callback” URL of our choosing (in this case, oauth2callback) with an authorization code once the user provides consent. We save the auth token to Cloud Datastore, Google Cloud’s NoSQL database. We’ll use that token to fetch email on the user’s behalf later. Storing the token outside of the user’s browser is necessary because Gmail can publish message updates to a Cloud Pub/Sub topic, which triggers functions such as onNewMessage. (For those of you who aren’t familiar with Cloud Pub/Sub, it’s a GCP distributed notification and messaging system that guarantees at-least-once delivery.)

Let’s take a look at the oauth2callback function:
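
As a hedged sketch rather than the post’s exact code, the callback might exchange the authorization code for tokens, persist them, and hand off to the watch initializer. It reuses the oauth2Client from the previous sketch and the helper names listed below.

export const oauth2callback = async (req: any, res: any) => {
  try {
    // Exchange the one-time authorization code for OAuth 2 tokens.
    const { tokens } = await oauth2Client.getToken(req.query.code as string);
    oauth2Client.setCredentials(tokens);

    // Identify the user so the tokens can be stored under their address.
    const email = await getEmailAddress(oauth2Client);
    await saveToken(email, tokens);

    // Hand off to the watch-initialization function described in Step 2.
    res.redirect(`/initWatch?emailAddress=${encodeURIComponent(email)}`);
  } catch (err) {
    res.status(500).send(`Authorization failed: ${err}`);
  }
};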

This code snippet uses the following helper functions (minimal sketches of them follow the list):

  • getEmailAddress: gets a user’s email address from an OAuth 2 token

  • fetchToken: fetches OAuth 2 tokens from Cloud Datastore (and auto-refreshes them if they are expired)

  • saveToken: saves OAuth 2 tokens to Cloud Datastore
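
For illustration, minimal sketches of these helpers might look like the following, assuming Cloud Datastore via @google-cloud/datastore and the Gmail API via googleapis; the real fetchToken also refreshes expired tokens, which is omitted here.

import { Datastore } from "@google-cloud/datastore";
import { google } from "googleapis";
import { OAuth2Client, Credentials } from "google-auth-library";

const datastore = new Datastore();

// getEmailAddress: ask Gmail who the authorized user is.
export async function getEmailAddress(auth: OAuth2Client): Promise<string> {
  const gmail = google.gmail({ version: "v1", auth });
  const profile = await gmail.users.getProfile({ userId: "me" });
  return profile.data.emailAddress!;
}

// saveToken: persist the OAuth 2 tokens under the user's email address.
export async function saveToken(email: string, tokens: Credentials): Promise<void> {
  await datastore.save({ key: datastore.key(["oauth2Token", email]), data: tokens });
}

// fetchToken: load previously stored tokens for a user (no refresh logic in this sketch).
export async function fetchToken(email: string): Promise<Credentials | undefined> {
  const [entity] = await datastore.get(datastore.key(["oauth2Token", email]));
  return entity as Credentials | undefined;
}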

Now, let’s move on to subscribing to Gmail changes by initializing a Gmail watch.

Step 2: Initialize a ‘watch’ for Gmail changes

Gmail provides the watch mechanism for publishing new email notifications to a Cloud Pub/Sub topic. When a user receives a new email, a message will be published to the specified Pub/Sub topic. We can use this message to invoke a Pub/Sub-triggered Cloud Function that processes the incoming message.

In order to start listening to incoming messages, we must use the OAuth 2 token that we obtained in Step 1 to first initialize a Gmail watch on a specific email address,  as shown below. One important thing to note is that the G Suite API client libraries don’t support promises at the time of writing, so we use pify to handle the conversion of method signatures.
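
A hedged sketch of that initialization is shown below. It assumes a current googleapis release (which returns promises directly, so the pify wrapper isn't shown), and the Pub/Sub topic name is a placeholder.

import { google } from "googleapis";

export async function initWatch(emailAddress: string): Promise<void> {
  // Reuse the stored tokens and OAuth 2 client from Step 1.
  const tokens = await fetchToken(emailAddress);
  oauth2Client.setCredentials(tokens!);

  const gmail = google.gmail({ version: "v1", auth: oauth2Client });
  await gmail.users.watch({
    userId: "me",
    requestBody: {
      topicName: "projects/<your-project>/topics/gmail-messages", // placeholder topic
      labelIds: ["INBOX"], // only watch the inbox
    },
  });
}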

Now that we have initialized the Gmail watch, there is an important caveat to consider: the watch only lasts for seven days and must be re-initialized by sending an HTTP request containing the target email address to initWatch. This can be done either manually or via a scheduled job. For brevity, we used a manual refresh in this example, but an automated system may be more suitable for production environments. Users can initialize this Gmail watch implementation by visiting the oauth2init function in their browser; Cloud Functions will automatically redirect them (first to oauth2callback, and then to initWatch) as necessary.

We’re ready to take action on the emails that this watch surfaces!  

Step 3: Process and take action on emails

Now, let’s talk about how to process incoming emails. The basic workflow is as follows:

[Image: workflow for processing incoming email messages]

As discussed earlier, Gmail watches publish notifications to a Cloud Pub/Sub topic whenever new email messages arrive. Once those notifications arrive, we can fetch the new message IDs using the gmail.users.messages.list API call, and their contents using gmail.users.messages.get. Since we’re only processing email content once, we don’t need to store any data externally.

Once our function extracts the images within an email, we can use the Cloud Vision API to analyze these images and check if they contain a specific object (such as birds or food). The API returns a list of object labels (as strings) describing the provided image. If that object label list contains bird, we can use the gmail.users.messages.modify API call to mark the message as “Starred”.
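
A simplified sketch of that pipeline might look like the following; extractImages is a hypothetical stand-in for the post's attachment-handling code, and "bird" is the example label.

import { google } from "googleapis";
import { ImageAnnotatorClient } from "@google-cloud/vision";

const visionClient = new ImageAnnotatorClient();

// Hypothetical helper (not shown here): pulls image attachments out of a message.
declare function extractImages(gmail: any, message: any): Promise<Buffer[]>;

export async function processNewMessages(auth: any): Promise<void> {
  const gmail = google.gmail({ version: "v1", auth }); // auth: an authorized OAuth 2 client

  // listMessageIds: the most recent message IDs in the inbox.
  const list = await gmail.users.messages.list({ userId: "me", maxResults: 5 });

  for (const { id } of list.data.messages ?? []) {
    // getMessageById: fetch the full message.
    const msg = await gmail.users.messages.get({ userId: "me", id: id! });

    // getLabelsForImages: run Cloud Vision label detection on each image.
    for (const content of await extractImages(gmail, msg.data)) {
      const [result] = await visionClient.labelDetection({ image: { content } });
      const labels = (result.labelAnnotations ?? []).map((l) => l.description?.toLowerCase());

      // labelMessage: star the message if any image contains a bird.
      if (labels.includes("bird")) {
        await gmail.users.messages.modify({
          userId: "me",
          id: id!,
          requestBody: { addLabelIds: ["STARRED"] },
        });
      }
    }
  }
}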

This code snippet uses the following helper functions:

  • listMessageIds: fetches the most recent message IDs using gmail.users.messages.list

  • getMessageById: gets the most recent message given its ID using gmail.users.messages.get

  • getLabelsForImages: detects object labels in each image using the Cloud Vision API

  • labelMessage: adds a Gmail label to the message using gmail.users.messages.modify

Please note that we’ve abstracted most of the boilerplate code into the helper functions. If you want to take a deeper dive, the full code can be seen here along with the deployment instructions.

Wrangle your G Suite data with Cloud Functions and GCP

To recap, in this tutorial we built a scalable application that processes new email messages as they arrive in a user’s Gmail inbox and flags them as important if they contain a picture of a bird. This is, of course, only one example. Cloud Functions makes it easy to extend Gmail and other G Suite products with GCP’s powerful data analytics and machine learning capabilities—without worrying about servers or backend implementation issues. With only a few lines of code, we built an application that automatically scales up and down with the volume of email messages in your users’ accounts, and you only pay when your code is running. (Hint: for small-volume applications, it will likely be very cheap—or even free—thanks to Google Cloud Platform’s generous Free Tier.)

To learn more about how to programmatically augment your G Suite environment with GCP, check out our Google Cloud Next ‘18 session G Suite Plus GCP: Building Serverless Applications with All of Google Cloud, for which we developed this demo. You can watch that entire talk here:

G Suite Plus GCP: Building Serverless Applications with All of Google Cloud (Cloud Next '18)

You can find the source code for this project here.

Happy building!

Azure Advisor has new recommendations for you

The content below is taken from the original ( Azure Advisor has new recommendations for you), to continue reading please visit the site. Remember to respect the Author & Copyright.

Azure Advisor is your free, personalized guide to Azure best practices. It analyzes your Azure usage and configurations and helps you optimize your resources for high availability, security, performance, and cost. We’re constantly adding more to Advisor and are excited to share a bundle of new recommendations and integrations so you can get more out of Azure.

Create or update table statistics in your SQL Data Warehouse tables

Table statistics are important for ensuring optimal query performance. The SQL Data Warehouse query optimizer uses up-to-date statistics to estimate the cardinality or number of rows in the query result, which generates a higher-quality query plan for faster performance.

Advisor now has recommendations to help you boost your SQL Data Warehouse query performance. It will identify tables with outdated or missing table statistics and recommend that you create or update them.

Remove data skew on your SQL Data Warehouse table

Data skew occurs when one distribution has more data than others, which can cause unnecessary data movement or resource bottlenecks when running your workload and slow your performance. Advisor will detect distribution data skew greater than 15 percent and recommend that you redistribute your data and revisit your table distribution key selections.

Enable soft delete on your Azure Storage blobs

Enable soft delete on your storage account so that deleted Azure Storage blobs transition to a soft deleted state instead of being permanently deleted. When data is overwritten, a soft deleted snapshot is generated to save the state of the overwritten data. This allows you to recover in the event of accidental deletion or overwrites. Advisor now identifies Azure Storage accounts that don’t have soft delete enabled and suggests you enable it.
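
If you prefer to turn this on programmatically rather than through the portal, a minimal sketch with the Azure Storage SDK for JavaScript might look like this; the connection string and the seven-day retention window are placeholders.

import { BlobServiceClient } from "@azure/storage-blob";

async function enableSoftDelete(): Promise<void> {
  const client = BlobServiceClient.fromConnectionString(
    process.env.AZURE_STORAGE_CONNECTION_STRING! // placeholder connection string
  );
  // Keep deleted blobs (and overwrite snapshots) recoverable for 7 days.
  await client.setProperties({
    deleteRetentionPolicy: { enabled: true, days: 7 },
  });
}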

Migrate your Azure Storage account to Azure Resource Manager

Azure Resource Manager (ARM) is the most up-to-date way to manage Azure resources, with template deployments, additional security options, and the ability to upgrade to a GPv2 account to use Azure Storage’s latest features. Advisor will identify any standalone storage accounts that are using the classic deployment model and recommend migrating to the ARM deployment model.

Create additional Azure ExpressRoute circuits for customers using Microsoft Peering for Office 365

Customers using Microsoft Peering for Office 365 should have at least two ExpressRoute circuits at different locations to avoid having a single point of failure. Advisor will identify when there is only one ExpressRoute circuit and recommend creating another.

Azure Advisor is now integrated into the Azure Virtual Machines (VMs) experience

When you are viewing your VM resources, you will now see a notification if you have Azure Advisor recommendations that are related to that resource. There will be a blue notification at the top of the experience that indicates the number of Advisor recommendations you have and the description of one of those recommendations. Clicking on the notification will take you to the full Advisor experience where you can see all the recommendations for that resource.

Azure Advisor recommendations are available in Azure Cost Management

Azure Advisor recommendations are now integrated in the new Azure Cost Management experience that is in public preview for Enterprise Agreement (EA) enrollments. Clicking on Advisor recommendations on the left menu will open Advisor to the cost tab. Integrating Advisor with Azure Cost Management creates a single location for cost recommendations. This allows you to have the same experience whether you are coming from Azure Cost Management or looking at cost recommendations directly from Azure Advisor.

Review your Azure Advisor recommendations

Learn more about Azure Advisor and review your Advisor recommendations in the Azure portal today to start optimizing your Azure resources for high availability, security, performance, and cost. For help getting started, visit the Advisor documentation.

The Google graveyard: Remembering three dead search engines

The content below is taken from the original ( The Google graveyard: Remembering three dead search engines), to continue reading please visit the site. Remember to respect the Author & Copyright.

Buffy the Vampire Slayer was the first show on American television to use the word "Google" as a transitive verb. It was 2002, in the fourth episode of the show's seventh and final season. Buffy, Willow, Xander and the gang are trying to help Cassie,…

Announcing private preview of Azure VM Image Builder

The content below is taken from the original ( Announcing private preview of Azure VM Image Builder), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today I am excited to announce the private preview of Azure VM Image Builder, a service that gives users an image-building pipeline in Azure. Creating standardized virtual machine (VM) images allows organizations to migrate to the cloud and ensures consistency in their deployments. Users commonly want VMs to include predefined security and configuration settings as well as application software they own. However, setting up your own image build pipeline normally requires infrastructure and configuration. With Azure VM Image Builder, you can take an ISO or Azure Marketplace image and start creating your own golden images in a few steps.

How it works

Azure VM Image Builder lets you start with either a Linux-based Azure Marketplace VM or Red Hat Enterprise Linux (RHEL) ISO and begin to add your own customizations. Your customizations can be added in the form of a shell script, and because the VM Image Builder is built on HashiCorp Packer, you can also import your existing Packer shell provisioner scripts. As the last step, you specify where you would like your images hosted, either in the Azure Shared Image Gallery or as an Azure Managed Image. See below for a quick video on how to create a custom image using the VM Image Builder.

[Video: creating a custom image with Azure VM Image Builder]

For the private preview, we are supporting these key features:

  • Migrating an existing image customization pipeline to Azure. Import your existing shell scripts or Packer shell provisioner scripts.
  • Migrating your Red Hat subscription to Azure using Red Hat Cloud Access. Automatically create Red Hat Enterprise Linux VMs with your eligible, unused Red Hat subscriptions.
  • Integration with Azure Shared Image Gallery for image management and distribution.
  • Integration with existing CI/CD pipelines. Simplify image customization as an integral part of your application build and release process, as shown here:

[Image: image customization as part of a CI/CD pipeline]

If you are attending Microsoft Ignite, feel free to join us at breakout session BRK3193 to learn more about this service.

Frequently asked questions

Will Azure VM Image Builder support Windows?

For private preview, we will support Azure Marketplace Linux images (specifically Ubuntu 16.04 and 18.04). Support for Windows VMs is on our roadmap.

Can I integrate Azure VM Image Builder into my existing image build pipeline?

You can call the VM Image Builder API from your existing tooling.

Is VM Image Builder essentially Packer as a Service?

The VM Image Builder API shares a similar style with Packer manifests and is optimized for building images for Azure, including support for Packer shell provisioner scripts.

Do you support image lifecycle management in the preview?

For private preview, we will only support creation of images, but not ongoing updates. The ability to update an existing custom image is on our roadmap.

How much does VM Image Builder cost?

For private preview, Azure VM Image Builder is free. Azure Storage used by images is billed at standard pricing rates.

Sign up today for the private preview

I hope you sign up for the private preview and give us feedback. Register and we will begin sending out more information in October.

Azure Active Directory authentication for Azure Files SMB access now in public preview

The content below is taken from the original ( Azure Active Directory authentication for Azure Files SMB access now in public preview), to continue reading please visit the site. Remember to respect the Author & Copyright.

We are excited to announce the preview of Azure Active Directory authentication for Azure Files SMB access, leveraging Azure AD Domain Services (AAD DS). Azure Files offers fully managed file shares in the cloud that are accessible via the industry-standard SMB protocol. Integration with AAD enables SMB access to Azure file shares using AAD credentials from AAD DS domain-joined Windows VMs. In addition, Azure Files supports preserving, inheriting, and enforcing NTFS ACLs on all folders and files in a file share.

With this capability, we can extend the traditional identity-based share access experience that you are most familiar with to Azure Files. For lift-and-shift scenarios, you can sync your on-premises AD to AAD, migrate existing files with their ACLs to Azure Files, and enable your organization to access file shares with the same credentials and no impact to the business.

In addition to this, we have enhanced our access control story by enforcing granular permission assignment on the share, folder, and file levels. You can use Azure Files as the storage solution for project collaboration, leveraging folder or file level ACLs to protect your organization’s sensitive data.

Previously, when you imported files to Azure file shares, only the data was preserved, not the ACLs. If you used Azure Files as a cloud backup, all access assignments would be lost when you restored your existing file shares from Azure Files. Now, Azure Files can preserve your ACLs along with your data, providing you a consistent storage experience.

Here are the key capabilities introduced in this preview:

  • Support share-level permission assignment using role-based access control (RBAC).

Similar to the traditional Windows file sharing schema, you can give authorized users share-level permissions to access your Azure file share.

  • Enforce NTFS folder- and file-level permissions.

Azure Files enforces standard NTFS file permissions at the folder and file level, including the root directory. You can simply use icacls or PowerShell to set or change permissions on mounted file shares.

  • Continued support for the storage account key as a superuser experience.

Mounting Azure file shares using the storage account key will continue to be supported for the superuser scenario. It bypasses all access control restrictions configured at the share, folder, or file level.

  • Preserve NTFS ACLs for data import to Azure Files.

We support preserving NTFS ACLs for data import to Azure Files over SMB. You can copy the ACLs on your directories and files simply with the robocopy command.

Getting started

You can read more about the benefits of Azure Files AAD authentication and follow this step-by-step guidance to get started. Azure Files AAD integration public preview is supported in a couple of selected regions. Also refer to the Azure Preview guidelines for general information on using preview features.

Feedback

We look forward to hearing your feedback on this feature; please email us at [email protected].