Tiny Websites have no Server

The content below is taken from the original ( Tiny Websites have no Server), to continue reading please visit the site. Remember to respect the Author & Copyright.

A big trend in web services right now is so-called serverless computing, such as Amazon’s Lambda service. The idea is that you don’t have a dedicated server waiting for requests for a specific purpose. Instead, you have one server (such as Amazon’s) listening for lots of requests, and on demand you spin up an environment to process that request. Conceptually, it lets you run a bit of JavaScript or some other language “in the cloud” with no dedicated server. A new concept — https://itty.bitty.site — takes this one step further. The site creates self-contained websites where the content is encoded in the URL itself.

Probably the best example is to simply go to the site and click on “About itty bitty.” That page is itself encoded in its own URL. If you then click on the App link, you’ll see a calculator, showing that this isn’t just for snippets of text. While this does depend on the itty.bitty.site web host to provide the decoding framework, the decoding is done totally in your browser and the code is open source. What that means is you could host it on your own server, if you wanted to.

At first, this seems like a novelty until you start thinking about it. A small computer with an Internet connection could easily formulate these URLs to create web pages. A bigger computer could even host the itty.bitty server. Then there’s the privacy issue. At first, we were thinking that a page like this would be hard to censor since there is no centralized server with the content. But you still need the decoding framework. However, that wouldn’t stop a sophisticated user from “redirecting” to another — maybe private — decoding website and reading the page regardless of anyone’s disapproval of the content.

That might be the most compelling case of all. You can encode something in a URL and then anyone with that URL could read your content even if someone shuts down your servers (or the itty bitty servers). The itty bitty server just hands out some generic JavaScript. The website data is stored as a fragment which — interestingly enough — doesn’t get sent to the server.

That means the server doesn’t even get a look at what you are trying to decode. It just provides the decoding framework and your browser does all the rest of the work locally. We’d love to see someone fork the project and add simple encryption, too. Currently, the text is compressed and base 64 encoded, but anyone with the URL can decode what it says. An encryption key would allow you to send URLs in the clear that only some people could decode and would be very hard to suppress.
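
To make the idea concrete, here is a minimal Python sketch of packing a page into a URL fragment the way the article describes: compress, then base64-encode. It assumes LZMA-style compression and a made-up URL layout purely for illustration; the real itty.bitty format may differ in its details.

    import base64
    import lzma

    def make_fragment_url(title, html, base="https://itty.bitty.site/#"):
        """Compress the page body and pack it into a URL fragment (rough approximation)."""
        packed = lzma.compress(html.encode("utf-8"))           # compress the content
        encoded = base64.urlsafe_b64encode(packed).decode()    # make it URL-safe
        return base + title + "/" + encoded                    # the fragment never reaches the server

    url = make_fragment_url("hello", "<h1>Tiny page, no server</h1>")
    print(len(url), url[:80])

Everything after the # in the URL is the fragment, so even the server that hosts the decoder never sees the content itself.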

The itty bitty code itself is an app since you can edit most pages with an edit link at the top right corner. If you don’t like editing in place, the site explains how you can use a generic HTML file or use an online HTML editor, if you prefer.

There are limitations. You probably can’t host graphics internally — you’d need an external place to point to for pictures. The URLs also get really long, which means some services like Twitter will cut them off. We figure you could use a URL shortener if you needed to. There’s also a QR code generator baked right in.

We could see this replacing a server on a Raspberry Pi project. While this isn’t technically serverless computing, it did remind us of how to write code for assistants.

Backup vs. archive: Why it’s important to know the difference

The content below is taken from the original ( Backup vs. archive: Why it’s important to know the difference), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you want to make a backup person apoplectic, call an old backup an archive.

It’s just shy of saying that data on a RAID array doesn’t need to be backed up. The good news is that the differences between backup and archive are quite stark and easy to understand.

What is backup?

Backup is a copy of data created to restore said data in case of damage or loss. The original data is not deleted after a backup is made.


Predict your future costs with Google Cloud Billing cost forecast

The content below is taken from the original ( Predict your future costs with Google Cloud Billing cost forecast), to continue reading please visit the site. Remember to respect the Author & Copyright.

With every new feature we introduce to Google Cloud Billing, we strive to provide your business with greater flexibility, control, and clarity so that you can better align your strategic priorities with your cloud usage. In order to do so, it’s important to be able to answer key questions about your cloud costs, such as:

  • “How is my current month’s Google Cloud Platform (GCP) spending trending?”
  • “How much am I forecasted to spend this month based on historical trends?”
  • “Which GCP product or project is forecasted to cost me the most this month?”

Today, we are excited to announce the availability of a new cost forecast feature for Google Cloud Billing. This feature makes it easier to see at a glance how your costs are trending and how much you are projected to spend. You can now forecast your end-of-month costs for whatever bucket of spend is important to you, from your entire billing account down to a single SKU in a single project.

View your current and forecasted costs

Get started

Cost forecast for Google Cloud Billing is now available to all accounts. Get started by navigating to your account’s billing page in the GCP console and opening the reports tab in the left-hand navigation bar.

You can learn more about the cost forecast feature in the billing reports documentation. Also, if you’re attending Google Cloud Next ‘18, check out our session on Monitoring and Forecasting Your GCP Costs.


The Difference Between Ripple and XRP

The content below is taken from the original ( The Difference Between Ripple and XRP), to continue reading please visit the site. Remember to respect the Author & Copyright.

To help clarify how Ripple, the technology company, and XRP, the independent digital asset, are distinctly different, we’ve outlined in a simple infographic the most frequently asked questions related to the two:

  • What is it?
  • How are they related to each other?
  • Who controls whether it succeeds or fails?
  • Who uses it?
  • Who owns it?

For more information about Ripple, visit our website.

The post The Difference Between Ripple and XRP appeared first on Ripple.

Happy 10th Birthday Hyper-V!

The content below is taken from the original ( Happy 10th Birthday Hyper-V!), to continue reading please visit the site. Remember to respect the Author & Copyright.

In this post, I will look back on the history of Hyper-V, and look to the future of Microsoft’s virtualization.

Happy 10th Birthday Hyper-V [Image Credit: Jeff Woolsey, @WSV_GUY]

It All Started With An Acquisition

On February 19, 2003, Microsoft announced that it had acquired a privately held virtualization vendor called Connectix. It might not have seemed like it then, but this acquisition was the genesis of something huge in the IT world, and I’m not limiting that to just Hyper-V.

Two Connectix products, Virtual PC and Virtual Server, were brought into the Microsoft portfolio. After brief betas, both products were made available … for purchase. At the time, I was running the Microsoft infrastructure for an international finance company. I found myself needing more capacity while much of our hardware sat underutilized, and virtualization made sense. My test lab, a duplicate of core production systems, ran on the beta of Virtual Server 2005, and I used Virtual PC on our desktops for smaller-scale work such as application distribution. On day 1 of general availability, I contacted our licensing provider and bought 2 copies of Virtual Server and 3 copies of Virtual PC – one of those Virtual Server copies went into production and we started a re-deployment of physical machines to a single host.

Microsoft went on to make Virtual Server and Virtual PC free products. Virtual Server 2005 R2 was released and Virtual Server 2005 R2 SP1 followed. These products filled a gap for Microsoft. Meanwhile, a small company called VMware was busy becoming a big company.  Publicly, Microsoft pushed Virtual Server. In the background, they were working on “codename Viridian” which would become Windows Server 2008 Hyper-V.

The above photo by Jeff Woolsey, Principal Program Manager, Windows Server/Hybrid Cloud, was taken at the product team party. It mentions some other code names that I am unfamiliar with: vCode and Vertiflex. If you know them, please share in the comments below!

Windows Server 2008

The beta and launch of Windows Server 2008 was the first time I had a “non-customer” relationship with Microsoft. I was working with a hosting company, building a “next generation” platform on blade servers, SAN, and VMware software. I was invited to attend a series of boot camps and found myself engaging with the Microsoft IT pro evangelists. We were focused on VMware at work, but Hyper-V caught my interest. I was a huge fan of System Center back then, thinking that if I ever built a green/grey field infrastructure again, System Center Operations Manager would be the third thing I would build after domain controllers and WSUS. I looked at Hyper-V and wondered about the integrations that Microsoft could bring like no one else, but it was a tech I would not use.

Things changed, and I found myself at a start-up company where Hyper-V was our choice of hypervisor. This was the first time Microsoft had released an enterprise-class hypervisor. Even in the beta releases, it performed well. I built a physical infrastructure on the beta, awaiting the GA release, which came after the release of WS2008. I can remember the sunny evening, pressing F5, waiting for the download, which I promptly put into production that night.

The performance lived up to expectations, but this was very much a v1.0 release. Some improvement was needed. I can remember the first time I spoke at an event about Hyper-V. The room was full of “vFanboys” and I left the room black-and-blue about the lack of Live Migration (vMotion) or Dynamic Memory in Hyper-V.

Typical of Microsoft back then, the release of WS2008 wasn’t even out the door before the planning for WS2008 R2 had started.

Windows Server 2008 R2

Live Migration. That’s the one thing I remember most about this release. Finally, we had the ability to move a virtual machine from one clustered host to another with no apparent downtime. I know that there were lots more features, but this was the Live Migration release.

This was when small/medium businesses started to discover virtualization – Microsoft effectively made it free if your virtual machines were licensed for the current version of Windows Server. WS2008 was a curiosity, but WS2008 R2 made consumption of Hyper-V explode. My bet is that there are still huge numbers of WS2008 R2 hosts out there that were never upgraded.

A Service Pack 1 release followed and added another significant feature: Dynamic Memory. With DM enabled, a virtual machine only consumed the physical memory, plus a buffer, that it needed for current usage. I personally found this to be hugely beneficial and I think it helped with the marketing of Hyper-V, but I wonder how many end customers actually turned it on in production.

Windows Server 2012

The Sinofsky era was in full swing. Microsoft was in full “copy Apple” mode – which wasn’t great. I decided to attend the first Build conference. Officially, it was supposed to be a Windows 8 announcement event, but two things convinced me that there would be Server content:

  • The shared core OS where Hyper-V lives (under the kernel)
  • There was some language about Windows Server that I recognized

And yes, there was plenty of Windows Server content to keep me happy. The release was focused on large enterprises and hosting companies, with a huge number of improvements. What stood out for me:

  • Maturity of clustering: The storage & clustering team did a lot of work to improve operations and high availability.
  • Networking: Converged networking, hardware offloads, and SMB 3.0 were huge changes. SMB 3.0 became Microsoft’s data center data transfer protocol. NVGRE provided software-defined networking, which powered the virtual networking of Azure.
  • Software-defined storage: Storage Spaces was introduced, eventually evolving into the foundation of Storage Spaces Direct in Azure Stack.
  • Live Migration for everyone: LM was no longer just for those with a cluster, although Storage Spaces made clustering much more affordable. Live Migration became a standard Hyper-V feature and was possible with non-clustered hosts too.

I spoke at the UK launch of Windows Server 2012. What struck me was the incredible interest in Hyper-V from Fortune 500s. Another Irish MVP, Damian Flynn, and I set up whiteboards in the expo hall and were busier than any of the exhibitors. At the end of the day, we had a huge crowd semi-circling us and the venue staff had to kick us out to close the hall down!

That was cool, but more significantly, a major potential customer decided that Hyper-V was ready for the big time. This customer had developed its own hypervisor, called Red-Dog, but WS2012 convinced them to switch. And it was then that Microsoft Azure switched to Hyper-V.

Windows Server 2012 R2

This release was a maturation of WS2012 Hyper-V – bigger customers were using Hyper-V in scaled out and multi-location deployments and the data and feedback helped improve the product. Once again, there was a laundry list of improvements.

Lots of those improvements were “little” things that are the sort of changes that you don’t notice, but the effects are hugely beneficial. I can remember having lunch with some PMs and they were giddy with the ideas of technical demos that they could show because of the sheer quantity of new features.

Clustering started on the path of saying “not every glitch is worth a failover” because failovers can be more disruptive than a glitch. Live Migration got faster with compression. SMB 3.0 got a larger role with Live Migration leveraging it to enable migrations using multiple high speed NICs. Port ACLs, probably used by a handful of people in a lab, enabled firewall rules in the virtual switch. Most of all, Linux became a first class citizen – live VM backup for Linux was introduced into the Linux kernel by Microsoft.

If you review the list of features, you might wonder who benefits from them. Anyone using Azure probably does. RDMA is used in Azure. Converged networking is used in Azure. Port ACLs probably power Network Security Groups (NSGs). NVGRE and the hardware offloads, combined with the Mellanox components used in Azure, improve networking performance.

Windows Server 2016

I look back on this release with a certain level of satisfaction. Right after the GA of WS2012 R2, Microsoft reached out to customers like I had never seen before. Customer feedback combined with Microsoft’s vision drove Hyper-V (and related roles/features) in WS2016.

One of the big challenges I observed was the number of older loyal customers who were “stuck” on WS2008 R2 clusters that couldn’t be upgraded to something newer. Failover Clustering just didn’t support an in-place upgrade of any kind – you had to do a “swing migration” to a new cluster. WS2016 gave us a rolling cluster upgrade that worked beautifully. Hardware offloads improved performance again. Discrete Device Assignment allows virtual machines to connect directly to hardware (N-Series virtual machines in Azure). Hyper-V Manager finally got some improvements to help day-to-day operations.

Two big asks over the years were also addressed:

  • Hot-add and -remove of NICs was added.
  • A new software-defined backup system with built-in resilient change tracking finally stabilized the backup of Hyper-V virtual machines.

I’ve discussed the flow of features from Hyper-V into Azure. But that flow started to work the other way too. Host Resource Protection, winding down CPU resources to virtual machines that attack the host, and Network Controller (software-defined orchestration and management/security features) came from Azure … some of the code was literally copy/pasted thanks to the shared platform.

Another thing that came from Azure was Nano Server. Nano, a tiny version of Windows with no admin command interface, originated in Azure. It was supposed to replace Windows Server Full Desktop and Server Core in platform roles such as Hyper-V hosts and storage servers. It seemed cool, but the deployment system was like something from the 1970s and it was a nightmare to own. After a hard push, Microsoft eventually admitted defeat, and Nano’s future lay in a different feature of WS2016.

Docker had become “a thing”. When people talk about “Docker” they’re normally talking about Containers, a new form of virtualization (OS instead of machine) that can be managed by Docker, or other tools such as Kubernetes. In the early days of WS2016, Microsoft reached out to Docker and formed a partnership. The Hyper-V team, which spun off a Containers team, built Containers for Windows Server with Docker management. But they also built a more secure version of it based on Hyper-V partitions called Hyper-V Containers. A Hyper-V container, designed for untrusted code, requires a mini-kernel and nothing in the Windows portfolio is smaller than Nano Server – so this became the future of Nano Server.

This era of WS2016 and Windows 10 saw Hyper-V find new use cases. The partitions of Hyper-V provide a secure barrier between the parent partition (historically, the management/host OS) and child partitions (historically, the virtual machine). Windows 10 (and WS2016) got Credential Guard, allowing LSASS to run in a child partition. If malware tries to scan for passwords on the physical machine, it will find none because they are behind the barrier of Hyper-V partitioning. Windows 10 Application Guard uses the same technology to protect Edge, constraining any malware run by the browser to the boundaries of the container, isolated from the PC’s OS. This leveraging of partitions continues, more recently enabling Android to be emulated on Windows 10 for application developers.

Semi-Annual Channel

Microsoft recently started releasing Windows Server in a similar pattern to Windows 10. These Semi-Annual Channel (SAC) releases come in the spring and autumn/fall, and are named after the year and approximate month of release, such as 1709, 1803, and so on. Customers with Software Assurance that require frequent upgrades can opt into this release schedule to get the very latest features, but they have to do so with caution – some pieces might not be improved (or might temporarily disappear) until the next release.

Two WS2016 features that have seen rapid improvement in the SAC releases are:

  • Hyper-converged infrastructure: You don’t need Nutanix to do HCI. Hyper-V can be deployed on commodity hardware using Storage Spaces Direct (S2D) and leverage amazing network and storage hardware to deliver crazy speeds. My employer started distributing a PCIe flash storage drive that can deliver 1 million 4K read IOPS!
  • Containers: Microsoft is all-in on Containers, building out functionality for Windows and Linux, Docker and Kubernetes.

Abandon-Ware

There are those who consider Windows Server to be abandon-ware. I guess it’s not helped by some members of the media that aren’t fully familiar with Windows Server. But the real blame for this goes to Microsoft itself. Windows Server didn’t even get mentioned during the keynotes of the last Microsoft Ignite. I don’t think any subsidiary has done a real launch since WS2012. Local Microsoft staff have sales targets, and Windows Server is a steady cash cow. Growth targets are focused on cloud services such as CRM 365, Office 365, and Azure, so local Microsoft reps only talk about those products. Even the staffing of subsidiaries reflects this – I doubt there’s a person in my local Microsoft Office who could tell me anything about the features of WS2016, and the reorg of last year was entirely focused on cloud services.

Is Windows Server dead?

In one word: No.

Windows Server has a huge market that will never go away. Some workloads can never go to the cloud. Microsoft hopes they will migrate to Azure Stack (which is powered by Windows Server Hyper-V) but Azure Stack will remain a niche, albeit a valuable one. And even those who migrate to the cloud are either bringing their Windows Server licenses with them or getting new ones – that’s why Microsoft is adding two new ways to acquire Windows Server via its Cloud Solution Provider partner channel.

Windows Server 2019 is an accumulation of the features from the SAC releases since WS2016, plus some more. Today, you can participate in the preview via Windows Server Insiders. There are lots of new things and improvements, as shared in the recent Windows Server Summit online event – it appears that this was so successful that the event will become a recurring one.

Another significant sign is Windows Admin Center (WAC). The built-in consoles and the Remote Server Administration Tools are built on MMC.EXE, which has been deprecated for several generations of Windows. WAC was made generally available this year and has been going through cloud-speed improvements. Using this new shared HTML5 experience, you can manage your Windows servers, with growing support for Hyper-V, clustering, and other features. And yes, it does offer extensions into Azure.

Wrap Up

Hyper-V has a rich history – sometimes colorful. It’s been a great ride … and even if my job’s focus has switched mainly to Microsoft Azure, my knowledge of Hyper-V has made that switch easier, and it’s great to know that I work on the biggest Hyper-V clusters there are.

I’ve gotten to know several of the program managers of Hyper-V, Storage & Clustering, and Networking over the years. Every one of them has struck me as being intelligent, receptive to feedback, and ambitious to make their product the very best. They’re proud of their work, and rightly so.

Happy birthday Hyper-V, and thank you to the team and your neighbors on the Redmond campus, past and present.

The post Happy 10th Birthday Hyper-V! appeared first on Petri.

Waze can get you a roadside tow in Europe

The content below is taken from the original ( Waze can get you a roadside tow in Europe), to continue reading please visit the site. Remember to respect the Author & Copyright.

Car trouble on the motorway is frustrating enough by itself, let alone if you don't have a roadside assistance plan to ease your worries. Waze may soon help get you out of a jam, though. The Google-owned navigation app is teaming up with Allianz Part…

Free E-Book: Software Defined Radio for Engineers

The content below is taken from the original ( Free E-Book: Software Defined Radio for Engineers), to continue reading please visit the site. Remember to respect the Author & Copyright.

We really like when a vendor finds a great book on a topic — probably one they care about — and makes it available for free. Analog Devices does this regularly and one you should probably have a look at is Software Defined Radio for Engineers. The book goes for $100 or so on Amazon, and while a digital copy has pluses and minuses, it is hard to beat the $0 price.

The book by [Travis F. Collins], [Robin Getz], [Di Pu], and [Alexander M. Wyglinski] covers a range of topics in 11 chapters. There’s also a website with more information, including video lectures and forthcoming projects that appear to use the Pluto SDR. We have a Pluto and have been meaning to write more about it, including the hack to make it think it has a better RF chip inside. The hack may not result in meeting all the device specs, but it does work to increase the frequency range and bandwidth. However, the book isn’t tied to a specific piece of hardware.

Make no mistake, the book is a college-level textbook for engineers, so it isn’t going to go easy on the math. So if the equation below bugs you, this might not be the book you start with:

[Di Pu] and [Alexander Wyglinski] have an older, similar book, and it looks like the lecture videos are based on that book (see video below). The projects section on the website doesn’t appear to have any actual projects in it yet, although there are a couple of placeholders.

We have enjoyed Analog’s book selections in the past including The Scientist and Engineer’s Guide to Digital Signal Processing which is a classic. If you visit their library you’ll find lots of books along with classes and videos, too.

If you want something a bit less academic, there’s always [Ossmann’s] videos. Or if you’d rather just use an SDR, there are plenty of inexpensive options to choose from.

Everything You Need to Know About Office 365 – June 2018

The content below is taken from the original ( Everything You Need to Know About Office 365 – June 2018), to continue reading please visit the site. Remember to respect the Author & Copyright.

Can you believe that June is over, meaning the year is halfway over? OMG. Where does the time go? Hopefully, it went to the beach where we should all be. This month’s updates cover some GDPR stuff, Teams reminding me it is important, and some other fun tidbits. Good news for you? I kept the snark to a minimum. You are welcome.

 

 

New GDPR Sensitive Information Types

I thought we were done with GDPR news. I guess not. This month, Microsoft announced new built-in sensitive information types to help you with your data governance and data protection policies bringing the grand total up to 87 types. Yikes, who knew there was so much sensitive info. The new types include EU Passport, driver’s license numbers, and a bunch of others that sound like the equivalent to the US social security number. For more information, read the announcement here.

Microsoft Teams Can Now Be Archived

Last month, I gave the opinion that investing some time in teams would be good for all of us. I got some reader feedback that agreed. Yay me! More proof we should know more is hot off the presses. Microsoft just announced that it is bringing archiving to Teams. That sounds like the more serious types thought of a tool they want to be around for the long term. So one of you go learn all of the Teams stuff and then teach me. Please?

OneDrive Message Center Gets an Update

Turns out that keeping up with the pace of change in OneDrive was wearing people out. So the OneDrive team has now committed to publishing a blog post twice a month with what’s coming, what has been released, and some timelines for future stuff. Pretty cool of them. Sounds like the type of blog post I can read and write. These post monthly…. wait a minute… Maybe I don’t like that I told you about this one. Assuming I don’t decide to delete this paragraph, you can find out more from Stephen Rose here, including the updates for June.

Exchange Hybrid Configuration Wizard Was Improved

Fellow author Tony Redmond wrote this little piece walking you through the updates to the wizard. He also had an interesting take that we needed these changes years ago. Hard to argue but I will say that we shouldn’t be so quick to dismiss its benefit today. I am not an Exchange expert but I play a SharePoint one on TV. And I can tell you the number of people who are still rocking SharePoint 2010 on-premises is mind-blowing. Just today, I had a call with a team looking to upgrade that to hybrid. There are still a lot of people that need to move to O365.

Did You Get Your Free Office 365 Developer Subscription

Do you find yourself wishing you had an O365 tenant that you can play in without consequence? I do. That is why I have like five of them. Anyway, if you head over here, you can sign up to join the Office 365 Developer Program, even if you aren’t a developer, to get your own tenant. That way, when you see those crazy ideas you want to try but not in production, you have a place to do so. The tenants are good for 12 months, so imagine the fun you can have.

Make Sure You Are Taking Care of DNS

In this cloud-first world, DNS is more important than ever. Also, since DNS propagation can sometimes be slow, you don’t want to be making willy-nilly changes like you did with your on-premises environment. To that end, my buddy Todd tripped over a great blog post from the Exchange Team on Best Practices for DNS. It seems like they put this together because DNS is the root of all evil and the root of the majority of their support calls. A lot of the things seem like common sense but they are good reminders. They also make you think: when was the last time I checked my DNS? And whose job is it to monitor it? Not giving you answers, just points to ponder.

Ha! I made it without a single PowerApps post. You didn’t think I could do it did you?

 

Shane

The post Everything You Need to Know About Office 365 – June 2018 appeared first on Petri.

Azure storage adds static HTML website hosting

The content below is taken from the original ( Azure storage adds static HTML website hosting), to continue reading please visit the site. Remember to respect the Author & Copyright.

Seven years after AWS S3, but just in time for serverless

Microsoft’s Azure Storage service has added an option to host static websites composed of not much more than HTML, JavaScript and other client-side goodies.…

Office 365 capabilities that let you defend yourself from Cybercrime

The content below is taken from the original ( Office 365 capabilities that let you defend yourself from Cybercrime), to continue reading please visit the site. Remember to respect the Author & Copyright.

The rising incidence of cyber attacks across the globe calls for increased protection on the internet user’s front. Robust security solutions equipped with the right tools can help protect your devices, personal information, and files from being compromised. Microsoft Office already […]

This post Office 365 capabilities that let you defend yourself from Cybercrime is from TheWindowsClub.com.

what3words raises funds from SAIC and F1 driver

The content below is taken from the original ( what3words raises funds from SAIC and F1 driver), to continue reading please visit the site. Remember to respect the Author & Copyright.

What3words, a startup that has divided the entire world into 57 trillion 3-by-3 meter squares and assigned three words to each one, has disclosed three new investors all from the automotive world.

What3words announced Thursday that the venture arm of China’s largest auto group SAIC Motor, Formula 1 champion Nico Rosberg, and audio and navigation systems company Alpine Electronics have invested in the London-based company. Existing investor Intel Capital also participated in the round.

The latest funding round will be used to expand into new markets and product developments.

The amount of the investment, which was not disclosed, illustrates the industry’s interest in technology that simplifies the user experience in cars, can be easily used with voice commands, and prepares companies for the age of autonomous vehicles. Since the addressing system gives a unique three-word combination to a location, it fixes a major flaw with a lot of voice-operated navigation systems: duplicate street names.

The company has assigned these 57 trillion squares a unique three-word name using an algorithm that has a vocabulary of 25,000 words. The system, which anyone can use via the what3words app, is available in more than a dozen languages. For instance, if you want to meet a friend in a specific corner of the Eiffel Tower in Paris, you can send the three-word address prices.slippery.traps. An Airbnb host might use a three-word address to direct a guest to a tricky entrance. Someday, riders might be able to say or type in a three-word address to direct a self-driving car to drop them off at a specific entrance at a large sports arena.

“This fund raise cements the direction this company is going,” What3words CEO Chris Sheldrick told TechCrunch. “Which is how, in the future, we are going to tell cars and devices and voice assistants where we’re going.”

Earlier this year, what3words disclosed that Daimler had taken a 10% stake in the company. Daimler’s stake and these recently revealed investors are all part of the company’s Series C funding round.

The company’s novel global addressing system has been integrated into Mercedes’ new infotainment and navigation system—called the Mercedes-Benz User Experience or MBUX. The MBUX debuted on the new Mercedes A-Class, a hatchback that went on sale outside the U.S. in the spring. A sedan variant of the A-Class will come to the U.S. market in late 2018.

TomTom also announced plans last month to integrate what3words into its mapping and navigation products in the second half of this year. TomTom supplies its automotive navigation and traffic technology to car manufacturers, including Volkswagen, Fiat Chrysler, Alfa Romeo, Citroën and Peugeot.

The company is in talks with other automakers and suppliers to get what3words integrated into vehicle infotainment systems.

New – Amazon Linux WorkSpaces

The content below is taken from the original ( New – Amazon Linux WorkSpaces), to continue reading please visit the site. Remember to respect the Author & Copyright.

Over two years ago I explained why I Love my Amazon WorkSpace. Today, with well over three years of experience under my belt, I have no reason to return to a local, non-managed desktop. I never have to worry about losing or breaking my laptop, keeping multiple working environments in sync, or planning for disruptive hardware upgrades. Regardless of where I am or what device I am using, I am highly confident that I can log in to my WorkSpace, find the apps and files that I need, and get my work done.

Now with Amazon Linux 2
As a WorkSpaces user, you can already choose between multiple hardware configurations and software bundles. You can choose hardware with the desired amount of compute power (expressed in vCPUs — virtual CPUs) and memory, configure as much storage as you need, and choose between Windows 7 and Windows 10 desktop experiences. If your organization already owns Windows licenses, you can bring them to the AWS Cloud via our BYOL (Bring Your Own License) program.

Today we are giving you another desktop option! You can now launch a WorkSpace that runs Amazon Linux 2, the Amazon Linux WorkSpaces Desktop, Firefox, Evolution, Pidgin, and Libre Office. The Amazon Linux WorkSpaces Desktop is based on MATE. It makes very efficient use of CPU and memory, allowing you to be both productive and frugal. It includes a full set of tools and utilities including a file manager, image editor, and terminal emulator.

Here are a few of the ways that Amazon Linux WorkSpaces can benefit you and your organization:

Development Environment – The combination of Amazon Linux WorkSpaces and Amazon Linux 2 makes for a great development environment. You get all of the AWS SDKs and tools, plus developer favorites such as gcc, Mono, and Java. You can build and test applications in your Amazon Linux WorkSpace and then deploy them to Amazon Linux 2 running on-premises or in the cloud.

Productivity Environment – Libre Office gives you (or the users that you support) access to a complete suite of productivity tools that are compatible with a wide range of proprietary and open source document formats.

Kiosk Support – You can build and economically deploy applications that run in kiosk mode on inexpensive and durable tablets, with centralized management and support.

Linux Workloads – You can run data science, machine learning, engineering, and other Linux-friendly workloads, taking advantage of AWS storage, analytics, and machine learning services.

There are also some operational and financial benefits. On the ops side, organizations that need to provide their users with a mix of Windows and Linux environments can create a unified operations model with a single set of tools and processes that meet the needs of the entire user community. Financially, this new option makes very efficient use of hardware, and the hourly usage model made possible by the AutoStop running mode can further reduce your costs.

Your WorkSpaces run in a Virtual Private Cloud (VPC), and can be configured to access your existing on-premises resources using a VPN connection across a dedicated line courtesy of AWS Direct Connect. You can access and make use of other AWS resources including Elastic File Systems.

Amazon Linux 2 with Long Term Support (LTS)
As part of today’s launch, we are also announcing that Long Term Support (LTS) is now available for Amazon Linux 2. We announced the first LTS candidate late last year, and are now ready to make the actual LTS version available. We will provide support, update, and bug fixes for all core packages for five years, until June 30, 2023. You can do an in-place upgrade from the Amazon Linux 2 LTS Candidate to the LTS release, but you will need to do a fresh installation if you are migrating from the Amazon Linux AMI.

You can run Amazon Linux 2 on your Amazon Linux WorkSpaces cloud desktops, on EC2 instances, in your data center, and on your laptop! Virtual machine images are available for Docker, VMware ESXi, Microsoft Hyper-V, KVM, and Oracle VM VirtualBox.

The extras mechanism in Amazon Linux 2 gives you access to the latest application software in the form of curated software bundles, packaged into topics that contain all of the dependencies needed for the software to run. Over time, as these applications stabilize and mature, they become candidates for the Amazon Linux 2 core channel, and subject to the Amazon Linux 2 Long Term Support policies. To learn more, read about the Extras Library.

To learn more about Amazon Linux 2, read my post, Amazon Linux 2 – Modern, Stable, and Enterprise-Friendly.

Launching an Amazon Linux WorkSpace
In this section, I am playing the role of the WorkSpaces administrator, and am setting up a Linux WorkSpace for my own use. In a real-world situation I would generally be creating WorkSpaces for other members of my organization.

I can launch an Amazon Linux WorkSpace from the AWS Management Console with a couple of clicks. If I am setting up Linux WorkSpaces for an entire team or division, I can also use the WorkSpaces API or the WorkSpaces CLI. I can use my organization’s existing Active Directory or I can have WorkSpaces create and manage one for me. I could also use the WorkSpaces API to build a self-serve provisioning and management portal for my users.
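
As an illustration of the API route, a provisioning script might call the WorkSpaces API through boto3 roughly as follows. The directory ID, bundle ID, and user name below are hypothetical placeholders; you would substitute an Amazon Linux 2 bundle and directory from your own account.

    import boto3

    workspaces = boto3.client("workspaces", region_name="us-east-1")

    response = workspaces.create_workspaces(
        Workspaces=[
            {
                "DirectoryId": "d-1234567890",            # hypothetical directory ID
                "UserName": "jbarr",                      # hypothetical user
                "BundleId": "wsb-0123456789abcdef0",      # an Amazon Linux 2 bundle ID
                "RootVolumeEncryptionEnabled": True,
                "UserVolumeEncryptionEnabled": True,
                "VolumeEncryptionKey": "alias/aws/workspaces",  # default managed key
                "WorkspaceProperties": {"RunningMode": "AUTO_STOP"},
                "Tags": [{"Key": "team", "Value": "blog-demo"}],
            }
        ]
    )

    # The call reports which requests were accepted and which failed validation.
    print(response["PendingRequests"])
    print(response["FailedRequests"])
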

I’m using a directory created by WorkSpaces, so I’ll enter the identifying information for each user (me, in this case), and then click Next Step:

I select one of the Amazon Linux 2 Bundles, choosing the combination of software and hardware that is the best fit for my needs, and click Next Step:

I choose the AutoStop running mode, indicate that I want my root and user volumes to be encrypted, and tag the WorkSpace, then click Next Step:

I review the settings and click Launch WorkSpaces to proceed:

The WorkSpace starts out in PENDING status and transitions to AVAILABLE within 20 minutes:

Signing In
When the WorkSpace is AVAILABLE, I receive an email with instructions for accessing it:

I click the link and set my password:

And then I download the client (or two) of my choice:

I install and launch the client, enter my registration code, and click Register:

And then I sign in to my Amazon Linux WorkSpace:

And here it is:

The WorkSpace is domain-joined to my Active Directory:

Because this is a managed desktop, I can easily modify the size of the root or the user volumes or switch to hardware with more or less power. This is, safe to say, far easier and more cost-effective than making on-demand changes to physical hardware sitting on your users’ desktops out in the field!

Available Now
You can launch Amazon Linux WorkSpaces in all eleven AWS Regions where Amazon WorkSpaces is already available:

Pricing is up to 15% lower than for comparable Windows WorkSpaces; see the Amazon WorkSpaces Pricing page for more info.

If you are new to WorkSpaces, the Amazon WorkSpaces Free Tier will let you run two AutoStop WorkSpaces for up to 40 hours per month, for two months, at no charge.

Jeff;

PS – If you are in San Francisco, join me at the AWS Loft today at 5 PM to learn more (registration is required).

 

10 of the world’s fastest supercomputers

The content below is taken from the original ( 10 of the world’s fastest supercomputers), to continue reading please visit the site. Remember to respect the Author & Copyright.

The cream of the Top500 supercomputer list in June 2018, these 10 supercomputers are used for modeling the weather, weapons, ocean currents and other physical phenomena. They not only outperform every other machine on the planet, they also demonstrate the technologies your business might use, at a smaller scale, to get the best performance in the least space or using the least energy. [ Read more in-depth about these supercomputers at Supercomputing is becoming super-efficient. ]


SiFive Releases Smaller, Lower Power RISC-V Cores

The content below is taken from the original ( SiFive Releases Smaller, Lower Power RISC-V Cores), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today, SiFive has released two new cores designed for the lower end of computing. This adds to the company’s existing portfolio of microcontrollers and SoCs based on the Open RISC-V ISA. Over the last two years, SiFive has introduced a number of cores based on the RISC-V ISA, an open architecture ISA that lets anyone design and develop a microcontroller or microprocessor platform. These two new cores fill out the low-power end of SiFive’s core portfolio.

The two new cores included in the announcement are the SiFive E20 and E21, both meant for low-power applications, and according to SiFive presentations, they’re along the lines of an ARM Cortex-M0+ and ARM Cortex-M4. This is a core — it’s not a chip yet — but since the introduction of SiFive’s first microcontrollers, many companies have jumped on the RISC-V bandwagon. Western Digital, for example, has committed to using the RISC-V architecture in SoCs and as controllers for hard drives, SSDs, and NASes.

The first chip from SiFive was the HiFive 1, which was based on the SiFive E31 CPU. We got our hands on the HiFive 1 early last year, and it is a beast. With the standard complement of benchmarks, in terms of raw power, it’s approximately twice as fast as the Teensy 3.6, based on the Kinetis K66, a 180 MHz ARM Cortex-M4F. The SiFive E31 is about 1.5 times as fast as the Teensy 3.6 on a pure calculations per clock basis. This is remarkable because the Teensy 3.6 is our go-to standard for when you want to toggle pins really really fast with a cheap, readily available microcontroller platform.

But sometimes you don’t need the fastest or best microcontroller. To that end, SiFive is looking toward a lower-power microcontroller based on the RISC-V core. The new offerings are built on the E2 Core IP series, with two standard cores. The E21 core provides mainstream performance for microcontrollers, and the E20 core is the most power-efficient core offered by SiFive. In effect, the E21 core is a replacement for the ARM Cortex-M3 and Cortex-M4, while the E20 is a replacement for the ARM Cortex-M0+.

Just a few months ago, SiFive released a gigantic, multicore, Linux-capable processor called the HiFive Unleashed. With support for DDR4 and Gigabit Ethernet, this chip would be more at home in a desktop than an Internet of Things thing. The most popular engine ever produced isn’t a seven-liter turbo diesel, it’s whatever goes into a Honda econobox; likewise, many more low-power microcontrollers like the Cortex-M0 and -M3 are sold than the newer, more powerful, and more expensive chips. Even though it’s not as exciting as a new workstation CPU, the world needs microcontrollers, and the more Open, the better.

Computers Go Hollywood

The content below is taken from the original ( Computers Go Hollywood), to continue reading please visit the site. Remember to respect the Author & Copyright.

Have you ever been watching a TV show or a movie and spotted a familiar computer? [James Carter] did and he created a website to help you identify which old computers appear in TV shows and movies. We came across this when researching another post about an old computer and wondered if it was in any old movies. It wasn’t.

You can search by computer or by title. There are also ratings about how visible, realistic, and important the computer is for each item. The database only contains fictional works, not commercials or documentaries. The oldest entry we could find was 1950’s Destination Moon which starred a GE Differential Analyzer. Well, also John Archer, we suppose. We assume GE had a good agent as the same computer showed up in Earth vs. the Flying Saucers (1956) and When Worlds Collide (1951). You can see a clip of the computer’s appearance in Earth vs. the Flying Saucers, below.

We got excited when we didn’t see the Altair 8800 listed with The Six Million Dollar Man. But, alas, [James] has a list of things he hasn’t got around to yet and it is on that list. It is hard to tell which computer has the most screen credits, although we were amused to see how often the Burroughs B205 turned up, including in the Batcave.

We often spot some piece of gear other than a computer on the air, but we haven’t found a reference website for that. The old Battlestar Galactica had a fortune in Tektronix test equipment aboard. If you remember the show Buck Rogers in the 25th Century, Dr. Huer had an autoclave on his desk that one of the Hackaday crew had in his lab at the same time.

We keep waiting for Mr. Robot to open up Hackaday on his tablet, but so far no joy. Of course, how computers are used on the screen can range from accurate to ridiculous. If you want to know which is which, it seems everyone has an opinion.

Fungi Turn Rice And Glass Waste Into An Eco-friendly Building Material

The content below is taken from the original ( Fungi Turn Rice And Glass Waste Into An Eco-friendly Building Material), to continue reading please visit the site. Remember to respect the Author & Copyright.

Making buildings out of fungus-based materials might sound like the sort of thing that’s best left to floppy-hatted little blue creatures. A team of human scientists has proven otherwise. Australian researchers […]

The post Fungi Turn Rice And Glass Waste Into An Eco-friendly Building Material appeared first on Geek.com.

Amazon Polly Plugin for WordPress Update – Translate and Vocalize Your Content

The content below is taken from the original ( Amazon Polly Plugin for WordPress Update – Translate and Vocalize Your Content), to continue reading please visit the site. Remember to respect the Author & Copyright.

Earlier this year I showed you how to Give Your WordPress Blog a Voice with Amazon Polly and walked you through the steps involved in installing, configuring, and using the Amazon Polly for WordPress plugin. Today we are making this plugin even more powerful, adding the ability to translate your content into one or more languages and to produce audio versions of each translation. The translation is implemented using Amazon Translate, a neural machine translation service that is part of our portfolio of machine learning services.

The original version of the plugin works like this:

And the new version works like this:

This version of the plugin supports translation of English-language web content into Spanish, German, French, and Portuguese, with plans to support other languages in the future.
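
Under the hood, the combination boils down to an Amazon Translate call followed by an Amazon Polly call for each target language. The sketch below is not the plugin’s actual code; it is a rough Python illustration of the kind of API calls involved, with a made-up post body and a Spanish voice picked as an example.

    import boto3

    translate = boto3.client("translate")
    polly = boto3.client("polly")

    post_text = "Hello from my WordPress blog!"   # hypothetical post content

    # Translate the English source into Spanish (one of the supported targets).
    spanish = translate.translate_text(
        Text=post_text,
        SourceLanguageCode="en",
        TargetLanguageCode="es",
    )["TranslatedText"]

    # Vocalize the translation with a Spanish Polly voice.
    speech = polly.synthesize_speech(
        Text=spanish,
        OutputFormat="mp3",
        VoiceId="Conchita",
    )

    # Save the spoken translation as an MP3 file.
    with open("post-es.mp3", "wb") as f:
        f.write(speech["AudioStream"].read())
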

Updating and Configuring the Plugin
My earlier post covered the steps involved in launching an Amazon Lightsail instance and setting up the plugin, and I won’t repeat them here. The first step is to edit my existing IAM policy so that it allows calls to the TranslateText function:

Then I log in to the WordPress Admin dashboard, click Plugins, and see that a new version is available:

I click update now, and wait a few seconds for the update. Then I click Settings to enable translation:

I click Enable translation support and Save Changes, then come back and set up the details. I select all of the available target languages, leave the voices and labels as-is, and click Save Changes to move forward:

Creating Translations and Vocalizations
Now I can create a new post and exercise the plugin. I enter the title and text for the post as usual:

Before moving forward, I can click How much will this cost to convert? to check on costs.

The price seems reasonable to me. I publish the post, and then click Translate to generate audio in 4 other languages. This happens in a matter of seconds:

The published post now includes a player that lets me listen to the original audio or any of the 4 translations:

Here are the audio versions:

English:
Spanish:
German:
French:
Portuguese:

I have lots of customization options. For example, I can enable transcripts of the translated text:

The transcripts are shown in the post:

I can change the labels that are used for each language:

Here are the updated labels:

I can also specify the Polly voice for each target language:

Now Available
The updated plugin is available now and you can start using it today! As you can see, it uses the “magic” of machine translation and text-to-speech to make your web content accessible to a wider audience, in both written and spoken form.

Jeff;

 

Dropbox Pioneers Future of Cloud Infrastructure with SMR Technology Deployment

The content below is taken from the original ( Dropbox Pioneers Future of Cloud Infrastructure with SMR Technology Deployment), to continue reading please visit the site. Remember to respect the Author & Copyright.

Dropbox, a leading global collaboration platform, announced a new chapter in the evolution of Magic Pocket, its custom-built storage infrastructure. Read more at VMblog.com.

A New Planner App Arrives to Modern SharePoint Online Sites

The content below is taken from the original ( A New Planner App Arrives to Modern SharePoint Online Sites), to continue reading please visit the site. Remember to respect the Author & Copyright.

As disclosed by Microsoft at the SharePoint Virtual Summit and the SharePoint Conference North America, Microsoft is releasing an update to the way a Planner Plan can be integrated into a modern SharePoint Online (SPO) site. This update allows you to easily add a Planner Plan as a full-page app in a modern SPO site. Bear in mind that this new Planner integration is not available in Communication Sites.

 

 

Adding a New Planner Plan to a Modern SPO Site

With the new Planner integration in modern SPO Sites, adding a new Planner Plan to an existing site is a very straightforward process:

  • From the site home page, just click on New -> Plan:

Figure 1 — Adding a Planner Plan to a Modern SPO Site

  • A “Create a plan” panel is displayed. You can either create a new Plan or just choose an existing one. Provide the name for the new Plan and then click on “Create”:

"Create a plan" panel
Figure 2 — Create a Plan Panel

  • If you choose to use an existing Plan, then you will be asked to select an existing Plan linked to the underlying Office 365 Group:

Figure 3 — Adding an Existing Plan

In a similar way, by means of the modern Planner web part, it’s possible to add an existing Planner Plan to any modern page in the site, or create a new one.

Figure 4 — Existing Planner Plan Added to a Modern SPO Page

  • The result we get when adding a new Planner Plan to the underlying Group is a full-page App where we can start creating Planner buckets and adding tasks.


Figure 5 — Planner App Showing the New Plan Created in the Modern SPO Site

Working with Tasks in the New Planner App

Once we have created the new Plan, it’s quite easy to start creating tasks and buckets to organize those tasks. Note: Of course, we can also edit existing tasks and buckets. We can change a bucket name or just the details of existing tasks:

  • To create a new task in a bucket, simply type a task name, set a due date, and assign the task to one or more site users:


Figure 6 — Adding a New Task to a Bucket

  • To edit an existing task, just click on the task name. The task details dialog is displayed. In this task details page, you can update task information.


Figure 7 — Editing an Existing Task

  • If we click on the Charts menu in the Planner App, we will see some insights into the performance of the Planner Plan, in the form of charts showing the number of tasks by status, the number of tasks per bucket, and the number of tasks per Plan member.


Figure 8 — Charts View in the Planner App

  • Of course, when required, we can change the Group by combo in the App to change how the data in the Planner Plan is visualized.

Conclusion

The new Planner App provides an easy way to create new Plans linked to an existing modern SPO site, or simply use an existing one. The App provides a full-page experience where users can work with a Planner Plan without having to leave the modern SPO site to browse to the Planner location in the Office 365 tenant.

Juan Carlos González

Office Servers and Services MVP | Modern Workplace Team Leader

The post A New Planner App Arrives to Modern SharePoint Online Sites appeared first on Petri.

World’s largest ARM supercomputer is headed to a nuclear security lab

The content below is taken from the original ( World’s largest ARM supercomputer is headed to a nuclear security lab), to continue reading please visit the site. Remember to respect the Author & Copyright.

Most supercomputers are focused on pure processing speed. Take the DOE's new Summit system, which is now the world's most powerful supercomputer, with 9,000 22-core IBM Power9 processors and over 27,000 NVIDIA Tesla V100 GPUs. But processing performa…

Partner Interconnect now generally available

The content below is taken from the original ( Partner Interconnect now generally available), to continue reading please visit the site. Remember to respect the Author & Copyright.

We are happy to announce that Partner Interconnect, launched in beta in April, is now generally available. Partner Interconnect lets you connect your on-premises resources to Google Cloud Platform (GCP) from the partner location of your choice, at a data rate that meets your needs.

With general availability, you can now receive an SLA for Partner Interconnect connections if you use one of the recommended topologies. If you were a beta user with one of those topologies, you will automatically be covered by the SLA. Charges for the service start with GA (see pricing).

Partner Interconnect is ideal if you want physical connectivity to your GCP resources but cannot connect at one of Google’s peering locations, or if you want to connect with an existing service provider. If you need help understanding the connection options, the information here can help.

In this blog we will walk through how you can start using Partner Interconnect, from choosing a partner that works best for you all the way through how you can deploy and start using your interconnect.

Choosing a partner

If you already have a service provider partner for network connectivity, you can check the list of supported service providers to see if they offer Partner Interconnect service. If not, you can select a partner from the list based on your data center location.

Some critical factors to consider are:

  • Make sure the partner can offer the availability and latency you need between your on-premises network and their network.
  • Check whether the partner offers Layer 2 connectivity, Layer 3 connectivity, or both. If you choose a Layer 2 Partner, you have to configure and establish a BGP session between your Cloud Routers and on-premises routers for each VLAN attachment that you create. If you choose a Layer 3 partner, they will take care of the BGP configuration.
  • Please review the recommended topologies for production-level and non-critical applications. Google provides a 99.99% (with Global Routing) or 99.9% availability SLA, and that only applies to the connectivity between your VPC network and the partner’s network.

Bandwidth options and pricing

Partner Interconnect provides flexible options for bandwidth between 50 Mbps and 10 Gbps. Google charges on a monthly basis for VLAN attachments depending on capacity and egress traffic (see options and pricing).

Setting up Partner Interconnect VLAN attachments

Once you’ve established network connectivity with a partner, and they have set up interconnects with Google, you can set up and activate VLAN attachments using these steps:

  1. Create VLAN attachments.
  2. Request provisioning from the partner.
  3. If you have a Layer 2 partner, complete the BGP configuration and then activate the attachments for traffic to start. If you have a Layer 3 partner, simply activate the attachments, or use the pre-activation option.

With Partner Interconnect, you can connect to GCP where and how you want to. Follow these steps to easily access your GCP compute resources from your on-premises network.
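
To make step 1 a little more concrete, here is a rough Python sketch of creating a Partner VLAN attachment through the Compute Engine API using the google-api-python-client library. The project, region, router, and attachment names are hypothetical, and the pairing key the attachment exposes is what you hand to your partner for provisioning (step 2).

    from googleapiclient import discovery

    PROJECT, REGION = "my-gcp-project", "us-central1"   # hypothetical values
    compute = discovery.build("compute", "v1")

    # Step 1: create a PARTNER-type VLAN attachment tied to an existing Cloud Router.
    body = {
        "name": "partner-attachment-1",
        "type": "PARTNER",
        "router": "projects/%s/regions/%s/routers/my-cloud-router" % (PROJECT, REGION),
        "edgeAvailabilityDomain": "AVAILABILITY_DOMAIN_1",
    }
    compute.interconnectAttachments().insert(
        project=PROJECT, region=REGION, body=body
    ).execute()

    # Step 2: read back the pairing key and hand it to the partner for provisioning.
    attachment = compute.interconnectAttachments().get(
        project=PROJECT, region=REGION, interconnectAttachment="partner-attachment-1"
    ).execute()
    print("Pairing key for the partner:", attachment.get("pairingKey"))
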


Google may be working on a way to run Windows 10 on a Pixel

The content below is taken from the original ( Google may be working on a way to run Windows 10 on a Pixel), to continue reading please visit the site. Remember to respect the Author & Copyright.

Google's Pixelbook is a high-end laptop that runs Chrome OS. If you're looking to do more with the hardware, like run Windows apps, you may soon be in luck. According to a report at XDA Developers (and picked up by 9to5Google), Google may in fact be…

The best webcams

The content below is taken from the original ( The best webcams), to continue reading please visit the site. Remember to respect the Author & Copyright.

By Andrew Cunningham and Kimber Streams

This post was done in partnership with Wirecutter. When readers choose to buy Wirecutter's independently chosen editorial picks, it may earn affiliate commissions that support its work. Read the full article h…

Customer Rewards

The content below is taken from the original ( Customer Rewards), to continue reading please visit the site. Remember to respect the Author & Copyright.

We'll pay you $1.47 to post on social media about our products, $2.05 to mention it in any group chats you're in, and 11 cents per passenger each time you drive your office carpool past one of our billboards.

Microsoft’s Office UI update includes a simpler, cleaner ribbon

The content below is taken from the original ( Microsoft’s Office UI update includes a simpler, cleaner ribbon), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft has given its infamous Office ribbon a much simpler, much less cluttered look as part of its interface redesign for Office.com and Office 365 applications. The tech giant has updated the element to only show the most basic options — if you…