HTTP vs. MQTT: A tale of two IoT protocols

The content below is taken from the original ( HTTP vs. MQTT: A tale of two IoT protocols), to continue reading please visit the site. Remember to respect the Author & Copyright.

When it comes to making software design decisions for Internet of Things (IoT) devices, it’s important to fit your functional requirements within the capabilities of resource-constrained devices. More specifically, you often need to efficiently offload data from the devices into the cloud, which eventually requires you to evaluate different communication protocols.

Google Cloud IoT Core currently supports device to cloud communication through two protocols: HTTP and MQTT. By examining the performance characteristics of each of these two protocols, you can make an informed decision on which is more helpful to your particular use case.

This post takes an experimental approach: we collect metrics on response time and packet size when sending identical payloads through MQTT and HTTP, while varying the payload size and the number of messages sent over one connection session. Doing it this way highlights some of the characteristics of—and differences between—the two protocols.

Setting up the experiment

We set up our experiment using a single registry in Cloud IoT Core that accepts both HTTP and MQTT connections. The registry routes device messages to a single Pub/Sub topic with one Cloud Functions endpoint as the subscriber; the Cloud Function simply writes the payload to the log.
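For reference, the subscriber side can be very small. The following is a minimal sketch, assuming a Python background Cloud Function bound to the Pub/Sub topic; the function name and payload handling here are illustrative rather than taken from the original test code:

```python
import base64

def log_payload(event, context):
    """Triggered by a message published to the Pub/Sub topic.

    The device payload arrives base64-encoded in the 'data' field of the
    Pub/Sub event; printing it sends it to Cloud Logging.
    """
    payload = base64.b64decode(event.get("data", b"")).decode("utf-8")
    print("Received payload: {}".format(payload))
```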

The end device is simulated on our laptop, which runs both an MQTT client and an HTTP client, measures the response times, and tracks the packets sent over the wire.

[Figure: Setup overview]

Properties of the protocols

Before we go into the implementation details, let’s take a look at the differences between MQTT and HTTP that influence how the tests are set up.

MQTT (Message Queuing Telemetry Transport), as the name suggests, follows a publish/subscribe pattern: clients connect to a broker, and the remote devices publish messages to a shared queue. The protocol optimizes for small message size, for efficiency.

HTTP adheres to the standard request/response model.

To make a fair comparison between the two protocols, all the steps in the authentication process (handshake) need to be taken into account. For MQTT, this means that the connect and disconnect messages are measured in sequence with the actual data messages. Since this connection overhead applies once per session in the MQTT case, we vary the number of data messages sent between one connect-disconnect cycle and the next.

Tracing packets sent over the wire

To get a detailed view of the packet size being transmitted for both protocols, we used Wireshark.

Locust client implementation

We used Locust.io to perform load tests and to compile the metrics. Locust.io gives you a simple HTTP client from which to collect your timing data, whereas for the MQTT profiling, we tested with the Eclipse Paho MQTT client package, authenticated via JWT with Cloud IoT Core. The source code for the test is available here.

Let’s take a closer look at the MQTT Locust client. You’ll likely notice several things. First, an initial connect and disconnect is issued in the `on_start` function to preload the MQTT client with all the credentials it needs to connect to Cloud IoT Core, so that the credentials can be reused in each measurement cycle.

When publishing messages, we use the qos=1 flag to ensure that each message is delivered, by waiting for a PUBACK from Cloud IoT Core; this makes the exchange comparable to the request/response cycle of the HTTP protocol. Because the Paho MQTT client publishes all messages asynchronously, we call the wait_for_publish() function on the MQTTMessageInfo object to block execution until a PUBACK response is received for each message.
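To make the publish path concrete, here is a minimal sketch using the Eclipse Paho client against the Cloud IoT Core MQTT bridge. The project, registry, and device IDs are placeholders, and create_jwt stands in for a JWT-signing helper that is not shown here:

```python
import time
import paho.mqtt.client as mqtt

from my_jwt_helper import create_jwt  # hypothetical helper that signs the device JWT

DEVICE_PATH = ("projects/my-project/locations/us-central1/"
               "registries/my-registry/devices/my-device")

client = mqtt.Client(client_id=DEVICE_PATH)  # paho-mqtt 1.x style constructor
# Cloud IoT Core ignores the username field; the JWT goes in the password.
client.username_pw_set(username="unused", password=create_jwt("my-project"))
client.tls_set()  # the MQTT bridge only accepts TLS connections
client.connect("mqtt.googleapis.com", 8883)
client.loop_start()

start = time.time()
# qos=1 makes the bridge confirm delivery with a PUBACK.
info = client.publish("/devices/my-device/events", '{"temperature": 21}', qos=1)
info.wait_for_publish()  # block until the PUBACK for this message arrives
print("Publish round trip: {:.1f} ms".format((time.time() - start) * 1000))

client.loop_stop()
client.disconnect()
```

A Locust task would wrap the publish-and-wait portion and report the elapsed time as the request latency.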

Test cases

MQTT

Varying the number of messages: We measured the response time for sending 1, 100, and 1000 messages over a single connection cycle each, and also captured the packet sizes sent over the wire.

Varying the size of messages: Here we measured the response time for sending a single message with 1, 10, and 100 property fields over a single connection cycle each, and then captured the packet sizes sent.

HTTP

Next we measured the average response time for sending a payload with 1, 10, and 100 property fields, and then captured the packet sizes over the wire.

Results

MQTT response time

Below are the results of running both the HTTP and MQTT cases with only one simulated Locust user. The message transmitted is a simple JSON object containing a single key-value pair.

[Figures: MQTT response time results]

HTTP response time

[Figure: HTTP response time results]

Packet size capturing results

To get a more accurate view of what packets are actually being sent over the wire, we used Wireshark to capture all packets transferred to and from the TCP port used by Locust.io. The size of each packet was also captured to give a precise measure of the data overhead of both protocols.

MQTT

[Figure: MQTT over TLS connection procedure log]

The wire log shows the handshake process that sets up a TLS tunnel for MQTT communication. The main part of this process consists of exchanging and verifying the certificates and the shared secret.

[Figure: Single message publish cycle]

The wire log for a single message publishing cycle shows an MQTT publish message from client to server, an MQTT publish ACK (PUBACK) back to the client, and a TCP ACK from the client acknowledging the PUBACK it received.

[Figure: Disconnect procedure log]

HTTP

[Figure: Handshake procedure for establishing the TLS connection]

The initialization procedure for setting up the TLS tunnel is the same for the HTTP case as it is for the MQTT case, and the now established secure tunnel is re-used by all subsequent requests.

[Figure: Single publish event log]

The HTTP protocol is stateless: the JWT is sent in the header of every publish event request, and the Cloud IoT Core HTTP bridge responds to every request.
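To illustrate what that per-request overhead looks like, here is a sketch of a single publish over the HTTP bridge. It assumes the publishEvent endpoint of the Cloud IoT Core HTTP bridge and reuses the hypothetical create_jwt helper from the MQTT sketch; the IDs are placeholders:

```python
import base64
import requests

from my_jwt_helper import create_jwt  # same hypothetical JWT helper as before

DEVICE_PATH = ("projects/my-project/locations/us-central1/"
               "registries/my-registry/devices/my-device")
URL = "https://cloudiotdevice.googleapis.com/v1/{}:publishEvent".format(DEVICE_PATH)

body = {"binary_data": base64.b64encode(b'{"temperature": 21}').decode("ascii")}
response = requests.post(
    URL,
    json=body,
    # The JWT has to accompany every single publish request.
    headers={"Authorization": "Bearer " + create_jwt("my-project")},
)
response.raise_for_status()
print("HTTP publish status:", response.status_code)
```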

The following table sums up the packet sizes sent during each of the transfer states for both MQTT and HTTP:

[Table: Packet sizes per transfer state for MQTT and HTTP]
And this table shows how variation in payload size affects packet size over the wire:
[Table: Effect of payload size on packet size over the wire]
[Table: Summary of results]

Summary

[Figure: Average data transmitted per message vs. number of messages per connection]
The above diagram sums up the average amount of data transmitted per message for different numbers of messages transmitted over the same connection.
[Figure: Transmission time for different message sizes]

Conclusion

Looking at the results that compare response times over one connection cycle for MQTT, we can clearly see that the initial connection setup raises the response time for sending a single message to roughly the level of sending a single message over HTTP, which in our case rounds up to 120 ms per message. The difference in the amount of data sent over the wire is even more significant: the MQTT case sends around 6300 bytes for a single message, which is larger than the roughly 5600 bytes for HTTP. By looking at the packet traffic log, we can see that the dominant part—more than 90% of the data transmitted—goes to setting up and tearing down the connection.

The real advantage of MQTT over HTTP appears when we reuse a single connection for sending multiple messages: the average response time per message converges to around 40 ms, and the data amount per message converges to around 400 bytes. Note that in the case of HTTP, these reductions simply aren’t possible.

From the results of the test varying payload size, we observed that response times stayed roughly constant as the payload size went up. The explanation is that the payloads being sent are small, so the full network capacity isn’t utilized; as the payload size increases, more of the capacity is used. Another observation from the network packet log is that even as the amount of information packed into the payload increased by 10x and 100x, the amount of data actually transferred only increased by 1.8x and 9.8x respectively for MQTT, and by 1.2x and 3.4x for HTTP, which shows the effect of the protocol overhead when publishing messages.

The conclusion we can draw is that when choosing MQTT over HTTP, it’s really important to reuse the same connection as much as possible. If connections are set up and torn down frequently just to send individual messages, the efficiency gains are not significant compared to HTTP.

  • The greatest efficiency gains can be achieved by increasing the information density of each payload message.

  • The most straightforward approach is to pack more data into each payload, which can be achieved by choosing proper compression and packaging methods based on the type of the data being generated. For instance, protobuf is an efficient way to serialize structured data.

For streaming applications, time-window bundling can increase the number of data points sent in each message. By choosing the window length wisely in relation to the data generation rate and the available network bandwidth, you can transmit more information with lower latency.
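As a rough sketch of that idea, the loop below buffers simulated 10 Hz sensor readings and publishes them once per 5-second window; both values are arbitrary and would need tuning against your real data rate and bandwidth:

```python
import json
import random
import time

WINDOW_SECONDS = 5  # window length: tune against data rate and bandwidth

def publish(message):
    # Stand-in for the MQTT or HTTP publish calls sketched earlier.
    print("publishing {} bytes".format(len(message)))

window, window_start = [], time.time()
while True:
    # Simulated sensor sample at roughly 10 Hz.
    window.append({"t": time.time(), "value": random.random()})
    if time.time() - window_start >= WINDOW_SECONDS:
        # One message carries every data point collected during the window.
        publish(json.dumps({"samples": window}))
        window, window_start = [], time.time()
    time.sleep(0.1)
```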

In many IoT applications, the methods mentioned above cannot easily be applied due to the hardware constraints of the IoT devices. Depending on the functional requirements of each case, a viable solution is to use gateway devices with more processing power and memory. The payload data is first delivered from the end device to the gateway, where the different optimization measures can be applied before further delivery to Google Cloud.

Next steps

Note: While this post intends to make comparisons between the two protocols, the actual response times depend on client network connectivity, the distance to the closest GCP edge node, where the IoT Core service is terminated, and the size of the transmitted message.

fontbase (2.6.4)

The content below is taken from the original ( fontbase (2.6.4)), to continue reading please visit the site. Remember to respect the Author & Copyright.

A blazing fast, beautiful and free font manager for designers

Liquidware announces FlexApp and ProfileUnity support for Amazon AppStream

The content below is taken from the original ( Liquidware announces FlexApp and ProfileUnity support for Amazon AppStream), to continue reading please visit the site. Remember to respect the Author & Copyright.

Liquidware, the leader in adaptive workspace management, today announced support for Amazon AppStream via its FlexApp application layering, and… Read more at VMblog.com.

Brian Eno’s music creation app is coming to Android, 10 years late

The content below is taken from the original ( Brian Eno’s music creation app is coming to Android, 10 years late), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you remember the early days of Apple's App Store, you might remember Bloom, Brian Eno's "generative music" app. It showed the potential of the smartphone as an artistic tool at a time when mobile apps were still novelties. Well, it's coming back w…

You Can Hire an Instagram ‘Sitter’ to Post Photos for You at This Swiss Hotel

The content below is taken from the original ( You Can Hire an Instagram ‘Sitter’ to Post Photos for You at This Swiss Hotel), to continue reading please visit the site. Remember to respect the Author & Copyright.

Scrolling through vacation pictures can give you major FOMO. However, a Swiss hotel line is offering hirable Instagram “sitters” to post photos for you on your European escape. Dubbed “Relax We Post,” the […]

The post You Can Hire an Instagram ‘Sitter’ to Post Photos for You at This Swiss Hotel appeared first on Geek.com.

Rights Management, Protection, and Email AutoSignatures

The content below is taken from the original ( Rights Management, Protection, and Email AutoSignatures), to continue reading please visit the site. Remember to respect the Author & Copyright.


Protection for the Masses

As part of its Microsoft 365 Information Protection initiative, Microsoft has done a good job to make encryption more accessible for Exchange Online users through the introduction of Sensitivity Labels and the Encrypt-Only and Do Not Forward options in Outlook and OWA. The net effect is that it’s easier than ever before for Office 365 users to protect their email with encryption and rights management.

Increased use of inbuilt encryption might see a reduction of the use of third-party encryption schemes like S/MIME or PGP, simply because sensitivity labels and the inbuilt protection are so easy to use. You don’t have to install add-ins for clients, there’s no need for key management, and any recipient on any email system can read a protected message. Everything works out of the box for Office 365 E3 and E5 tenants because the licenses for rights management are baked into these plans.

Encryption and Rights Management

Protecting email with easy-to-use encryption has many benefits, especially when combined with rights management. If a protected message reaches someone who shouldn’t have it, the assigned permissions will probably stop the recipient being able to read the message. Even if the permissions allow access (for instance, you can now assign a special Any Authenticated Users permission to allow any account authenticated with a Microsoft directory to access content) the sender can easily revoke access to the message. Another advantage is that no one can read a protected message if a recipient forwards it without the permission of the sender.

However, the advent of easy encryption has a knock-on effect on any third-party product that processes email. With that thought in mind, I’ve been looking at how autosignature products deal with protected email.

Autosignature Technology

Autosignature products allow companies to manage text and graphics inserted into outbound messages. The content usually includes the sender’s name, their contact details, some cheery text to promise dire consequences on anyone who accesses the email without authorization, and the company logo. It’s a matter of speculation as to how much of the Exchange Online mailbox databases are filled with company logos, but that’s not important right now.

Market Review

I contacted four autosignature vendors to ask them how they cope with protected email. Table 1 documents what I discovered.

Product | Outcome
Code Two Email Signatures for Office 365 | Can’t process protected email.
Crossware Mail Signature | Client-side autosignatures support protected email; server-based processing can’t process protected email.
Exclaimer Cloud Signatures for Office 365 | Client-side autosignatures support protected email; server-based processing can’t process protected email.
Symprex Email Signature Manager | Can’t process protected email. [At least, Symprex logged my query and then did not respond further. I interpret this to be the case from their documentation]

Table 1: Popular Email autosignature products and protected email

Exclaimer’s client-side component supports Outlook for Windows. The Crossware client-side solution uses the Outlook manifest model and supports Outlook 2013 onward for Windows, Outlook 2016 for Mac, and OWA. Crossware allows users to preview an autosignature before insertion and has an “intelligent” mode that selects the best autosignature from a set based on the content of a message.

If your preferred client doesn’t support autosignatures, server-side processing is the only alternative. This is especially so for mobile clients like Outlook for iOS and Android, the native email apps found on mobile devices like the iOS mail app, and IMAP4/POP3 clients like Thunderbird. Given the widespread and increasing usage of mobile devices for email, server-side processing is very important for any company that wants to manage autosignatures.

Need for Change

Current autosignature products do a sterling job inserting text and graphics into unprotected email. But technology changes and new functionality brings new challenges. According to some of the companies I spoke with, they are working with Microsoft to figure out how to process protected email in the cloud.

Hopefully, Microsoft and the ISVs come up with a scheme to allow autosignatures to be applied to protected messages in transit. Because processing happens in transit, it can be managed centrally, and all clients are supported. Until this functionality is available, client-side injection is the only way to add autosignature text to protected messages, unless you use a transport rule.

Transport Super-User

Exchange Online transport (mail flow) rules can insert text even to protected messages. The Exchange transport service has super-user privilege for rights management, which it uses to decrypt protected messages, insert text, and then encrypt the messages again for onward transmission. It’s possible that Microsoft might allow ISVs to use something like super-user privilege to insert autosignatures. There’s no reason why this wouldn’t work but allowing third-party products to decrypt and encrypt messages is a big step that not all customers might be happy with.

Future Solutions

Like any other technology, it will take some time before Office 365 tenants fully embrace sensitivity labels and make more extensive use of rights management and encryption to protect email. While the brains trust in the ISVs figure out how best to deal with protected email, if you’re interested in using autosignatures with protected email now, you should check out the available products and go with the one that offers the best all-round solution for both protected and unprotected email.

The post Rights Management, Protection, and Email AutoSignatures appeared first on Petri.

Fix Surface Go issues using USB Recovery Disk

The content below is taken from the original ( Fix Surface Go issues using USB Recovery Disk), to continue reading please visit the site. Remember to respect the Author & Copyright.


Surface Go is the most affordable Surface Tablet yet. It is a lightweight yet powerful computing device for one to use. However, like any other computing device, you might face errors while using it. And to fix these issues, if […]

This post Fix Surface Go issues using USB Recovery Disk is from TheWindowsClub.com.

Google Assistant will now be nicer if you say “Please” and “Thank you”

The content below is taken from the original ( Google Assistant will now be nicer if you say “Please” and “Thank you”), to continue reading please visit the site. Remember to respect the Author & Copyright.

Is barking orders at our AI-powered voice assistants turning us into jerks? Are our kids growing accustomed to commanding the Google Homes and Siris of the world to turn on the lights without the politeness of a “please”?

Perhaps. With that in mind, from here on out, Google Assistant will be a bit more cheery if you take the time to say “please” or “thank you”.

If, for example, you say “Hey Google, please set a timer for 10 minutes”, Google Assistant will respond “Thanks for asking so nicely! 10 minutes, starting now.”

To be clear: it’s totally optional. Prefer to be curt? That’s okay — Assistant won’t chastise you. But if these are habits you’re trying to instill in a little one or polish up yourself, Assistant will respond in kind.

Google first mentioned that they’d be adding this feature, which they call “Pretty Please”, at I/O in May. Amazon rolled an almost identical feature into Alexa back in April of this year.

Google Assistant is picking up a few other new tricks this morning:

  • You can say “Hey Google, Call Santa” to have a chat with ol’ Saint Nick himself. That’s been around a while, but they’ve added a new storyline (and visuals, if you have an Assistant device with a display.)
  • They’ve added a bunch of Christmas stories to Assistant’s story time feature, so you can say things like “Hey Google, read me ‘Twas the night before Christmas’”, or ‘Hey Google, tell me a Christmas story’ to kick things off.
  • You can now add things to a specific list, such as your wish list or Christmas dinner shopping list, via Assistant. Say “Hey Google, add ‘Kindle for mom’ to my gift list” and it’ll know what you mean. You can also create new lists via Assistant, and Google notes that they’ll soon support third party services Any.do, Bring!, and Todoist.
  • If you have an Assistant device with a display built-in, it can now double as a lil’ Karaoke machine — ask it to play a song via Google Play Music, and the lyrics should now appear in sync.
  • If you have an Assistant device with a display built-in and a Nest doorbell, you can now chat with whoever’s at the door thanks to a new two way talk button.

    New – Automatic Cost Optimization for Amazon S3 via Intelligent Tiering

    The content below is taken from the original ( New – Automatic Cost Optimization for Amazon S3 via Intelligent Tiering), to continue reading please visit the site. Remember to respect the Author & Copyright.

    Amazon Simple Storage Service (S3) has been around for over 12.5 years, stores trillions of objects, and processes millions of requests for them every second. Our customers count on S3 to support their backup & recovery, data archiving, data lake, big data analytics, hybrid cloud storage, cloud-native storage, and disaster recovery needs. Starting from the initial one-size-fits-all Standard storage class, we have added additional classes in order to better serve our customers. Today, you can choose from four such classes, each designed for a particular use case. Here are the current options:

    Standard – Designed for frequently accessed data.

    Standard-IA – Designed for long-lived, infrequently accessed data.

    One Zone-IA – Designed for long-lived, infrequently accessed, non-critical data.

    Glacier – Designed for long-lived, infrequently accessed, archived critical data.

    You can choose the applicable storage class when you upload your data to S3, and you can also use S3’s Lifecycle Policies to tell S3 to transition objects from Standard to Standard-IA, One Zone-IA, or Glacier based on their creation date. Note that the Reduced Redundancy storage class is still supported, but we recommend the use of One Zone-IA for new applications.

    If you want to tier between different S3 storage classes today, Lifecycle Policies automate moving objects based on the creation date of the object in storage. If your data is stored in Standard storage today and you want to find out whether some of it is suited to the Standard-IA storage class, you can use Storage Class Analysis in the S3 Console to identify which groups of objects to tier using Lifecycle. However, there are many situations where the access pattern of data is irregular, or you simply don’t know it because your data set is accessed by many applications across an organization. Or maybe you are so focused on your app that you don’t have time to use tools like Storage Class Analysis.

    New Intelligent Tiering
    In order to make it easier for you to take advantage of S3 without having to develop a deep understanding of your access patterns, we are launching a new storage class, S3 Intelligent-Tiering. This storage class incorporates two access tiers: frequent access and infrequent access. Both access tiers offer the same low latency as the Standard storage class. For a small monitoring and automation fee, S3 Intelligent-Tiering monitors access patterns and moves objects that have not been accessed for 30 consecutive days to the infrequent access tier. If the data is accessed later, it is automatically moved back to the frequent access tier. The bottom line: You save money even under changing access patterns, with no performance impact, no operational overhead, and no retrieval fees.

    You can specify the use of the Intelligent-Tiering storage class when you upload new objects to S3. You can also use a Lifecycle Policy to effect the transition after a specified time period. There are no retrieval fees and you can use this new storage class in conjunction with all other features of S3 including cross-region replication, encryption, object tagging, and inventory.
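For example, here is a minimal sketch of such a Lifecycle Policy using boto3; the bucket name and rule ID are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Move every object to Intelligent-Tiering 30 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "to-intelligent-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply the rule to all objects
            "Transitions": [{
                "Days": 30,
                "StorageClass": "INTELLIGENT_TIERING",
            }],
        }]
    },
)
```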

    If you are highly confident that your data is accessed infrequently, the Standard-IA storage class is still a better choice with respect to cost savings. However, if you don’t know your access patterns, or if they are subject to change, Intelligent-Tiering is for you!

    Intelligent Tiering in Action
    I simply choose the new storage class when I upload objects to S3:

    I can see the storage class in the S3 Console, as usual:

    And I can create Lifecycle Rules that make use of Intelligent-Tiering:

    And that’s just about it. Here are a few things that you need to know:

    Object Size – You can use Intelligent-Tiering for objects of any size, but objects smaller than 128 KB will never be transitioned to the infrequent access tier and will be billed at the usual rate for the frequent access tier.

    Object Life – This is not a good fit for objects that live for less than 30 days; all objects will be billed for a minimum of 30 days.

    Durability & Availability – The Intelligent-Tiering storage class is designed for 99.9% availability and 99.999999999% durability, with an SLA that provides for 99.0% availability.

    Pricing – Just like the other storage classes, you pay for monthly storage, requests, and data transfer. Storage for objects in the frequent access tier is billed at the same rate as S3 Standard; storage for objects in the infrequent access tier is billed at the same rate as S3 Standard-Infrequent Access. When you use Intelligent-Tiering, you pay a small monthly per-object fee for monitoring and automation; this means that the storage class becomes even more economical as object sizes grow. As I noted earlier, S3 Intelligent-Tiering will automatically move data back to the frequent access tier based on access patterns but there is no retrieval charge.

    Query in Place – Queries made using S3 Select do not alter the storage tier. Amazon Athena and Amazon Redshift Spectrum access the data using the regular GET operation and will trigger a transition.

    API and CLI Access – You can use the storage class INTELLIGENT_TIERING from the S3 CLI and S3 APIs.
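For instance, here is a minimal sketch of uploading an object straight into the new storage class with boto3 (the bucket and key are placeholders):

```python
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-bucket",                  # placeholder bucket name
    Key="telemetry/device-42/readings.json",
    Body=b'{"temperature": 21}',
    StorageClass="INTELLIGENT_TIERING",  # opt this object into Intelligent-Tiering
)
```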

    Available Now
    This new storage class is available now and you can start using it today in all AWS Regions.

    Jeff;

    PS – Remember the trillions of objects and millions of requests that I just told you about? We fed them into an Amazon Machine Learning model and used them to predict future access patterns for each object. The results were then used to inform storage of your S3 objects in the most cost-effective way possible. This is a really interesting benefit that is made possible by the incredible scale of S3 and the diversity of use cases that it supports. There’s nothing else like it, as far as I know!

    outlookattachview (3.15)

    The content below is taken from the original ( outlookattachview (3.15)), to continue reading please visit the site. Remember to respect the Author & Copyright.

    View/Extract/Save Outlook Attachment

    How to install ChromeOS on old laptop using Chromefy

    The content below is taken from the original ( How to install ChromeOS on old laptop using Chromefy), to continue reading please visit the site. Remember to respect the Author & Copyright.

    ChromeOS is a very light operating system. Google has been working hard at promoting its own Chromebook tablets that run on this ChromeOS operating system, directly competing with Microsoft’s Windows and Apple’s macOS. ChromeOS is based on Linux […]

    This post How to install ChromeOS on old laptop using Chromefy is from TheWindowsClub.com.

    Airbnb is using what3words to list stays with Mongolian nomads

    The content below is taken from the original ( Airbnb is using what3words to list stays with Mongolian nomads), to continue reading please visit the site. Remember to respect the Author & Copyright.

    Putting a new spin on the term “digital nomad,” UK addressing platform what3words has partnered with Airbnb to list stays with Mongolian nomads.

    The startup’s simplified addressing system is being applied to help adventurous travelers “home share” with Dukha reindeer herders at their mountain camp, where there aren’t any street names to anchor a trip.

    The partnership is slated as supporting sustainable tourism by helping the tribe tap into a new revenue stream to support its traditional way of life.

    Earlier this year, Airbnb signed a Memorandum of Understanding with the Ministry of Environment and Tourism of Mongolia to use home sharing as a route for economic empowerment and community development. “The MOU will see both parties provide hospitality training for current hosts, as well as potential hosts in rural and remote areas, to encourage the adoption of new digital technology for tourism,” they note in a press release today.

    The Airbnb listing with the Dukha reindeer herders offers the chance to stay in a teepee in the Taiga forest in Northern Mongolia, with guests getting “two wooden beds, sleeping bags and an open-fire stove for heating and cooking, as well as full access to the reindeer tribe’s backyard.”

    So definitely not the usual Airbnb fare.

    what3words comes into play as a neat tool because guests are asked to meet the nomadic tribe at a previously communicated three-word address at the edge of the forest.

    what3words’ platform chunks the world into 57 trillion 3-by-3 meter squares — each of which has been assigned three words to act as its easier-to-share pinpoint. Using unique combinations of words for geolocation reduces the risk of confusing two similar-sounding street names, for example, and means a location can easily be shared verbally or read at a glance.

    After meeting their hosts at the forest edge, guests ascend with them up the mountain to the camp — located at ///evaluate.video.nails — either on reindeer or by horse.

    what3words addressing platform used to pinpoint a nomadic tribe’s Airbnb listing in Mongolia

    Travelers can then expect to be “immersed in the day-to-day life of the tribe, from herding and milking reindeer to cooking traditional Mongolian dishes and making handicrafts,” they add.

    The tribe uses a co-host in an urban location to manage the process of updating their Airbnb listing with a new three-word address, as needed (i.e. when they shift the location of their camp).

    Commenting on the partnership in a statement, Cameron Sinclair, social innovation lead at Airbnb, said:  “Airbnb is excited to partner with what3words and the Ministry of Environment and Tourism of Mongolia to drive sustainable tourism and economic empowerment, while promoting the unique hospitality and culture so intrinsic to the country.

    “In Mongolia, a lack of traditional street addressing and nomadic way of life have prevented locals from welcoming Airbnb guests into their homes. Our partnership delivers an innovative way to provide hosts with an accurate and reliable address while constantly on the move, and creates new livelihood opportunities for nomadic and rural communities in Mongolia and around the world.”

    A spokeswoman for what3words confirmed the partnership is limited to Mongolia for now — but added it’s “exploring next steps with Airbnb” in the hopes of expanding the collaboration.

    There’s no financial component to the arrangement as yet because what3words is free for individuals to use (so in this case the Dukha reindeer herders). 

    But the startup does sell B2B licenses for other products, including its API and SDKs — offering optional extras like very large-scale batch conversion of three-word addresses to GPS coordinates or vice versa.

    So if Airbnb sees enough value in ramping up offers of alternative tourist experiences on its platform — and in what3words’ three-word address system as the easiest way to grease and thus scale that pipe — it could end up sending something more than a bit of publicity the startup’s way.

    That’s clearly what3words’ hope.

    “what3words is incredibly useful for guests trying to find their Airbnb — be it in the centre of Madrid, or on the Mongolian Steppe,” the spokeswoman told us. “We’re already seeing many hosts provide guests with their 3 word address, and we’d love to make the process as seamless as possible.”

    One growth headwind for Airbnb’s business could work in what3words’ favor because the home-sharing platform may well need to invest in finding innovative and sustainable routes to grow its business, given a growing backlash against overtourism in popular destination cities that have been saddled with the real-world impacts of homes being repurposed as de facto hotels (and travel generally being more affordable).

    In recent years residents in cities where Airbnb is popular have been vocal in complaining that such platforms bring problems — from antisocial impacts such as noise and drunken partying to more structural issues as they contribute to driving up rents by removing housing stock, with the risk of undermining local communities if residents get priced out.

    And a growing number of cities have responded to these concerns by tightening regulations on home-sharing — throwing up blockers and sometimes hard caps on Airbnb’s growth.

    A requirement that hosts register with the city in San Francisco so it can enforce vacation-rental laws to prevent homes being repurposed as year-round tourist lets led to a dramatic decline in Airbnb listings at the start of this year, for example.

    But it looks to be the opposite story in Mongolia where politicians are focused on development and keen to attract outside investment. And where tourists are — at least for now — welcome visitors.

    Amazon Echo devices can now make Skype calls

    The content below is taken from the original ( Amazon Echo devices can now make Skype calls), to continue reading please visit the site. Remember to respect the Author & Copyright.

    Video chat was always one of Echo Show and Spot’s biggest selling points. But until now, the products have been tethered to Amazon’s own software. This week, however, the company took another big step in its ongoing relationship with Microsoft by adding Skype calling to the mix.

    Now just about every Echo device past and present is able to make calls using the popular platform. Your Echo/Plus/Dot, et al. will be able to do so via voice using a command like, “Alexa, Skype Mom.” Echos with displays, meanwhile, will offer up the full video Skype experience. Users can also ask Alexa to dial a phone number via Skype.

    It’s a solid partnership for the two companies. Amazon could use better chat support and Microsoft hasn’t made much headway with Cortana-enabled devices. This is also a bit of a blow for Facebook, whose Portal devices are built almost entirely around the idea of offering a standalone home product for video chat. That’s the best and practically only killer app on Facebook’s offering at present.

    The feature can be set up through the Alexa mobile app.

    HyperSurfaces turns any surface into a user interface using vibration sensors and AI

    The content below is taken from the original ( HyperSurfaces turns any surface into a user interface using vibration sensors and AI), to continue reading please visit the site. Remember to respect the Author & Copyright.

    Imagine any surface, such as a wooden table, car door or glass wall, could be turned into a user interface without the need for physical buttons or a touch screen. That’s the ambition of HyperSurfaces, the London startup originally behind the Mogees line of music devices and software, which today is unveiling what it claims is a major breakthrough in UI technology.

    Dubbed “HyperSurfaces,” the new technology, for which the company has four related patents pending, combines vibration sensors and the latest developments in machine learning/AI to transform any object of any material, shape and size into an intelligent object able to recognise physical interactions.

    Equally important is that once trained for a particular object, the HyperSurfaces neural network-trained algorithms are able to run on dedicated microchips that don’t require connection to the cloud for processing. This means that gestures can be instantly recognised and in turn trigger specific commands entirely locally and at much lower cost.

    The idea, co-founder and CEO Bruno Zamborlin tells me, is to merge “the physical and the data worlds” in a more seamless way than has been previously possible, ridding us of unnecessary keyboards, buttons and touch screens.

    “The HyperSurfaces algorithms belong to the current state of the art in deep learning research,” he explains. “On top of this, the computational power of microchips literally exploded over the last years allowing for machine learning algorithms to run locally in real-time whilst achieving a bill of material of just a few dollars. These applications are possible now and were not possible 3 or 5 years ago.”

    Zamborlin says it is difficult to imagine what the applications of HyperSurfaces technology might end up being, in a similar way as it was difficult to imagine ten years ago all of the applications a mobile phone could enable. The most immediate ideas include the possibility of creating technological objects made of materials that until now haven’t been associated with technology at all, such as wood, glass and different kinds of metal.

    “Imagine a new wave of 3D wooden IoT devices,” he says, only half jokingly.

    This could result in a wooden kitchen table becoming the controller for your living room smart lights and smart thermostat. Or perhaps your home’s floor becomes an advanced security system able to accurately distinguish the steps of a thief from those of your cat. HyperSurfaces has also already seen a lot of interest from car manufacturers.

    “Other initial applications will probably include accommodating the desire of car manufacturers to eliminate buttons and switches from their car doors and cockpits, creating a brand new experience for the user,” adds Zamborlin. “We are used to flat plastic surfaces, but this won’t be a requirement anymore.”

    HyperSurfaces team

    To get this far — the video demos are very impressive and can’t help but fire your imagination — HyperSurfaces (then called Mogees) raised $1.1 million in seed funding about a year ago and has been heads down ever since. This included Zamborlin recruiting a team of top AI scientists and completely re-focusing on research and development. “They are all from Goldsmiths [University of London], like myself, where we specialise in the niche of AI for real-time interaction,” he says.

    The best ultraportable laptops of 2018

    The content below is taken from the original ( The best ultraportable laptops of 2018), to continue reading please visit the site. Remember to respect the Author & Copyright.

    When Steve Jobs first pulled the original MacBook Air out of a manila envelope in 2008, the tech world dropped its collective jaw. A laptop that could fit in such a small package? Groundbreaking. With a three-pound weight and tapered silhouette that…

    Prepare For The ITIL Certification Exams With This $49 Training Bundle

    The content below is taken from the original ( Prepare For The ITIL Certification Exams With This $49 Training Bundle), to continue reading please visit the site. Remember to respect the Author & Copyright.

    Enterprise demands for IT services have grown considerably, so considering a role in IT service management (ITSM) can be a stable and lucrative career move. However, not just anyone can become an IT service manager; it requires earning IT certifications, which demand rigorous preparation. Not to mention, there are numerous courses to choose from online. If you want to earn an IT certification, this Ultimate ITIL Certification Training Bundle by Integrity Training can help you prepare for $49.


    Hack My House: Opening Raspberry Pi to the Internet, but Not the Whole World

    The content below is taken from the original ( Hack My House: Opening Raspberry Pi to the Internet, but Not the Whole World), to continue reading please visit the site. Remember to respect the Author & Copyright.

    If you’ve followed along with our series so far, you know we’ve set up a network of Raspberry Pis that PXE boot off a central server, and then used Zoneminder to run a network of IP cameras. Now that some useful services are running in our smart house, how do we access those services when away from home, and how do we keep the rest of the world from spying on our cameras?

    Before we get to VPNs and port forwarding, there is a more fundamental issue: Do you trust your devices? What exactly is the firmware on those cheap cameras really doing? You could use Wireshark and a smart switch with port mirroring to audit the camera’s traffic. How much traffic would you need to inspect to feel confident the camera never sends your data off somewhere else?

    Thankfully, there’s a better way. One of the major features of surveillance software like Zoneminder is that it aggregates the feeds from the cameras. This process also has the effect of proxying the video feeds: We don’t connect directly to the cameras in order to view them, we connect to the surveillance software. If you don’t completely trust those cameras, then don’t give them internet access. You can make the cameras a physically separate network, only connected to the surveillance machine, or just set their IP addresses manually, and don’t fill in the default route or DNS. Whichever way you set it up, the goal is the same: let your surveillance software talk to the cameras, but don’t let the cameras talk to the outside world.

    Edit: As has been pointed out in the comments, leaving off a default route is significantly less effective than separate networks. A truly malicious piece of hardware could easily probe for the gateway.

    This idea applies to more than cameras. Any device that doesn’t need internet access to function, can be isolated in this way. While this could be considered paranoia, I consider it simple good practice. Join me after the break to discuss port forwarding vs. VPNs.

    For the Lazy: Port Forwarding

    There are two broad categories of solutions to the problem of remote access, the first being port forwarding. Simply forwarding the assigned port from your router to the server is certainly easiest, but it’s also the least secure. The logs of a port 80 (HTTP) or 22 (SSH) exposed to the internet are frightening to sift through. And before you assume you’ll never be found in the vast sea of open ports, I present Robert Graham and masscan. Able to scan every IP address for a given port in just a few minutes, you’re not hiding that web server for long.

    Stories like the libssh vulnerability are constant reminders of the dangers of leaving services open to the internet. If you opt to go this route, you must stay on top of security updates, use very strong passwords, and hope for the best. Fail2ban, virtualization, and good backups are all invaluable tools for standing up to the onslaught that is the public internet. Suffice it to say, I don’t recommend this approach for a home user.

    One of the alternatives is to simply use a different external port number. Port 8080 for an HTTP server is a popular choice, which makes it a bad choice when trying to hide the service. Using a random port between 10,000 and 65,000 is a strategy known as security by obscurity. Someone with a grudge who knew your IP address will find it. But random scans through every possible port at every possible IP address make finding this more of a needle in a haystack problem.

    Another technique that falls under the port forwarding category is port knocking. A series of TCP SYN packets (initial connection attempts) sent to a predetermined list of ports signals to a listening daemon, which then allows your connection through the firewall. While clever, port knocking is subject to replay attacks and a few other problems. If the cleverness of port knocking appeals to you, there is a more modern take on the idea in the form of Fwknop (full disclosure: I do some of the development on this project).
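To illustrate the basic idea (this is a toy sketch, not Fwknop), here is a minimal knocking client; the port sequence and target address are made up, and the listening daemon on the server side is what actually opens the firewall:

```python
import socket

TARGET = "203.0.113.10"              # hypothetical home IP address
KNOCK_SEQUENCE = [7105, 8290, 9361]  # hypothetical predetermined ports

for port in KNOCK_SEQUENCE:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(0.5)
    try:
        # Each connect attempt sends a TCP SYN; the firewall drops it, but
        # the knock daemon watching the firewall log records the hit.
        s.connect((TARGET, port))
    except (socket.timeout, OSError):
        pass  # timeouts and refusals are expected
    finally:
        s.close()
# After the correct sequence, the daemon opens the real service port (e.g. SSH).
```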

    For Peace of Mind: VPN

    Another approach to secure remote access is a VPN. A Virtual Private Network has been described as stretching a ridiculously long Ethernet cable through the internet. Traffic is encrypted between the endpoints, and because a VPN creates a virtual network adapter, a remote user can access the network as if they were physically connected to it.

    OpenVPN is the time-tested contender in the open source VPN space, and has a good track record. It’s based on TLS, can run over TCP or UDP, and is widely used. There is good support for Android, iPhone, Windows, MacOS, and Linux. The configuration can be a challenge to work through, but once it’s up and running, OpenVPN is a great solution.

    Wireguard is a much newer project, and aims to be simpler than OpenVPN. I’ve opted to use Wireguard, which has good Android and Linux support. In fact, Wireguard has been making steady progress towards being officially included in the Linux kernel. There is already macOS support through MacPorts and Homebrew, and their iOS app just entered beta. Wireguard uses more modern cryptography, aiming to be faster and more secure than the alternatives.

    Both VPNs are supported by OpenWrt, and there are advantages to hosting the VPN on the router. When hosting a VPN on another machine on the local network, I have two pieces of advice, as these two problems trip me up repeatedly. The first is to enable IPv4 forwarding on the VPN machine. The second is that other devices on the network must have routes configured to get to the VPN network, in order for traffic to flow in either direction. Without that static route, packets addressed to the VPN are sent to the default router instead of the VPN server.

    Running Silent

    One final consideration is whether you want your IP address to be completely silent. Called a “black hole”, or “stealth mode”, this is when your IP address doesn’t respond to pings, and instead of responding to connection attempts with ICMP error packets, those incoming packets are dropped silently. The goal here is that to a network scan, your IP address appears to be unassigned with nothing listening for traffic.

    OpenVPN and Wireguard both support this style of quiet operation when in UDP mode. In the case of OpenVPN, a Hash-based Message Authentication Code (HMAC) allows incoming UDP packets to be quickly dropped when they aren’t signed with a valid HMAC. Wireguard was written with this sort of radio silence as a design goal, and doesn’t require any extra configuration to enable it.
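As a toy illustration of that pattern (not OpenVPN’s actual wire format or Wireguard’s handshake), here is a UDP listener that silently drops any datagram whose HMAC does not verify; the shared key and port are placeholders:

```python
import hashlib
import hmac
import socket

KEY = b"placeholder-shared-secret"  # in practice, a strong pre-shared key
MAC_LEN = 32                        # length of a SHA-256 digest

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))

while True:
    datagram, addr = sock.recvfrom(65535)
    payload, mac = datagram[:-MAC_LEN], datagram[-MAC_LEN:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        continue  # silently drop unauthenticated packets: no reply, no error
    print("accepted {} bytes from {}".format(len(payload), addr))
```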

    Why It’s Worth Thinking About

    This has obviously been an overview, and there are many clever ideas we haven’t touched on. I often turn to Fwknop combined with SSH tunneling when I don’t want to set up a full VPN. IPSec via the Strongswan project is still a valid solution, and there are many others.

    Most important, don’t thoughtlessly let the entire world try logging in to your house. Part of the appeal of hacking a house is to be able to monitor things while on vacation, but you don’t want to always feel like somebody’s watching you!

    Up next, we’ll look at a networked garage door opener, and talk about the ways to work around a vendor’s proprietary protocol, and maybe a primer on how to reverse engineer it.

    Four operational practices Microsoft uses to secure the Azure platform

    The content below is taken from the original ( Four operational practices Microsoft uses to secure the Azure platform), to continue reading please visit the site. Remember to respect the Author & Copyright.

    This is the fourth blog in a 4-part blog post series on how Microsoft Azure provides a secure foundation.

    Microsoft provides you with a secure foundation to host your infrastructure and applications. In the last blog in this series on infrastructure security, I shared top customer concerns about high investment without clear ROI and challenges of retaining security experts. In this blog, I discuss how Microsoft Azure can help you gain security expertise without additional investment through our operational best practices and a global team of over 3,500 cybersecurity experts. Today, we are going to look at the different operational practices our security experts follow to help ensure your data is secure.

    1. Secure deployment practices

    The Security Development Lifecycle (SDL) is a collection of industry-recognized best practices that address the seven phases of software development. It helps our developers build more secure software and meet security compliance requirements.

    Our developers follow the SDL to ensure they are meeting core security principles throughout development, resolving security issues before their code is deployed, and adhering to the security standards used by all software developed for the Azure platform.

    The SDL is a repeatable process that all software development teams within Azure use to help ensure that code is secure during and after deployment. This process has helped us drastically reduce the number of vulnerabilities in our code.

    In addition, the SDL helps improve compliance by putting privacy and compliance at the beginning of the development processes. This forces developers to address challenges with encryption, data location, personally identifiable information (PII), logging, auditing, or other security issues before anything is deployed.

    We also encourage you, as you develop your code in Azure, to follow the SDL to help ensure your code is secure and compliant.

    2. Restricted and just-in-time administrator access and secure access workstations

    In addition to securing your code, Azure operations and security professionals also work to protect your data from unauthorized access. This includes implementing controls that restrict unauthorized access from Microsoft personnel and contractors.

    For the rare security issue where a Microsoft employee needs to access your Azure infrastructure or your data, strong security controls, such as Customer Lockbox for Azure, help ensure you stay in control of your data and the Azure platform remains secure.

    One of those controls is just-in-time administrative access. If a Microsoft employee needs access to customer data to resolve an issue, they need to request permission from the customer.

    If permission is granted, the ability to carry out the requested activities is limited to a short period of time. Everything the Microsoft employee does during this time is logged, recorded and made available for future audit. When the authorization period expires, the Microsoft employee no longer has administrative access. Just-in-time administrative access helps make sure that Azure infrastructure and security operations personnel access only what they need to access and for a predefined amount of time.

    In addition to just-in-time administrative access, another control used is the Secure Access Workstation (SAW). All Azure infrastructure and security operators are required to use a SAW when accessing the Azure infrastructure. The SAW is also used in those rare scenarios where a Microsoft employee needs access to customer data to resolve the security issue. These secure workstations are hardened and provide a safe environment from Internet-based attacks for sensitive tasks. These devices provide strong protection from phishing attacks, application and OS vulnerabilities, various impersonation attacks, or credential theft attacks that could put your data and systems at risk.

    3. Fast and expert responses to threats

    Microsoft Azure has a global security incident management team that detects and responds to a wide array of security threats 24/7/365.

    The team follows a five-step incident response process that includes detect, assess, diagnose, stabilize, and close, when managing security incidents for the Azure platform.

    5-step incident response process

    If Microsoft becomes aware that customer data has been accessed by an unauthorized party, the security incident manager will begin the execution of the Customer Security Incident Notification Process.

    The goal of the customer security incident notification process is to provide impacted customers with accurate, actionable, and timely notice of when their customer data was potentially breached. The notices can also help you meet specific legal requirements.

    In order to address security incidents quickly and accurately, the security incident response teams are required to complete technical security and security-awareness training. The technical training is focused on technical software issues that could create a security issue and how to avoid those problems. The security-awareness training is focused on teaching security response professionals how to avoid social and behavioral exploits such as phishing.

    4. Cyber security experts

    Microsoft has more than 3,500 cybersecurity experts that work across teams such as the Cyber Defense Operations Center (CDOC), the Microsoft Digital Crimes Unit (DCU), and the Microsoft Security Response Center (MSRC). These security experts act as human intelligence while working with sophisticated, automated processes to detect, respond to, and remediate threats.

    The CDOC brings together security response experts across Microsoft to help protect, detect, and respond to security threats against our infrastructure and services 24/7/365. Informed by trillions of data points across an extensive network of sensors, devices, authentication events, and communications, this team employs automated software, machine learning, behavioral analysis, and forensic techniques to create an Intelligent Security Graph. This graph enables us to interpret and act on possible security issues that would have been impossible prior to these advanced technologies.

    The Microsoft Digital Crimes Unit (DCU) fights global malware and reduces digital risk for people all over the world. Our international team of attorneys, investigators, data scientists, engineers, analysts, and business professionals based in 30 countries continuously work together to fight digital crime and help secure your data and applications in Azure. To do this, this team combines big data analytics, cutting-edge forensics, and legal strategies to protect your data and keep you in control of your personal information.

    The MSRC focuses on preventing harm, delivering protection from attacks, and building trust. This team has been engaged with security researchers for more than 20 years to protect customers and the broader partner ecosystem from being harmed from security vulnerabilities, and rapidly repulsing any attack against Microsoft Cloud.

    From the development of software to the thousands of security experts on staff, Microsoft uses a variety of controls to protect your data in Azure.

    To see some of the security teams that work on your behalf in action, watch our latest Microsoft Mechanics video. Start building your infrastructure and applications on the secure foundation Microsoft provides today for free.

    grafana (5.3.4)

    The content below is taken from the original ( grafana (5.3.4)), to continue reading please visit the site. Remember to respect the Author & Copyright.

    The tool for beautiful monitoring and metric analytics & dashboards for Graphite, InfluxDB & Prometheus & More

    Bitfusion Enables Network Attached GPUs on any Virtual Machine Environment with VMWare and Mellanox at SC18

    The content below is taken from the original ( Bitfusion Enables Network Attached GPUs on any Virtual Machine Environment with VMWare and Mellanox at SC18), to continue reading please visit the site. Remember to respect the Author & Copyright.

    Bitfusion, the Elastic Artificial Intelligence (AI) software company, announced a reference solution architecture combining Bitfusion’s FlexDirect… Read more at VMblog.com.

    Microsoft’s Plan to Automatically Email Office 365 Users Is A Rare Disconnect

    The content below is taken from the original ( Microsoft’s Plan to Automatically Email Office 365 Users Is A Rare Disconnect), to continue reading please visit the site. Remember to respect the Author & Copyright.


    Microsoft’s Office 365 is the crown jewel of the company’s software-as-a-service model; the platform has passed 155 million commercial customers and shows little sign of slowing down.

    So when the company announced that it would start automatically emailing Office 365 and Microsoft 365 users tips and tricks for getting the most out of their subscriptions, it was a record-scratch moment. Microsoft was planning to start emailing users in late November and had begun notifying admins of this practice.

    It may seem harmless to want to help your customers learn more about the product they are paying for, but that completely misses the issue. I’d be willing to bet that a large portion of the user base, especially for Microsoft 365 and commercial Office 365, has no idea what either product is; they simply know it as Office and Windows.

    That point aside, emailing your paying customers with what will likely be viewed as spam is a bold move, and the feedback has been harsh. So harsh, in fact, that the company has put a hold on rolling out this practice while it reviews the messages it has received.

    And rightly so: Microsoft does not fully know how each company uses each platform, and sending end users content about how to get the most out of Yammer, when a company may not be using the software, fills corporate inboxes with junk. For backend staff, this could cause unnecessary inquiries about software they don’t use, which only adds more overhead to the typically understaffed help desk.

    Further, the end user is not the person who decides when and where software is deployed inside a large company. That is an IT or management decision, and trying to entice end users to learn about software their organization does not currently use will create more headaches for IT.

    I do like that Microsoft is trying to educate users about features of its software to improve productivity; that is not the issue. The problem is that the program needs to be controlled by IT and should be opt-in, not opt-out as it is currently designed. If IT can preview and control the flow of tips that help users with features of products IT actually supports, that can be helpful, but blasting everyone with un-targeted content is noise that is not needed.

    This is a rare disconnect for Microsoft, as the company is typically extremely careful about touching Office 365, which has proven to be a pillar of its SaaS future. For now, the company has listened to feedback about emailing its paying customers; if it changes its mind again, we will keep you updated.

    The post Microsoft’s Plan to Automatically Email Office 365 Users Is A Rare Disconnect appeared first on Petri.

    Bubble lets you create web applications with no coding experience

    The content below is taken from the original ( Bubble lets you create web applications with no coding experience), to continue reading please visit the site. Remember to respect the Author & Copyright.

    Meet Bubble, a bootstrapped startup that has been building a powerful service that lets you create a web application even if you don’t know how to code. Many small and big companies rely on Bubble for their websites.

    I have to say I was quite skeptical when I first heard about Bubble. Many startups have already tried to make coding as easy as playing with Lego bricks. But it’s always frustratingly limited.

    Bubble is more powerful than your average website building service. It recreates all the major pillars of web programming in a visual interface.

    It starts with a design tab. You begin with a blank canvas and create web pages by dragging and dropping visual elements onto the screen. You can put elements wherever you want and resize maps, text boxes, images, and more. You can click on the preview button to see the development version of your site whenever you want.

    In the second tab, you can create the logic behind your site. It works a bit like Automator on the Mac: you add blocks to create a chronological sequence of actions, and you can set conditions within each block.

    In the third tab, you can interact with your database. For instance, you can create a sign-up page and store profile information in the database. At any time, you can import and export data.

    There are hundreds of plugins that let you accept payments with Stripe, embed a TypeForm, use Intercom for customer support via chat, use Mixpanel, etc. You can also use your Bubble data outside of Bubble. For instance, you can build an iPhone app that relies on your Bubble database.
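    To make the “use your Bubble data outside of Bubble” point concrete, here is a minimal sketch that reads records through Bubble’s REST Data API from Python. The app URL, the `product` data type, and the API token are hypothetical placeholders, and the Data API has to be enabled in the app’s API settings for a call like this to work.

    ```python
    import requests

    # Hypothetical Bubble app and data type; the Data API must be enabled
    # in the app's API settings, and the token generated in the Bubble editor.
    APP_URL = "https://yourapp.bubbleapps.io/api/1.1/obj/product"
    API_TOKEN = "YOUR_BUBBLE_API_TOKEN"

    # Fetch the first page of "product" things from the Bubble database.
    resp = requests.get(
        APP_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        params={"limit": 10},
    )
    resp.raise_for_status()

    for thing in resp.json()["response"]["results"]:
        print(thing.get("_id"), thing.get("name"))
    ```

    An external client such as an iPhone app could page through results the same way, which is what “relying on your Bubble database” from outside Bubble amounts to.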

    Many small companies started using Bubble, and it’s been working fine for some of them. For instance, Plato uses Bubble for all its back office. Qoins and Meetaway run on Bubble. Dividend Finance raised $365 million and uses Bubble.

    The startup takes care of hosting your application for you. As your application gets bigger and you resize your instance, you pay more.

    Even though the company never raised any money, it already generates $115,000 in monthly recurring revenue. Bubble is still a small startup, which can be scary for bigger customers. But the company wants to improve the product so that customers don’t see the limitations of Bubble. Now, the challenge is to grow faster than customers’ needs.

    Cloudflare’s privacy-focused 1.1.1.1 service is available on phones

    The content below is taken from the original ( Cloudflare’s privacy-focused 1.1.1.1 service is available on phones), to continue reading please visit the site. Remember to respect the Author & Copyright.

    Cloudflare launched its 1.1.1.1 service in April as a bid to improve privacy and performance for desktop users, and now it's making that technology available to mobile users. The company has released 1.1.1.1 apps for Android and iOS that switch the…
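    The apps route DNS lookups to Cloudflare’s 1.1.1.1 resolver over encrypted transports rather than plain-text DNS. As a rough illustration of the underlying service (not the apps’ actual implementation), the sketch below queries Cloudflare’s public DNS-over-HTTPS endpoint using its JSON format.

    ```python
    import requests

    # Resolve a name via Cloudflare's DNS-over-HTTPS endpoint (JSON format).
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": "example.com", "type": "A"},
        headers={"accept": "application/dns-json"},
    )
    resp.raise_for_status()

    # Print each answer record returned by the resolver.
    for answer in resp.json().get("Answer", []):
        print(answer["name"], answer["data"])
    ```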

    Microsoft Plans New Migration Tools to Move G Suite to Office 365

    The content below is taken from the original ( Microsoft Plans New Migration Tools to Move G Suite to Office 365), to continue reading please visit the site. Remember to respect the Author & Copyright.


    Moving G Suite to Exchange Online

    According to the Microsoft 365 Roadmap, Microsoft is on course to deliver tools to move email, calendar, and contacts from Google G Suite to Office 365 (in reality, to Exchange Online), with expected availability in the second quarter of 2019.

    It’s hardly a surprise that Microsoft should focus on what might be the only mainstream cloud competitor to Office 365. A case can be made that these are tools Microsoft should have had years ago, but perhaps the real reason Microsoft is making the move now is that migration from the Exchange on-premises installed base is tailing off (Office 365 is now at 155 million active users). If so, capacity might be available in Microsoft’s FastTrack organization to take on new challenges. After all, Office 365 needs more fuel to maintain its growth.

    IMAP Migration Already Available

    Microsoft already offers guidance to migrate mailboxes from G Suite to Exchange Online using IMAP and the Exchange Mailbox Migration service, and there are ISV products available to help too. What’s changing is that Microsoft is now going to migrate calendar and contact data, which probably means that they need to use Google’s REST-based APIs (for example, the calendar API) as the now-antique IMAP protocol only handles messages. Moving away from IMAP has a further advantage in that the throttling Google applies on data transfer might not be quite so evident with their own APIs. I expect Microsoft to continue using the mailbox migration service because it gives a convenient way to process and manage batches of user accounts moving to Exchange Online.
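    For a sense of what a REST-based path looks like compared to IMAP, here is a minimal sketch that lists events from the Google Calendar API. It is purely illustrative of the API mentioned above, not Microsoft’s migration tooling, and it assumes an OAuth 2.0 access token with calendar read scope has been obtained separately.

    ```python
    import requests

    # An OAuth 2.0 access token with calendar read scope, obtained elsewhere (assumption).
    ACCESS_TOKEN = "ya29.example-access-token"

    # List upcoming events from the user's primary calendar via the Calendar REST API.
    resp = requests.get(
        "https://www.googleapis.com/calendar/v3/calendars/primary/events",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={"maxResults": 10, "singleEvents": "true", "orderBy": "startTime"},
    )
    resp.raise_for_status()

    for event in resp.json().get("items", []):
        start = event.get("start", {}).get("dateTime") or event.get("start", {}).get("date")
        print(start, event.get("summary"))
    ```

    Contacts would follow a similar pattern against the People API, which is the kind of per-object access IMAP simply cannot provide.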

    All of this is idle speculation on my part, and what’s really happening won’t become clear until Microsoft shows off what they are doing. I asked Greg Taylor, Director of Exchange Marketing at Microsoft, about the initiative. He said:

    “The primary reason we’re doing this is to improve the end to end security of customer data. We want to make sure the customer’s data itself is secured as it moves, that it doesn’t make an unexpected staging stop along the way, and the authentication used to get to it is strong and trustworthy.”

    Reading between the lines, it seems like Microsoft wants to have full charge over the migration of user data from G Suite to Office 365 so that they can assure customers that security is maintained at all times. That perspective makes sense in the context of a world where the integrity of personal data is increasingly regulated (like GDPR).

    Reversing the Trend

    In May 2013, Boston was one of the first major losses Microsoft suffered in the cloud wars when the city opted for Google Apps over what was then a very underdeveloped Office 365. In January 2014, Google celebrated the move of 76,000 employees from on-premises Exchange to its platform, a post that remains one of the headline stories for G Suite in Government.

    Since the Boston loss, I haven’t heard of many other large-scale wins for G Suite over Office 365 outside the education sector, where Google has always been very strong. The loss stung Microsoft, and since then they’ve pumped out a steady stream of new features, capabilities, and applications to beef up the Office 365 suite. The base workloads of Exchange and SharePoint are both stronger; Teams, Planner, and advanced applications like Workplace Analytics are available; and the Microsoft online applications are a world away from where they were in 2013.

    It seems the momentum is with Office 365, so it’s unsurprising that Microsoft should now start to crank up the pressure by formally offering comprehensive migration tools for G Suite. One wonders how Google will respond.

    The post Microsoft Plans New Migration Tools to Move G Suite to Office 365 appeared first on Petri.

    reportgenerator.portable (4.0.3.0)

    The content below is taken from the original ( reportgenerator.portable (4.0.3.0)), to continue reading please visit the site. Remember to respect the Author & Copyright.

    ReportGenerator converts XML reports generated by OpenCover, dotCover, Visual Studio, NCover, Cobertura, or JaCoCo into human-readable reports in various formats. The reports not only show the coverage quota but also include the source code and visualize which lines have been covered.