A quick and easy way to set up an end-to-end IoT solution on Google Cloud Platform

The content below is taken from the original ( A quick and easy way to set up an end-to-end IoT solution on Google Cloud Platform), to continue reading please visit the site. Remember to respect the Author & Copyright.

In today’s world of dispersed and perpetually connected devices, many business operations involve receiving and processing large volumes of messages both in batch (typically bulk operations on historical data) and in real-time. In these contexts, the real differentiator is the back end, which must efficiently and reliably handle every request, allowing end users to not only access the data, but also to analyze it and extract useful insights.

Google Cloud Platform (GCP) provides a holistic services ecosystem suitable for these types of workloads. In this article, we will present an example of a scalable, serverless Internet-of-Things (IoT) environment that runs on GCP to ingest, process, and analyze your IoT messages at scale.

Our simulated scenario, in a nutshell

In this post, we’ll simulate a collection of sensors, distributed across the globe, measuring city temperatures. Data, once collected, will be accessible by users who will be able to:

  • Monitor city temperatures in real time

  • Perform analysis to extract insights (e.g. what is the hottest city in the world?)

simulated_scenario.png

In the main user interface of our sample application, the user starts the simulation by clicking the “Start” button. The system then generates all 190 simulated devices (with corresponding city locations), which immediately start sensing and reporting data. Temperatures of different cities can then be monitored in real time directly in the UI by clicking the “Update” button and selecting the preferred marker. Once completed, the simulation can be turned off by clicking the “Stop” button.

Data_streamed.png
Data is also streamed, in parallel, into a data warehouse solution for additional historical or batch analysis.

Proposed architecture

Proposed_architecture.png

The simulation, as explained above, starts from (1) App Engine, a fully managed PaaS for building scalable applications. In the App Engine frontend, the user triggers the generation of (simulated) devices and, through API calls, the application will generate several instances on (2) Google Compute Engine, the IaaS solution for managing Virtual Machines (VMs). The following steps are executed at the startup of each VM:

  • A public-and-private key pair is generated for each simulated device

  • An instance of a (3) Java application that performs the following actions is launched for each simulated device:

    • registration of the device in Cloud IoT Core, a fully managed service designed to easily and securely connect, manage, and ingest data from globally dispersed devices

    • generation of a series of temperatures for a specified city

    • encapsulation of generated data into MQTT messages to make them available to Cloud IoT Core (a minimal sketch of this flow follows the list)
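
The original application performs these steps in Java, but the following minimal Python sketch illustrates the same flow for a single simulated device: sign a JWT with the device’s private key, connect to Cloud IoT Core’s MQTT bridge, and publish one telemetry message. All project, registry, device, and key-file names here are placeholders, not values from the original repository.

import datetime
import json
import ssl

import jwt                      # PyJWT, signs the per-device JWT
import paho.mqtt.client as mqtt

PROJECT_ID = "my-iot-project"          # placeholder values
CLOUD_REGION = "us-central1"
REGISTRY_ID = "temperature-devices"
DEVICE_ID = "city-rome"
PRIVATE_KEY_FILE = "rsa_private.pem"   # the key generated at VM startup

def create_jwt():
    """Build a short-lived JWT signed with the device's RSA private key."""
    now = datetime.datetime.utcnow()
    claims = {"iat": now, "exp": now + datetime.timedelta(minutes=60), "aud": PROJECT_ID}
    with open(PRIVATE_KEY_FILE, "r") as f:
        return jwt.encode(claims, f.read(), algorithm="RS256")

# Cloud IoT Core expects this exact client ID format and ignores the username;
# the JWT is passed as the MQTT password (paho-mqtt 1.x style constructor).
client = mqtt.Client(client_id=f"projects/{PROJECT_ID}/locations/{CLOUD_REGION}/"
                               f"registries/{REGISTRY_ID}/devices/{DEVICE_ID}")
client.username_pw_set(username="unused", password=create_jwt())
client.tls_set(tls_version=ssl.PROTOCOL_TLSv1_2)
client.connect("mqtt.googleapis.com", 8883)
client.loop_start()

# Publish one simulated temperature reading to the device's telemetry topic.
payload = json.dumps({"city": "Rome", "temperature": 24.3})
info = client.publish(f"/devices/{DEVICE_ID}/events", payload, qos=1)
info.wait_for_publish()

client.loop_stop()
client.disconnect()

The device must already be registered in the Cloud IoT Core registry with a matching public key; that registration step is handled separately by the simulator.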

Collected messages containing temperature values will then be published to a topic on (4) Cloud Pub/Sub, an enterprise message-oriented middleware. From there, messages will be read in streaming mode by (5) Cloud Dataflow, a simplified stream and batch data processing solution, and then ingested into the following destinations (a simplified pipeline sketch follows this list):

  • (7) Cloud Datastore, a highly scalable, fully managed NoSQL database

  • (6) BigQuery, a fast, highly scalable, cost-effective, and fully managed cloud data warehouse for analytics
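
To make the streaming step concrete, here is a stripped-down Apache Beam pipeline sketch in Python (the actual solution is written in Java) that reads the temperature messages from a Pub/Sub subscription and appends them to a BigQuery table. The subscription, table, and schema names are assumptions, and the Cloud Datastore branch is omitted for brevity.

import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Hypothetical names -- replace with your own project, subscription, and table.
SUBSCRIPTION = "projects/my-iot-project/subscriptions/temperature-sub"
TABLE = "my-iot-project:iot_demo.temperatures"

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (p
     | "ReadFromPubSub" >> beam.io.ReadFromPubSub(subscription=SUBSCRIPTION)
     | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
     | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
           TABLE,
           schema="city:STRING,temperature:FLOAT",
           write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))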

Cloud Datastore will save the data to be displayed directly in the UI of the App Engine application, while BigQuery will act as a data warehouse that enables more in-depth analysis.
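
With the data in BigQuery, the example question posed earlier — what is the hottest city in the world? — reduces to a simple query. Below is a sketch using the google-cloud-bigquery Python client against the hypothetical iot_demo.temperatures table used above:

from google.cloud import bigquery

client = bigquery.Client()

# Average the readings per city and return the hottest one.
query = """
    SELECT city, AVG(temperature) AS avg_temp
    FROM `my-iot-project.iot_demo.temperatures`
    GROUP BY city
    ORDER BY avg_temp DESC
    LIMIT 1
"""

for row in client.query(query).result():  # waits for the job, then iterates rows
    print(f"Hottest city: {row.city} ({row.avg_temp:.1f} °C)")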

All the logs generated by all components will then be ingested and monitored via (8) Stackdriver, a monitoring and management tool for services, containers, applications, and infrastructure. Permissions and access will be managed via (9) Cloud IAM, a fine-grained access control and visibility tool for centrally managing cloud resources.

Deploy the application on your environment

The application, written mostly in Java, is available in this GitHub repository. Detailed instructions on how to deploy the solution in a dedicated Google Cloud Platform project can be found directly in the repository’s Readme.

Please note that in deploying this solution you may incur some charges.

Next steps

This application is just an example of the multitude of solutions you can develop and deploy on Google Cloud Platform leveraging the vast number of available services. A possible evolution could be the design of a simple Google Data Studio dashboard to visualize some information and temperature trends, or the implementation of a machine learning model that predicts the temperatures.

We’re eager to learn what new IoT features you will implement in your own forks!

License caps and CCTV among ride-hailing rule changes urged in report to UK gov’t

The content below is taken from the original ( License caps and CCTV among ride-hailing rule changes urged in report to UK gov’t), to continue reading please visit the site. Remember to respect the Author & Copyright.

Uber and similar services could be facing caps on the number of licenses for vehicles that can operate ride-hailing services in London and other UK cities under rule changes being recommended to the government.

CCTV being universally installed inside licensed taxis and private hire vehicles for safety purposes is another suggestion.

A working group in the UK’s Department for Transport has published a report urging a number of changes intended to modernize the rules around taxis and private hire vehicles to take account of app-based technology changes which have piled extra pressure on already long outdated rules.

In addition to suggesting that local licensing authorities should have the power to cap vehicle licenses, the report includes a number of other recommendations that could also have an impact on ride-hailing businesses, such as calling for drivers to be able to speak and write English to a standard that would include being able to deal with “emergency and other challenging situations”; and suggesting CCTV should be universally installed in both taxis and PHVs (“subject to strict data protection measures”) — to mitigate safety concerns for passengers and drivers.

The report supports maintaining the current two-tier system, so keeping a distinction between ‘plying for hire’ and ‘prebooking’, although it notes that technological advancement has “blurred the distinction between the two trades” — and so suggests the introduction of a statutory definition of both.

“This definition should include reviewing the use of technology and vehicle ‘clustering’ as well as ensuring taxis retain the sole right to be hailed on streets or at ranks. Government should convene a panel of regulatory experts to explore and draft the definition,” it suggests.

Legislation for national minimum standards for taxi and PHV licensing — for drivers, vehicles and operators — is another recommendation, though with licensing authorities left free to set additional higher standards if they wish.

The report, which has 34 recommendations in all, also floats the idea that how companies treat drivers, in terms of pay and working conditions, should be taken into account by licensing authorities when they are determining whether or not to grant a license.

The issues of pay and exploitation by gig economy platform operators have risen up the political agenda in the UK in recent years — following criticism over safety and a number of legal challenges related to employment rights, such as a 2016 employment tribunal ruling against Uber. (Its first appeal also failed.)

“The low pay and exploitation of some, but not all, drivers is a source of concern,” the report notes. “Licensing authorities should take into account any evidence of a person or business flouting employment law, and with it the integrity of the National Living Wage, as part of their test of whether that person or business is ‘fit and proper’ to be a PHV or taxi operator.”

UK MP Frank Field, who this summer published a critical report on working conditions for Deliveroo riders, said the recommendations in the working group’s report put Uber “on notice”.

“In my view, operators like Uber will need to initiate major improvements in their drivers’ pay and conditions if they are to be deemed ‘fit and proper’,” he said in a response statement. “The company has been put on notice by this report.”

Though the report’s recommendations on this front do not go far enough for some. Also responding in a statement, the chair of the IWGB’s UPHD branch, James Farrar — who was one of the former Uber drivers who successfully challenged the company at an employment tribunal — criticized the lack of specific minimum wage guarantees for drivers.

“While the report has some good recommendations, it fails to deal with the most pressing issue for minicab drivers — the chronic violation of minimum wage laws by private hire companies such as Uber,” he said. “By proposing to give local authorities the power to cap vehicle licenses rather than driver licenses, the recommendations risk giving more power to large fleet owners like Addison Lee, while putting vulnerable workers in an even more precarious position.

“Just days after the New York City Council took concrete action to guarantee the minimum wage, this report falls short of what’s needed to tackle the ongoing abuses of companies operating in the so-called ‘gig economy’.”

We’ve reached out to Uber for comment on the report.

Field added that he would be pushing for additional debate in parliament on the issues raised and to “encourage the government to safeguard drivers’ living standards by putting this report into action”.

“In the meantime, individual licensing authorities have an important part to play by following New York’s lead in using their licensing policies to guarantee living wage rates for drivers,” he also said.

London’s transport regulator, TfL, has been lobbying for licensing authorities to be given the power to cap the number of private hire vehicles in circulation for several years, as the popularity of ride-hailing has led to a spike in for-hire car numbers on London’s streets, making it more difficult for TfL to manage knock-on issues such as congestion and air quality (which are policy priorities for London’s current mayor).

And while TfL can’t itself (yet) impose an overall cap on PHV numbers it has proposed and enacted a number of regulatory tweaks, such as English language proficiency tests for drivers — changes that companies such as Uber have typically sought to push back against.

Earlier this year TfL also published a policy statement, setting out a safety-first approach to regulating ride-sharing. And, most famously, it withdrew Uber’s licence to operate in 2017.

Though the company has since successfully appealed, after making a number of changes to how it operates in the UK, gaining a provisional 15-month license to operate in London this summer. But clearly any return to Uber’s ‘bad old days’ would be given very short shrift.

In the UK, primary legislation would be required to enable local licensing authorities to cap PHV licenses themselves. But the government is now being urged to do so by the DfT’s own working group, ramping up the pressure for it to act — though with the caveat that any such local caps should be subject to “a public interest test” to prove need.

“This can help authorities to solve challenges around congestion, air quality and parking and ensure appropriate provision of taxi and private hire services for passengers, while maintaining drivers’ working conditions,” the report suggests.

Elsewhere, the report recommends additional changes to rules to improve access to wheelchair accessible vehicles; beef up enforcement against those that flout rules; as well as to support disability awareness training for drivers.

The report also calls on the government to urgently review the evidence and case for restricting the number of hours that taxi and PHV drivers can drive on the same safety grounds that restrict hours for bus and lorry drivers.

It also suggests a mandatory national database of all licensed taxi and PHV drivers, vehicles and operators, be established — to support stronger enforcement, generally, across all its recommended rule tweaks.

It’s not yet clear how the government will respond to the report, nor whether it will end up taking forward all or only some of the recommendations.

Although it’s under increased pressure to act to update regulations in this area, with the working group critically flagging ministers’ failure to act following a Law Commission review the government commissioned back in 2011, writing: “It is deeply regrettable that the Government has not yet responded to the report and draft bill which the Commission subsequently published in 2014. Had the government acted sooner the concerns that led to the formation of this Group may have been avoided.”

4 hidden cloud computing costs that will get you fired

The content below is taken from the original ( 4 hidden cloud computing costs that will get you fired), to continue reading please visit the site. Remember to respect the Author & Copyright.

John just finished the first wave of cloud workload migrations for his company. With a solid 500 applications and related data sets migrated to a public cloud, he now has a good understanding of what the costs are after these applications have moved into production.

However, where John had budgeted $1 million a month for ops costs, all in, the company is now getting dinged for $1.25 million. Where does that extra $250,000 go each month? And, more concerning, how were those costs missed in the original cost estimates? Most important, where will John work once the CFO and CEO get wind of the overruns?

In my consulting work, I’m seeing the same four missed costs over and over.


Realizing the Internet of Value Through RippleNet

The content below is taken from the original ( Realizing the Internet of Value Through RippleNet), to continue reading please visit the site. Remember to respect the Author & Copyright.

People and businesses increasingly expect everything in their lives to move at the speed of the web. Conditioned by smartphones and apps, where nearly anything is attainable at the press of a button, these customers are often left wanting when it comes to their experiences with money and financial service providers.

This is because an aging payments infrastructure designed more than four decades ago leads to expensive transactions that can take days to settle with little visibility or certainty as to their ultimate success. This experience runs contrary to customer expectations for an Internet of Value, where money moves as quickly and seamlessly as information.

Delays, costs and opacity are especially true for international transactions. Not only are payments limited by inherent infrastructure challenges, but they must navigate a patchwork quilt of individual country and provider networks stitched together for cross-border transactions. This network of networks effectively stops and starts a transaction every time it encounters a new country, currency or provider network – adding even more costs and delays.

As a global payments network, RippleNet creates a modern payments experience operating on standardized rules and processes for real time settlement, more affordable costs, and end-to-end transaction visibility. It allows banks to better compete with FinTechs that are siphoning off customers disappointed by traditional transaction banking services.

RippleNet brings together a robust ecosystem of players for the purposes of powering the Internet of Value. This network is generally made up of banks and payment providers that source liquidity and process payments, as well as corporates and FinTechs that use RippleNet to send payments.

For network members, RippleNet offers:

Access: Today, banks and providers overcome a fragmented global payments system by building multiple, custom transaction relationships with individual networks. By joining RippleNet’s single worldwide network of institutions, organizations gain a single point of access to a standardized, decentralized infrastructure for consistency across all global connections.

Certainty: Legacy international payments cannot provide clarity around transaction timing or costs, and many transactions ultimately end in failure. RippleNet’s atomic pass-fail processing ensures greater certainty in delivery, and its bi-directional messaging capability provides unprecedented end-to-end transaction visibility for fees, delivery time and status.

Speed: Disparate networks and rules create friction and bottlenecks that slow down a transaction. RippleNet’s pathfinding capabilities cut through the clutter by identifying optimal routes for transactions that then settle instantly. With RippleNet, banks and providers can reduce transaction times from days to mere seconds.

Savings: Existing payment networks have high processing and liquidity provisioning costs that result in fees as high as $25 or $35 per transaction. RippleNet’s standardized rules and network-wide connectivity significantly lower processing costs. RippleNet also lowers liquidity provisioning costs, or can eliminate the need for expensive nostro accounts altogether through the use of its digital asset XRP for on-demand liquidity. The end result is a dramatically lower cost of transactions for providers and their customers.

RippleNet solves for the inefficiencies of the world’s payment systems through a single, global network. With RippleNet, banks and payment providers can realize the promise of the Internet of Value, meeting customer expectations for a modern, seamless global payments experience while lowering costs and opening new lines of revenue.

For more information on the technology behind RippleNet, or to learn how to join, contact us.

The post Realizing the Internet of Value Through RippleNet appeared first on Ripple.

Prep for Cisco, CompTIA, and More IT Certifications With This $39 Bundle

The content below is taken from the original ( Prep for Cisco, CompTIA, and More IT Certifications With This $39 Bundle), to continue reading please visit the site. Remember to respect the Author & Copyright.

Large companies need to maintain a robust IT infrastructure if they want to thrive in the digital age, and they can’t accomplish this without certified IT professionals. Luckily, traditional schooling isn’t necessary to land an IT job; IT professionals simply need to pass their certification exams, and they can do so thanks to the wealth of training courses available. One such resource is this Ultimate IT Certification Training Bundle, which is currently on sale for $39.


Deep dive into Azure Test Plans

The content below is taken from the original ( Deep dive into Azure Test Plans), to continue reading please visit the site. Remember to respect the Author & Copyright.

Azure Test Plans, a service launched with Azure DevOps earlier this month, provides a browser-based test management solution for exploratory, planned manual, and user acceptance testing. Azure Test Plans also provides a browser extension for exploratory testing and gathering feedback from stakeholders.

Manual and exploratory testing continue to be important techniques for evaluating the quality of a product or service, alongside the DevOps emphasis on automated testing. In modern software development processes, everybody on the team contributes to or owns quality – including developers, testers, managers, product owners, user experience advocates, and more. Azure Test Plans addresses all these needs. Let’s take a closer look.

Note: For automated testing as part of your CI/CD workflow, consider leveraging Azure Pipelines. It provides mechanisms for continuous build, test, and deployment to any platform and cloud.

Testing is integral to DevOps and Agile teams

A common practice is to base tests on user stories, features, or scenarios that are managed on a Kanban board as in Azure Boards. With Azure Test Plans, a team can leverage manual testing right from within their Kanban board. This provides end-to-end traceability because tests and defects are automatically linked to the requirements and builds being tested, which also helps you track the quality of the requirements.

Add, view, and interact with test cases directly from the cards on the Kanban board, and progressively monitor status directly from the card. Developers and testers can use this capability to maximize quality within their teams.

Testing in Kanban board

Quality is a team sport through exploratory testing

Exploratory testing is an approach to software testing that is described as simultaneous learning, test design, and test execution. It complements planned testing by being completely unscripted, yet driven by themes or tours. Quality becomes a shared responsibility, as exploratory testing can be leveraged by all team members including developers, testers, managers, product owners, user experience advocates, and more. Watch a short video of how this works.

The Test & Feedback extension enables exploratory testing techniques in Azure Test Plans. It allows you to spend more time finding issues, and less time filing them. Using the extension is simple:

  • Capture your findings along with rich diagnostic data. This includes comments, screenshots with annotations, and audio/video recordings that describe your findings and highlight issues. In the background, the extension captures additional information such as user actions via image action log, page load data, and system information about the browser, operating system, and more that later help in debugging or reproducing the issue.
  • Create work items such as bugs, tasks, and test cases from within the extension. The captured information automatically becomes part of the filed work item and helps with end-to-end traceability.
  • Collaborate with your team by sharing your findings. Export your session report or connect to Azure Test Plans for a fully integrated experience.

Exploratory testing session in progress

The extension also helps in soliciting feedback from stakeholders who may reside outside the development team, such as marketing and sales teams. Feedback can be requested from these stakeholders on user stories and features. Stakeholders can then respond to feedback requests – not just to rate and send comments, but also to file bugs and tasks directly. Read more in our documentation.

Feedback requests on a Stakeholder

Planned manual testing for larger teams

Testing from within the Kanban board suffices when your testing needs are simple. However, for larger teams with more complex needs such as creating and tracking all testing efforts within a test plan scope, testing across multiple configurations, distributing the tests across multiple testers, tracking the progress against the test plan, etc., you need a full-scale test management solution and Azure Test Plans fulfils this need. 

Planned manual testing

Planned manual testing in Azure Test Plans lets you organize tests into test plans and test suites. Test suites can be dynamic (requirements-based-suites and query-based-suites) to help you understand the quality of associated requirements under development, or static to help you cover regression tests. Tests can be authored using an Excel-like grid view or other means available. Testers execute tests assigned to them using a runner to test your app(s). The runner can execute in a browser or as a client on your desktop, enabling you to test on any platform or test any app. During execution, rich diagnostic data is collected to help with debugging or reproducing the issue later. Bugs filed during the process automatically include the captured diagnostic data.

Test execution with rich data capture

To track overall progress and outcomes, leverage lightweight charts, which can be pinned to your dashboard for easy monitoring. Watch a video showing planned manual testing in Azure Test Plans.

Charts to help track progress and outcomes

We hope this post gives you a quick peek into what Azure Test Plans can do for you – we recommend trying it out for free to learn more and to maximize quality for your software. Happy exploring and testing!

Further information

Want to learn more? See our documented best practices, videos, and other learning materials for Azure Test Plans.

Cloudflare’s new ‘one-click’ DNSSEC setup will make it far more difficult to spoof websites

The content below is taken from the original ( Cloudflare’s new ‘one-click’ DNSSEC setup will make it far more difficult to spoof websites), to continue reading please visit the site. Remember to respect the Author & Copyright.

Bad news first: the internet has been broken for a while. The good news is that Cloudflare thinks it can make it slightly less broken.

With “the click of one button,” the networking giant said Tuesday, its users can now switch on DNSSEC in their dashboard. In doing so, Cloudflare hopes it removes a major pain-point in adopting the web security standard, which many haven’t set up — either because it’s so complicated and arduous, or too expensive.

It’s part of a push by the San Francisco-based networking giant to try to make the pipes of the internet more secure — even from the things you can’t see.

For years, you could open up a website and take its instant availability for granted. DNS, which translates web addresses into computer-readable IP addresses, has been plagued with vulnerabilities, making it easy to hijack any step of the process to surreptitiously send users to fake or malicious sites.

Take two incidents in the past year — where traffic to and from Amazon and, separately, Google, Facebook, Apple, and Microsoft was hijacked and rerouted for anywhere between minutes and hours at a time. Terabytes of internet traffic were siphoned through Russia for reasons that are still unknown. Any non-encrypted traffic was readable, at least in theory, by the Russian government. Suspicious? It was.

That’s where a security-focused DNS evolution — DNSSEC — is meant to help. It’s like DNS, but it protects requests end-to-end, from computer or mobile device to the web server of the site you’re trying to visit, by cryptographically signing the data so that it’s far tougher — if not impossible — to spoof.
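
To make that concrete, here is a small Python sketch (using the dnspython 2.x library, which is not part of Cloudflare’s announcement) that queries a validating resolver with the DNSSEC-OK bit set and checks whether the response carries the Authenticated Data (AD) flag, i.e. whether the chain of signatures validated:

import dns.flags
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.nameservers = ["1.1.1.1"]            # any DNSSEC-validating resolver
resolver.use_edns(0, dns.flags.DO, 1232)       # DNSSEC OK: ask for signed data
resolver.flags = dns.flags.RD | dns.flags.AD   # request authenticated data

answer = resolver.resolve("cloudflare.com", "A")
validated = bool(answer.response.flags & dns.flags.AD)
print("DNSSEC validated:", validated)

If the domain is unsigned, the AD flag simply won’t be set; if its signatures fail validation, a strict resolver will refuse to answer at all.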

But DNSSEC adoption is woefully low. Just three percent of websites in the Fortune 1000 sign their primary domains, largely because the domain owners can’t be bothered, but also because their DNS operators either don’t support it or charge exorbitant rates for the privilege.

Cloudflare now wants to do the hard work in setting those crucial DS records, a necessary component in setting up DNSSEC, for customers on a supported registrar. Traditionally, setting a DS record has been notoriously difficult, often because the registrars themselves can be problematic.

As of launch, Gandi will be the first registrar to support one-click DNSSEC setup, with more expected to follow.

The more registrars that support the move, the fewer barriers there are to a safer internet, the company argues. Right now, the company says, users should consider switching away from providers that don’t support DNSSEC and “let them know that was the reason for the switch.”

Just as HTTPS was slow to be adopted over the years — but finally took off in 2015 — there’s hope that DNSSEC can follow the same path. The more companies that adopt the technology, the less vulnerable end users will be to DNS attacks on the internet.

And besides the hackers, who doesn’t want that?

Scale Computing, APC partner to offer micro data center in a box

The content below is taken from the original ( Scale Computing, APC partner to offer micro data center in a box), to continue reading please visit the site. Remember to respect the Author & Copyright.

Hyperconverged infrastructure (HCI) vendor Scale Computing and power management specialist APC (formerly American Power Conversion, now owned by Schneider Electric) have partnered to offer a range of turnkey micro data centers for the North American market.

The platform combines Scale’s hyperconverged software, HC3 HyperCore, running on top of its own hardware and built on APC’s ready-to-deploy racks for a micro data center. The platform will be sold as a single SKU.

The pre-packaged platform is entirely turnkey, with automated virtualization, power management resources, and built-in redundancy. This makes it well-suited for remote edge locations, such as cell phone towers, where staff is not immediately available to maintain the equipment.


Tandem CEO will tell you why building a bank is hard at Disrupt Berlin

The content below is taken from the original ( Tandem CEO will tell you why building a bank is hard at Disrupt Berlin), to continue reading please visit the site. Remember to respect the Author & Copyright.

Challenger banks, neobanks or digital-only banks… Whatever we choose to call them, Europe — and the U.K. in particular — has more than its fair share of bank upstarts battling it out for a slice of the growing fintech pie. One of those is Tandem, co-founded by financial technology veteran Ricky Knox, who we’re excited to announce will join us at TechCrunch Disrupt Berlin.

Tandem — or the so-called “Good Bank” — has been on quite a journey this year. Most recently the bank launched a competitive fixed savings product, pitting it against a whole host of incumbent and challenger banks. It followed the launch of the Tandem credit card in February, which competes well on cash-back and FX rates when spending abroad.

Both products are part of a wider strategy where, like many other consumer-facing fintechs, Tandem wants to become your financial control centre, connecting you to and offering various financial services. These are either products of its own or services offered through partnerships with other fintech startups and more established providers.

At the heart of this is the Tandem mobile app, which acts as a Personal Finance Manager (PFM), including letting you aggregate your non-Tandem bank account data from other bank accounts or credit cards you might have, in addition to managing any Tandem products you’ve taken out. The company recently acquired fintech startup Pariti to beef up its account aggregation features.

However, what makes Tandem’s recent progress all the more interesting is that it comes after a definite bump in the road last year. This saw the company temporarily lose its banking license and be forced to make lay-offs following the partial collapse of a £35 million investment round from department store House of Fraser, due to restrictions on capital leaving China. The remedy was further investment from existing backers and the bold move to acquire Harrods Bank, the banking arm of the U.K.’s most famous luxury department store.

As you can see, there is plenty to talk about. And some. So, why not grab your ticket to Disrupt Berlin and listen to the Tandem story? The conference will take place on November 29-30.

In addition to fireside chats and panels, like this one, new startups will participate in the Startup Battlefield Europe to win the highly coveted Battlefield cup.


Ricky Knox

CEO & Co-Founder, Tandem

Ricky is a serial investor and entrepreneur. He has built five technology disruptors in fintech and telecoms, each of which also does a bit of good for the world.

Before Tandem he founded Azimo and Small World, two remittance businesses, and is managing partner of Hexagon Partners, a private equity firm. He built Tandem to be a digital bank that helps improve customers’ lives with money.

Ricky has a first class degree from Bristol University and an MBA from INSEAD.

Titanium now available with 128GB storage

The content below is taken from the original ( Titanium now available with 128GB storage), to continue reading please visit the site. Remember to respect the Author & Copyright.

Now that’s a solid state offer, if ever I saw one! From now until 8th October, 2018, Elesar Ltd‘s Titanium motherboard is available to buy with a jolly useful optional extra – a SanDisk 128GB solid state drive (SSD), pre-loaded with a standard RISC OS 5 disc image. The company doesn’t sell the motherboard as […]

When To Use A Multi-Cloud Strategy

The content below is taken from the original ( When To Use A Multi-Cloud Strategy), to continue reading please visit the site. Remember to respect the Author & Copyright.

There are both advantages and disadvantages of multi-cloud environments, and knowing when to use a multi-cloud strategy could depend on how minimizing your dependency on a single provider could affect costs, performance, and security. Before discussing these advantages and disadvantages in greater detail, it is best to first clarify what a multi-cloud environment is.

What is a multi-cloud environment?

For the purposes of this article, a multi-cloud environment is one in which you use two or more public cloud services from two or more cloud service providers. For example, you might use Azure in the US and Alibaba in Asia to avoid latency issues. Alternatively, you may find Google better for development and testing, and AWS preferable for running your production environment.

According to the 2016 IDC CloudView Survey, more than half of businesses using AWS also use another public cloud service provider (for information about mixing private and public clouds, see our blog “When to Use a Hybrid Cloud Strategy”). The market research company Research and Markets predicts businesses using multi-cloud environments will increase by approximately 30% per year.

The advantages of a multi-cloud environment

As well as choosing a multi-cloud strategy to avoid latency, and for development and testing in an isolated environment, businesses distribute their resources between cloud service providers for a number of reasons:

Cherry-pick services

No single cloud service provider has the best tools for everything and, by using multiple cloud service providers, you can cherry-pick the best services from each. For example, if you build apps using the Watson AI platform that need to integrate with Microsoft products, you would use both IBM and Azure.

Improved disaster recovery

Similarly, no single cloud service provider has avoided a major outage. By using two or more providers, your infrastructure becomes more resilient and you could, if you wish, keep replicas of your applications in two separate clouds so that, if one cloud service provider goes down, you don’t.

Potential negotiating power

Competition between major cloud service providers means that, if you are a high-volume customer (a million dollars or more per year), you may be in a position to negotiate lower prices. Distributing your business between providers can give you some leverage in your negotiations.

Less single-vendor dependency

Depending on one provider for any product or service can be risky. Not only might they suffer an outage, but their service levels could decline or—unlikely as it may seem—their prices could go up. By not putting all your eggs in one basket, you are minimizing the risk of your own business suffering.

The disadvantages of a multi-cloud environment

Although many businesses have decided that now is the time to be using a multi-cloud strategy, some are looking at the disadvantages and have concerns about how they might overcome them.

Managing costs and loss of discounts

If you are currently using a single cloud service provider and having difficulty managing costs, imagine how much trouble you may have with two or three providers. Certainly, by diluting your cloud deployments, you will also be diluting the discounts you are entitled to.

Performance challenges

Working with multiple cloud service providers also creates challenges with regard to having developers with the right skill sets to maximize the opportunities. Unless you have the right people in the right place, the resources you have deployed in the cloud may not work as well as they might do.

Increased security risk

Moving to a public cloud gives you less control over your data. Moving to two public clouds gives you even less, plus gives your applications a larger attack surface. There are tools to help secure multi-cloud environments, but you generally have to exercise a greater level of diligence.

Multi-cloud management

Managing costs is not the only thing you have to worry about in a multi-cloud environment. Managing all your assets can be very complicated. Fortunately, there are some very good cloud management platforms with multi-cloud support to help you overcome this challenge. Management of multiple cloud providers might require a change in your existing processes and skill sets.

When to use a multi-cloud strategy

Weighing up the advantages and disadvantages of multi-cloud environments, there are compelling cases for using a multi-cloud strategy, but when? At CloudHealth Technologies, we would suggest:

  • When there are operational advantages due to a wider choice of services.
  • When unscheduled downtime would severely disrupt your business.
  • When you have the right people in place to take advantage of the opportunities.
  • When you have a solution in place to help manage costs, performance and security.
  • When you have a global group of developers and you want to push resource efficiencies to them.

Want to learn more? Download the ebook 10 Frequently Asked Questions About Multi-cloud.

The post When To Use A Multi-Cloud Strategy appeared first on Cloud Academy.

Best GitHub Alternatives for hosting your open source project

The content below is taken from the original ( Best GitHub Alternatives for hosting your open source project), to continue reading please visit the site. Remember to respect the Author & Copyright.

GitHub is the most popular web-based Git repository hosting service, used by developers to host their code. The website provides a platform to easily collaborate with other programmers on a project. GitHub is one of the best available […]

This post Best GitHub Alternatives for hosting your open source project is from TheWindowsClub.com.

Vtrus launches drones to inspect and protect your warehouses and factories

The content below is taken from the original ( Vtrus launches drones to inspect and protect your warehouses and factories), to continue reading please visit the site. Remember to respect the Author & Copyright.

Knowing what’s going on in your warehouses and facilities is of course critical to many industries, but regular inspections take time, money, and personnel. Why not use drones? Vtrus uses computer vision to let a compact drone not just safely navigate indoor environments but create detailed 3D maps of them for inspectors and workers to consult, autonomously and in real time.

Vtrus showed off its hardware platform — currently a prototype — and its proprietary SLAM (simultaneous location and mapping) software at TechCrunch Disrupt SF as a Startup Battlefield Wildcard company.

There are already some drone-based services for the likes of security and exterior imaging, but Vtrus CTO Jonathan Lenoff told me that those are only practical because they operate with a large margin for error. If you’re searching for open doors or intruders beyond the fence, it doesn’t matter if you’re at 25 feet up or 26. But inside a warehouse or production line every inch counts and imaging has to be carried out at a much finer scale.

As a result, dangerous and tedious inspections, such as checking the wiring on lighting or looking for rust under an elevated walkway, have to be done by people. Vtrus wouldn’t put those people out of work, but it might take them out of danger.


The drone uses depth-sensing both to build the map and to navigate and avoid obstacles.

The drone, called the ABI Zero for now, is equipped with a suite of sensors, from ordinary RGB cameras to 360-degree ones and a structured-light depth sensor. As soon as it takes off, it begins mapping its environment in great detail: it takes in 300,000 depth points 30 times per second, combining that with its other cameras to produce a detailed map of its surroundings.

It uses this information to get around, of course, but the data is also streamed over wi-fi in real time to the base station and Vtrus’s own cloud service, through which operators and inspectors can access it.

The SLAM technique they use was developed in-house; CEO Renato Moreno built and sold a company (to Facebook/Oculus) using some of the principles, but improvements to imaging and processing power have made it possible to do it faster and in greater detail than before. Not to mention on a drone that’s flying around an indoor space full of people and valuable inventory.

On a full charge, ABI can fly for about 10 minutes. That doesn’t sound very impressive, but the important thing isn’t staying aloft for a long time — few drones can do that to begin with — but how quickly it can get back up there. That’s where the special docking and charging mechanism comes in.

The Vtrus drone lives on and returns to a little box, which, when a tapped-out craft touches down, sets off a patented high-speed charging process. It’s contact-based, not wireless, and happens automatically. The drone can then get back in the air perhaps half an hour or so later, meaning the craft can actually be in the air for as much as six hours a day total.

Probably anyone who has had to inspect or maintain any kind of building or space bigger than a studio apartment can see the value in getting frequent, high-precision updates on everything in that space, from storage shelving to heavy machinery. You’d put in an ABI for every X square feet depending on what you need it to do; they can access each other’s data and combine it as well.

The result of a quick pass through a facility. Obviously this would make more sense if you could manipulate it in 3D, as the operator would.

This frequency, and the detail with which the drone can inspect and navigate, means maintenance can become proactive rather than reactive — you see rust on a pipe or a hot spot on a machine during the drone’s hourly pass rather than days later when the part fails. And if you don’t have an expert on site, the full 3D map and even manual drone control can be handed over to your HVAC guy or union rep.

You can see lots more examples of ABI in action at the Vtrus website. Way too many to embed here.

Lenoff, Moreno, and third co-founder Carlos Sanchez, who brings the industrial expertise to the mix, explained that their secret sauce is really the software — the drone itself is pretty much off-the-shelf stuff right now, tweaked to their requirements. (The base is an original creation, of course.)

But the software is all custom built to handle not just high-resolution 3D mapping in real time but the means to stream and record it as well. They’ve hired experts to build those systems as well — the 6-person team already sounds like a powerhouse.

The whole operation is self-funded right now, and the team is seeking investment. But that doesn’t mean they’re idle: they’re working with major companies already and operating a “pilotless” program (get it?). The team has been traveling the country visiting facilities, showing how the system works, and collecting feedback and requests. It’s hard to imagine they won’t have big clients soon.

😤Angry London bike thief in slippers gets trapped by ‘Brakes’ Lorry driver 🤣 **MUST SEE**

[https://youtu.be/xUcRzx-HvlY], Bike thief gets stopped!


Stop Edge from hijacking your PDF/HTML file associations

The content below is taken from the original ( Stop Edge from hijacking your PDF/HTML file associations), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft Edge is set as the default PDF reader to open and view PDF files in Windows. So, whenever I attempt to open any PDF file in Windows 10, it automatically gets opened in Edge browser, although my preferred choice […]

This post Stop Edge from hijacking your PDF/HTML file associations is from TheWindowsClub.com.

Acorn World exhibition in Cambridge – 8th and 9th September

The content below is taken from the original ( Acorn World exhibition in Cambridge – 8th and 9th September), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Centre for Computing History, a computing museum based in Cambridge, will be playing host to an event this coming weekend that should be of interest to any and all fans of Acorn Computers: Acorn World 2018. Organised by the Acorn and BBC User Group (ABUG) in association with the museum, the event will run […]

This Instagram page draws famous buildings, showing off the sketch process via timelapse

The content below is taken from the original ( This Instagram page draws famous buildings, showing off the sketch process via timelapse), to continue reading please visit the site. Remember to respect the Author & Copyright.

Sam Picardal is a New York-based artist and architectural illustrator running the Instagram account @21.am, which shows off his sketching process for drawing buildings by famous architects. Capturing his practice via timelapse, the page posts videos of Picardal hand-drawing landmark works such as the Art Gallery of Alberta by Frank Gehry, Tower Bridge in London, Hadid’s Heydar Aliyev Cultural Center, and even the Millennium Falcon. Entire cityscapes are given the pen-and-ink treatment as well, as are various spacecraft and robots.

ClearCube Launches New C3Pi+ Raspberry Pi 3 Model B+ Thin Client at VMworld 2018

The content below is taken from the original ( ClearCube Launches New C3Pi+ Raspberry Pi 3 Model B+ Thin Client at VMworld 2018), to continue reading please visit the site. Remember to respect the Author & Copyright.

ClearCube Technology, Inc. announced the launch of the new C3Pi+ Thin Client at VMworld 2018 US in Las Vegas on August 28, 2018. The low-cost,… Read more at VMblog.com.

This funky new font is made up entirely of brands

The content below is taken from the original ( This funky new font is made up entirely of brands), to continue reading please visit the site. Remember to respect the Author & Copyright.

A digital studio called Hello Velocity has created a typeface that embraces well-known corporate logos and is still somehow far less annoying than Comic Sans. The studio says it creates "thought-provoking internet experiences," and its Brand New Roma…

RippleNet Offers SMEs a Competitive Advantage in Global Payments

The content below is taken from the original ( RippleNet Offers SMEs a Competitive Advantage in Global Payments), to continue reading please visit the site. Remember to respect the Author & Copyright.

While global business may move at the speed of the web, international payments instead seem to move like smoke signals. This is because the world’s payments infrastructure hasn’t changed since the heady days of disco, nearly four decades ago. Especially for teams at small and medium enterprises (SMEs), this disconnect between the hustle of business today and the inertia of old infrastructure creates unnecessary hurdles and impediments to smooth business operations.

Today’s system of moving money around the world forces banks and payment providers to plan for days of delay, produce their own liquidity by funding accounts in local currencies on each side of a transaction, and pass along exorbitant costs to their customers. For SMEs, this translates into a cumbersome payments experience with high fees, limited visibility into transaction details or status, and a settlement time that can stretch from days into weeks.

In contrast, RippleNet delivers a new global payments standard that speeds up transactions, introduces certainty, and lowers fees to transform the cross-border payments experience. Using RippleNet, banks and payment providers can reimagine a payment from invoice to confirmed settlement for their clients. Just one small change, like the ability to drag-and-drop invoices as part of a RippleNet powered transaction, can have many benefits: it saves time with pre-populated fields, automatically confirms recipients for accuracy, and obtains real-time quotes.

The end result is a vastly improved user experience for a transaction delivered in seconds and with confidence, at a fraction of the usual price.

Emerging Markets Close the Gap with Ripple  

This sea change in international payments is even more important when you consider the World Bank forecasts global remittance payments to grow by 3.4 percent or roughly $466 billion in 2018. Much of this will happen in emerging markets, which are home to 85 percent of the global population and account for almost 60 percent of global GDP, with India and China having the highest incoming flows in 2017.

Designed to solve the modern challenge of international or cross-border transactions, RippleNet is already having an impact in these emerging markets. With a growing global need not just for access, but also for more efficient, transparent and cost-effective payments into and out of emerging markets, new financial institutions in India, Brazil and China have joined RippleNet to power instant remittance payments into their countries.

SMEs also play a critical economic role in these emerging markets. RippleNet delivers efficiencies and advantages that can help them be more competitive in an interconnected world.

Rather than be limited by borders and currencies, Ripple connects all parties in a global transaction through a single seamless, frictionless experience. Built for the Internet age, Ripple delivers access, speed, certainty and savings. And by leveraging the most advanced blockchain technology possible, it is scalable, secure and interoperable.

For more information on Ripple’s solutions, or to learn how to join RippleNet, contact us.

The post RippleNet Offers SMEs a Competitive Advantage in Global Payments appeared first on Ripple.

How to keep track of the Total Editing Time spent on a Microsoft Word document

The content below is taken from the original ( How to keep track of the Total Editing Time spent on a Microsoft Word document), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft Word was designed with the purpose to enable its users to type and save documents. In addition to this utility, it has a feature that keeps a count of the amount of time spent on a document. Normally, you […]
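
While the original post walks through where this counter lives in Word’s UI, the same value is also stored inside the document package itself: a .docx file is a ZIP archive, and docProps/app.xml contains a TotalTime element expressed in minutes. Here is a small sketch reading it with Python’s standard library (the file name is just an example):

import zipfile
import xml.etree.ElementTree as ET

NS = "{http://schemas.openxmlformats.org/officeDocument/2006/extended-properties}"

with zipfile.ZipFile("report.docx") as docx:
    app_xml = ET.fromstring(docx.read("docProps/app.xml"))

total_time = app_xml.findtext(NS + "TotalTime")  # minutes of editing time
print("Total editing time:", total_time, "minutes")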

This post How to keep track of the Total Editing Time spent on a Microsoft Word document is from TheWindowsClub.com.

Monitor all Azure Backup protected workloads using Log Analytics

The content below is taken from the original ( Monitor all Azure Backup protected workloads using Log Analytics), to continue reading please visit the site. Remember to respect the Author & Copyright.

We are excited to share that Azure Backup now allows you to monitor all workloads protected by it by leveraging the power of Log Analytics (LA). This allows enterprises to monitor key backup parameters across Recovery Services vaults and subscriptions, irrespective of which Azure Backup solution you are using. In addition, you can configure custom alerts and actions for custom monitoring requirements for all Azure Backup workloads with this LA-based solution.

This solution now covers all workloads protected by Azure Backup, including Azure VMs, SQL in Azure VM backups, System Center Data Protection Manager connected to Azure (DPM-A), Microsoft Azure Backup Server (MABS), and file-folder backup from the Azure Backup agent.

Here’s how you get all the benefits.

Configure diagnostic settings

If you have already configured Log Analytics workspace to monitor Azure Backup, skip to the Deploy solution template section.

You can open the diagnostic settings window from the Azure Recovery Services vault or from Azure Monitor. In the Diagnostic settings window, select “Send data to Log Analytics,” choose the relevant LA workspace, select the relevant log, “AzureBackupReport,” and click “Save.”

Be sure to choose the same workspace for all the vaults so that you get a centralized view in the workspace. After completing the configuration, allow 24 hours for the initial data push to complete.

Deploy solution template

Once the data is in the workspace, we need a set of graphs to visualize the monitoring data. Deploy the Azure quick-start template to the workspace configured above to get a default set of graphs, explained below. Make sure you give the same resource group, workspace name and workspace location to properly identify the workspace and then install this template on it.

If you are already using this template as outlined in a previous blog and have edited it, just add the relevant Kusto queries from the deployment JSON in GitHub. If you didn’t edit the template, re-deploy it onto the same workspace to view the updated version.

Once deployed, you will view an overview tile for Azure Backup in the workspace dashboard. Clicking on the overview tile will take you to the solution dashboard and provide you all the information shown below.

AzureMonitorTile

Monitor Azure Backup data

Monitor backups and restores

Monitor regular daily backups for all Azure Backup protected workloads. With this update, you can even monitor log backups for your SQL databases, whether they are running within Azure IaaS VMs or on-premises and protected by DPM or MABS.

AllBackupJobs

RestoreJobs

Monitor all datasources

Monitor a spike or reduction in the number of backed-up datasources using the active datasources graph. The active datasources attribute is split across all Azure Backup types. The legend beside the pie graph shows the top three types. The list beneath the pie chart displays the top 10 active datasources — for example, the datasources on which the greatest number of jobs were run in the specified time frame.

ActiveDatasources

Monitor Azure Backup alerts

Azure Backup generates alerts automatically when a backup and/or a restore job fails. You are now able to view all such alerts generated in a single place.

ActiveAlerts

However, be sure to select the relevant time range to monitor, such as the proper start and end dates.

SelectTime

Generate custom alerts

Clicking on any single row in the graphs above leads to a more detailed view in the Log Search window, where you can generate a custom alert for that scenario.

CustomAlert

To learn more, visit our documentation on how to configure alerts.
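
If you would rather pull the same data programmatically instead of through the portal’s Log Search blade, the workspace can also be queried from code. The sketch below uses the azure-monitor-query Python package (a newer SDK than the portal workflow described here), and the table and column names — AzureDiagnostics, JobStatus_s, and so on — are assumptions based on the AzureBackupReport schema, so verify them against your own workspace:

from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Count failed backup jobs per vault over the last day.
# Field names are assumptions about the AzureBackupReport schema.
QUERY = """
AzureDiagnostics
| where Category == "AzureBackupReport" and OperationName == "Job"
| where JobStatus_s == "Failed"
| summarize FailedJobs = count() by Resource
"""

response = client.query_workspace("<workspace-id>", QUERY, timespan=timedelta(days=1))
for table in response.tables:
    for row in table.rows:
        print(row)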

Summary

You can configure LA workspaces to receive key backup data across multiple Recovery Services vaults and subscriptions and deploy customizable solutions on workspaces to view and configure actions for business-critical events. This solution is key for any enterprise to keep a watchful eye over their backups and ensure that all actions are taken for successful backups and restores.


Microsoft removes device install limits for Office 365 subscribers

The content below is taken from the original ( Microsoft removes device install limits for Office 365 subscribers), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft is removing limits on the number of devices on which some Office 365 subscribers can install the apps. From October 2nd, Home users will no longer be restricted to 10 devices across five users nor will Personal subscribers have a limit of o…

AzurePowerShell (6.8.1)

The content below is taken from the original ( AzurePowerShell (6.8.1)), to continue reading please visit the site. Remember to respect the Author & Copyright.

Azure PowerShell provides a set of cmdlets that use the Azure Resource Manager model for managing your Azure resources.

Proving that Teams Retention Policies Work

The content below is taken from the original ( Proving that Teams Retention Policies Work), to continue reading please visit the site. Remember to respect the Author & Copyright.

Teams Splash

Teams Retention

In April 2018, Microsoft introduced support for Teams as a workload processed by Office 365 retention policies. The concept is simple. Teams captures compliance records for channel conversations and private chats in Exchange Online mailboxes, including messages from guest and hybrid users.

When you implement a Teams retention policy, the Exchange Managed Folder Assistant (MFA) processes the mailboxes to remove the compliance records based on the policy criteria. The creation date of a compliance record in a mailbox is used to assess its age for retention purposes. Given that Office 365 creates compliance records very soon after users post messages in Teams, the creation date of a compliance record closely matches that of the original message in Teams.

A background synchronization process replicates the deletions to the Teams data service on Azure, and eventually the deletions show up in clients.

A Pragmatic Implementation

If you were to design a retention mechanism from scratch, you might not take the same approach. However, the implementation is pragmatic because it takes advantage of existing components, like MFA. The downside is that because so many moving parts exist, it’s hard to know if a retention policy is having the right effect.

Setting a Baseline

Before a retention policy runs against a mailbox, we need to understand how many Teams compliance items it contains. This command tells us how many compliance items exist in a mailbox (group or personal) and reveals details of the oldest and newest items in the “Team Chat” folder.

Get-MailboxFolderStatistics -Identity "HydraProjectTeam" -FolderScope ConversationHistory -IncludeOldestAndNewestItems | ? {$_.FolderType -eq "TeamChat"} | Format-Table Name, ItemsInFolder, NewestItemReceivedDate, OldestItemReceivedDate

Name      ItemsInFolder NewestItemReceivedDate OldestItemReceivedDate
----      ------------- ---------------------- ----------------------
Team Chat           227 2 Aug 2018 16:10:41    11 Mar 2017 15:41:34

MFA Processes a Mailbox

After you create a Teams retention policy, Office 365 publishes details of the policy to Exchange Online and the Managed Folder Assistant begins to process mailboxes against it. MFA operates on a workcycle basis, which means it tries to process every mailbox in a tenant at least once weekly. Mailboxes with less than 10 MB of content are not processed by MFA because it’s unlikely that they need the benefit of a retention policy. This is not specific to Teams; it’s just the way MFA works.
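If you don’t want to wait for the workcycle, you can ask MFA to process a specific mailbox on demand with the standard Start-ManagedFolderAssistant cmdlet (nothing Teams-specific here, and processing is still not instantaneous):

# Request an immediate MFA run against the group mailbox instead of waiting for the weekly workcycle
Start-ManagedFolderAssistant -Identity "HydraProjectTeam"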

When MFA processes a mailbox, it updates some mailbox properties with details of its work. We can check these details as follows:

# Export the mailbox diagnostic log, including the extended ELC (retention) properties
$Log = Export-MailboxDiagnosticLogs -Identity HydraProjectTeam -ExtendedProperties
# The log is returned as XML, so cast it and pull out the ELC* values written by MFA
$xml = [xml]($Log.MailboxLog)
$xml.Properties.MailboxTable.Property | ? {$_.Name -like "ELC*"}

Name                                    Value
----                                    -----
ElcLastRunTotalProcessingTime           1090
ElcLastRunSubAssistantProcessingTime    485
ElcLastRunUpdatedFolderCount            33
ElcLastRunTaggedFolderCount             0
ElcLastRunUpdatedItemCount              0
ElcLastRunTaggedWithArchiveItemCount    0
ElcLastRunTaggedWithExpiryItemCount     0
ElcLastRunDeletedFromRootItemCount      0
ElcLastRunDeletedFromDumpsterItemCount  0
ElcLastRunArchivedFromRootItemCount     0
ElcLastRunArchivedFromDumpsterItemCount 0
ELCLastSuccessTimestamp                 02/08/2018 16:46:23
ElcFaiSaveStatus                        SaveSucceeded
ElcFaiDeleteStatus                      DeleteNotAttempted

Unfortunately, the MFA statistics don’t tell us how many compliance records MFA removed. If you run the same commands against a user mailbox, you’ll see the number of deleted items recorded in ElcLastRunDeletedFromRootItemCount. This doesn’t happen for Teams compliance records, perhaps because Exchange regards them as system items.

Compliance Items are Removed

Because MFA doesn’t tell us how many items it removes, we have to check the mailbox again. This time we see that the number of compliance records has shrunk from 227 to 3 and that the oldest item in the folder is from 20 July 2018. Given that users can’t access the Team Chat folder with clients like Outlook or OWA, the only way items are removed is through a system process, so we can conclude that MFA has done its work.

Get-MailboxFolderStatistics -Identity "HydraProjectTeam" -FolderScope ConversationHistory -IncludeOldestAndNewestItems | ? {$_.FolderType -eq "TeamChat"} | Format-Table Name, ItemsInFolder, NewestItemReceivedDate, OldestItemReceivedDate

Name      ItemsInFolder NewestItemReceivedDate OldestItemReceivedDate
----      ------------- ---------------------- ----------------------
Team Chat             3 2 Aug 2018 16:10:41    20 Jul 2018 08:17:09

Synchronization to Teams

Background processes replicate the deletions made by MFA to Teams. It’s hard to predict exactly when items will disappear from user view because two things must happen. First, the items are removed from the Teams data store in Azure. Second, clients synchronize the deletions to their local cache.

In tests that I ran, it took between four and five days for the cycle to complete. For example, in the test reported above, MFA ran on August 2 and clients picked up the deletions on August 7.

You might not think that such a time lag is acceptable. Microsoft agrees and is working to improve the efficiency of replication from Exchange Online to Teams. For now, however, you should expect several days to elapse before the effect of a retention policy is visible to clients.

The Effect of Retention

Figure 1 shows a channel after a retention policy removed any item older than 30 days. What’s immediately obvious is that some items older than 30 days are still visible. One is an item (Oct 25, 2017) created by an RSS connector to post notifications about new blog posts in the channel. The second (March 3, 2018) is from a guest user. The other visible messages are system messages, which Teams does not capture for compliance purposes.


Figure 1: Channel conversations after a retention policy runs (image credit: Tony Redmond)

The RSS item is still visible in Figure 1 because items created in Teams by Office 365 connectors were not processed for compliance purposes until recently. They are now, and the most recent run of MFA removed the connector items. Office 365 might fail to ingest some older items, in which case they will linger in channels because compliance records for them don’t exist.

We also see an old message posted by a guest user. Teams only began capturing hybrid user messages in January 2018, with an intention to go back in time for earlier messages as resources allow. Teams uses the same mechanism to capture guest user messages, but obviously Microsoft hadn’t processed back this far when I ran these tests. Other messages posted by guest users are gone because compliance records existed for these messages.

It’s worth noting that compliance records for guest-to-guest 1:1 chats are not processed by MFA. This is because MFA cannot access the phantom mailboxes that Exchange uses to capture compliance records for the personal chats of guest and hybrid users. Guest contributions to channel conversations are processed because those items are in group mailboxes.

Some Tracking Tools Would be Useful

The Security and Compliance Center will tell you that a retention policy for Teams is on and operational (Status = Success), but after that an information void exists as to how the policy operates, when teams are processed, how many items are removed from channels and personal chats, and so on. There are no records in the Office 365 audit log and nothing in the usage reports either. All you can do is keep an eye on the number of compliance records in mailboxes.
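In the absence of built-in reporting, one rough way to do that is a sketch built from the same Get-MailboxFolderStatistics check used earlier, looped across the group mailboxes so you can track the Team Chat item counts over time (groups without Teams enabled simply show no items):

# Report the Teams compliance record count held in each Office 365 Group (team) mailbox
Get-UnifiedGroup -ResultSize Unlimited | ForEach-Object {
    $Stats = Get-MailboxFolderStatistics -Identity $_.Alias -FolderScope ConversationHistory |
        Where-Object {$_.FolderType -eq "TeamChat"}
    [PSCustomObject]@{
        Team  = $_.DisplayName
        Items = $Stats.ItemsInFolder
    }
} | Sort-Object Items -Descending | Format-Table -AutoSize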

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros”, the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle.

The post Proving that Teams Retention Policies Work appeared first on Petri.