Managing GDPR with Teams, Planner, and Compliance Manager

The content below is taken from the original ( Managing GDPR with Teams, Planner, and Compliance Manager), to continue reading please visit the site. Remember to respect the Author & Copyright.

Driving for Better Compliance

Following its announcement at Ignite 2017, Microsoft launched the preview of its Compliance Manager on November 16. The Compliance Manager is available to all organizations with a paid or trial subscription to a Microsoft cloud service, except tenants of the Office 365 datacenter regions in China and Germany.

Microsoft describes Compliance Manager as: “A dashboard that summarizes Microsoft’s and your organization’s control implementation progress for Office 365 across various standards and regulations, such as the EU General Data Protection Regulation (GDPR), ISO 27001, and ISO 27018.”

To access Compliance Manager, log into this site using your Microsoft cloud credentials.

Office 365 and GDPR

Although Azure is in the mix (due in early 2018), given the widespread presence of personally identifiable information (PII) in documents and email, I suspect that the new tool will be of most interest to Office 365 tenants who operate anywhere in the European Union and in other countries, like Norway and Switzerland, where the General Data Protection Regulation (GDPR) becomes effective in six short months.

Office 365 already includes many compliance features to help an organization control data, including data loss prevention (DLP) and retention policies, classification labels, encryption and rights management for documents and email, content searches, and auditing. Some of the features are easier to use with higher-priced plans (like auto-label policies in Office 365 E5) and some require extra software (like Azure Information Protection P2).

The issue is not one of having enough technology to control the misuse of PII; more often, the people in the organization need help to understand what data needs protection and how best to protect it.

The Compliance Manager Dashboard

Compliance Manager is a dashboard, but it is a passive instrument. Unlike other Office 365 dashboards like Secure Score or the Data Governance dashboard in the Security and Compliance Center, it does not try to analyze the settings of a target organization against any baselines to report gaps and problems. Microsoft intends to improve functionality in this area in the future and will generate a “compliance score” for a tenant.

For now, Compliance Manager lists standards and regulations that organizations and service providers might want to satisfy and delivers some practical advice about how tenants can start dealing with those standards. The plan is to add more standards to the dashboard over time. When I started Compliance Manager, it offered the option to work with GDPR and ISO 27001-2013 (Figure 1).

Compliance Manager dashboard

Figure 1: The Compliance Manager dashboard (image credit: Tony Redmond)

Controls

Each standard applied to a platform like Office 365 is decomposed into a set of controls. You can think of a control as something that either a service provider (in this case, Microsoft) or a tenant must do as part of the work to satisfy a regulation or meet a standard. The biggest benefit of the Compliance Manager is how Microsoft has broken down complex regulations like GDPR into the controls. For GDPR, 71 controls are assigned to Microsoft and 47 to the customer (see Jussi Roine’s review).

Microsoft’s controls have all passed testing by an independent auditor. Given that all 71 controls are checked, one interpretation is that Microsoft believes that Office 365 satisfies GDPR, even if it has made no such claim. Microsoft does not say who carried out the audit or what plan or other software (like add-ons) the examined Office 365 tenant used. This is disappointing because a big difference exists in the compliance functionality available in different plans. For example, if you run Office 365 E5, you can deploy auto-label policies (part of advanced Office 365 data governance) to find and classify documents that hold PII data.

Assigning Work Through Controls

With 47 controls to satisfy, any Office 365 tenant has a lot of work to do to make sure that they can cope with GDPR. Compliance Manager tells them what needs to be done but gives no practical assistance to manage the actual work. You can assign people to work on a control (the list of names comes from the GAL), but you cannot assign a group or multiple people (Figure 2). And then you must tell the assignee that they have work to do because the email notification does not work yet (it’s coming soon).

GDPR control Compliance Manager

Figure 2: Assigning someone to a GDPR control (image credit: Tony Redmond)

Of course, email assumes that an Office 365 tenant uses Exchange Online. Most do, but some do not.

You can also upload documents to Compliance Manager for each control. Presumably these are documents to prove that the work is done. But the documents are not stored inside Office 365. All in all, using the Compliance Manager to track work is an exhaustingly manual process.

Leveraging Office 365 to Satisfy GDPR

If Office 365 has anything, it is collaboration technology. Why not harness that technology to automate what is essentially an exercise in paperwork, one that probably involves collaboration with people drawn from across the organization?

Two obvious candidates present themselves: Planner to track the tasks involved in satisfying controls, and Teams for collaboration. Outlook or Yammer Groups could also be used, but Teams and Planner are more tightly integrated at this point.

Creating a GDPR Plan

To implement the solution, I first created a new plan with Planner. Creating a new plan also creates a new Office 365 Group, to which I added the people who would work on the GDPR controls as members. I then created a set of buckets in the plan matching the categories Microsoft uses to divide up the GDPR controls.

Next, I created a task for each control in the appropriate bucket and assigned it to the individuals responsible (Figure 3). The description is cut and pasted from the Compliance Center. You can tailor the text to meet the unique needs of the organization, add checklist items, and add attachments that the person assigned the task might need to understand what must be done. Planner also has colored tabs for tasks that could be used to indicate departments, like IT, Finance, Legal, and so on.

Planner for GDPR

Figure 3: Creating a task for a control (image credit: Tony Redmond)
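For tenants that would rather script this step than click through Planner, the same plan, buckets, and tasks can be created through the Microsoft Graph Planner API. The sketch below is a minimal example using Python and the requests library; the access token, group ID, category names, control title, and assignee ID are placeholders you would substitute with your own values.

    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    ACCESS_TOKEN = "<OAuth token with Group.ReadWrite.All and Tasks.ReadWrite>"  # placeholder
    GROUP_ID = "<object ID of the Office 365 Group backing the plan>"            # placeholder
    HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}", "Content-Type": "application/json"}

    # Create a plan owned by the existing Office 365 Group.
    plan = requests.post(f"{GRAPH}/planner/plans", headers=HEADERS,
                         json={"owner": GROUP_ID, "title": "GDPR Controls"}).json()

    # Create one bucket per Compliance Manager category (names here are illustrative).
    buckets = {}
    for category in ["Security", "Data Protection", "Governance"]:
        bucket = requests.post(f"{GRAPH}/planner/buckets", headers=HEADERS,
                               json={"planId": plan["id"], "name": category,
                                     "orderHint": " !"}).json()
        buckets[category] = bucket["id"]

    # Create a task for a single (hypothetical) control and assign it to one person.
    ASSIGNEE_ID = "<Azure AD object ID of the assignee>"  # placeholder
    requests.post(f"{GRAPH}/planner/tasks", headers=HEADERS,
                  json={"planId": plan["id"],
                        "bucketId": buckets["Data Protection"],
                        "title": "Document data processing activities",
                        "assignments": {ASSIGNEE_ID: {
                            "@odata.type": "#microsoft.graph.plannerAssignment",
                            "orderHint": " !"}}})

A real script would loop over an exported list of the 47 customer controls instead of hard-coding a single task.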

After the tasks are created and assigned, it is easy to track progress through Planner (Figure 4). Although Planner has only a few graphs now, the Planner developers have promised that a new schedule view will be available soon.

GDPR progress with Planner

Figure 4: Tracking progress towards GDPR (image credit: Tony Redmond)

Involving Teams

Teams also uses Office 365 Groups for identity and membership, so it did not take long to team-enable the group. I then added a Planner tab and connected it to the plan (Figure 5). Team members can collaborate to achieve the necessary controls. Any documents needed can be assembled in Teams and stored in the SharePoint document library for the group.

Using Teams with GDPR

Figure 5: Using Teams to collaborate on a GDPR control (image credit: Tony Redmond)
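The team-enablement step can also be scripted against Microsoft Graph. The following sketch assumes the same placeholder token and group ID as the Planner example above; the Planner tab itself was added through the Teams UI, as described.

    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    ACCESS_TOKEN = "<OAuth token with Group.ReadWrite.All>"             # placeholder
    GROUP_ID = "<object ID of the Office 365 Group backing the plan>"   # placeholder
    HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}", "Content-Type": "application/json"}

    # Team-enable the existing Office 365 Group.
    requests.put(f"{GRAPH}/groups/{GROUP_ID}/team", headers=HEADERS,
                 json={"memberSettings": {"allowCreateUpdateChannels": True}})

    # List the team's channels; the Planner tab can then be pinned to the General channel.
    channels = requests.get(f"{GRAPH}/teams/{GROUP_ID}/channels", headers=HEADERS).json()
    for channel in channels.get("value", []):
        print(channel["displayName"], channel["id"])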

Voilà! I now have the ability for people to work through the controls necessary for the organization to satisfy GDPR.

Of course, it would be nice if Microsoft built the necessary intelligence into Compliance Manager to create the Office 365 Group, plan, and team, and to export the controls information to the plan, probably using the Microsoft Graph APIs. However, this is preview software, and it is therefore only the start of what might happen in the future. Feel free to automate the process yourself if you feel like a challenge!

Compliance is Difficult

Compliance is easy in concept but difficult to implement in reality. People are always the weakest link. Microsoft’s Compliance Manager breaks down complex regulations into digestible chunks. Using collaboration software like Planner and Teams to help people work together to prepare for something like GDPR just makes sense. Being able to base that activity on those digestible chunks is even better.

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros”, the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle.

The post Managing GDPR with Teams, Planner, and Compliance Manager appeared first on Petri.

Buoy uses AI and machine learning to keep your water bills low

The content below is taken from the original ( Buoy uses AI and machine learning to keep your water bills low), to continue reading please visit the site. Remember to respect the Author & Copyright.

Buoy is a device that puts machine learning to work to save on your water bill. The IoT device connects to your home's WiFi network and water supply to monitor how much is going where on a use-by-use basis (faucet, shower, washing machine, etc.), in…

What is Microsoft Windows 10 Signature Edition?

The content below is taken from the original ( What is Microsoft Windows 10 Signature Edition?), to continue reading please visit the site. Remember to respect the Author & Copyright.

Windows 10 Signature

For users planning to buy a new Windows 10 PC, completely devoid of bloatware, Microsoft has an answer – Microsoft Windows 10 Signature Edition! This new line of PCs represents a stripped-down version of Windows that is stronger, faster and […]

This post What is Microsoft Windows 10 Signature Edition? is from TheWindowsClub.com.

Microsoft Office is now available for all Chromebooks

The content below is taken from the original ( Microsoft Office is now available for all Chromebooks), to continue reading please visit the site. Remember to respect the Author & Copyright.

It took its sweet time, but Microsoft Office for Android is now available on all Play Store-compatible Chromebooks, according to Chrome Unboxed. The software's convoluted journey en route to Google's laptops is well documented. As a recap, when Andro…

A certain millennial turned 30 this week: Welcome to middle age, Microsoft Excel

The content below is taken from the original ( A certain millennial turned 30 this week: Welcome to middle age, Microsoft Excel), to continue reading please visit the site. Remember to respect the Author & Copyright.

Launched a million business plans, sank Lotus…

Thirty is a ripe old age, maybe older than a good chunk of Register readers. Even for those of you for whom Excel is a spring chicken, how many applications or even operating systems are you still using of a similar age outside the Office suite?…

angryip (3.5.2)

The content below is taken from the original ( angryip (3.5.2)), to continue reading please visit the site. Remember to respect the Author & Copyright.

Fast and friendly network scanner.

Exim-ergency! Unix mailer has RCE, DoS vulnerabilities

The content below is taken from the original ( Exim-ergency! Unix mailer has RCE, DoS vulnerabilities), to continue reading please visit the site. Remember to respect the Author & Copyright.

Patch imminent, for now please turn off e-mail attachment chunking

Sysadmins who tend Exim servers have been advised to kick off their working weeks with the joy of patching.…

Control Your Roomba With IFTTT Commands

The content below is taken from the original ( Control Your Roomba With IFTTT Commands), to continue reading please visit the site. Remember to respect the Author & Copyright.

When the humans are away, the Roomba will play: Keep a closer eye on your iRobot vacuum with new IFTTT functionality. Using If This Then That applets, owners can trigger the autonomous cleaner […]

The post Control Your Roomba With IFTTT Commands appeared first on Geek.com.

Monitoring Service Limits with Trusted Advisor and Amazon CloudWatch

The content below is taken from the original ( Monitoring Service Limits with Trusted Advisor and Amazon CloudWatch), to continue reading please visit the site. Remember to respect the Author & Copyright.

Understanding your service limits (and how close you are to them) is an important part of managing your AWS deployments – continuous monitoring allows you to request limit increases or shut down resources before the limit is reached.

One of the easiest ways to do this is via AWS Trusted Advisor’s Service Limit Dashboard, which currently covers 39 limits across 10 services.

With the recent launch of Trusted Advisor metrics in Amazon CloudWatch, Business and Enterprise support customers can create customizable alarms for individual service limits. Let’s look at an example of how to do that.

Example: Create an alarm for EC2 On-Demand Instance limits

Begin by logging into the Trusted Advisor console and clicking the “Service Limits” link on the left side of the page. After doing so, you should see the new service limits dashboard.

Click the refresh icon in the top-right of the screen to retrieve the most current utilization and limit data for the service limit checks. This can take a while if you have a lot of AWS resources deployed, so be patient! Once complete, a notification will appear at the top of the Trusted Advisor console.

This ensures that CloudWatch has the most up-to-date information available from Trusted Advisor.
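If you prefer to trigger the refresh programmatically rather than from the console, the Support API (available to Business and Enterprise support customers) exposes the same operation. Here is a minimal sketch using Python and boto3; the check is located by its console name, "Service Limits".

    import boto3

    # The AWS Support API, which backs Trusted Advisor, is only served from us-east-1.
    support = boto3.client("support", region_name="us-east-1")

    # Find the "Service Limits" check and request a refresh
    # (same effect as the console's refresh icon).
    checks = support.describe_trusted_advisor_checks(language="en")["checks"]
    service_limits = next(c for c in checks if c["name"] == "Service Limits")
    status = support.refresh_trusted_advisor_check(checkId=service_limits["id"])["status"]
    print(status["status"])  # e.g. "enqueued", "processing", or "success"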

Next, head over to the CloudWatch console. Trusted Advisor passes a single metric called “ServiceLimitUsage” that represents the percentage of utilization versus the limit. You can filter these limits by region, limit, or service using the available dimensions.

Make sure you are in the US East (N. Virginia) region, then click the “Alarms” link on the left side of the page and then click the “Create Alarm” button.

Click “Service Limits by Region” under the “Trusted Advisor” category.

In the search bar, type “Overall On-Demand Instances”. This will filter the list of available limits down to the EC2 overall on-demand instance limits that are tracked by Trusted Advisor. Click the checkbox next to the limit for the us-east-1 region and click “Next”.

You can now set up your CloudWatch alarm for this limit. In this example, we’ve configured it to alarm when you’ve reached 60% of the EC2 On-Demand instance limit. Trusted Advisor updates these metrics once per week by default, but refreshing checks via the Trusted Advisor console or Support API will send updated metrics on demand.
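The same alarm can also be created with the CloudWatch API instead of the console. The sketch below uses boto3's put_metric_alarm; the dimension names and the threshold scale (a 0–1 ratio versus a 0–100 percentage) are assumptions you should verify against the metric as it appears in your CloudWatch console, and the SNS topic ARN is a placeholder.

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="TA-EC2-OnDemand-Instances-60pct",
        Namespace="AWS/TrustedAdvisor",
        MetricName="ServiceLimitUsage",
        # Assumed dimension names for the service/limit/region filters described above.
        Dimensions=[
            {"Name": "ServiceName", "Value": "EC2"},
            {"Name": "ServiceLimit", "Value": "On-Demand instances"},
            {"Name": "Region", "Value": "us-east-1"},
        ],
        Statistic="Maximum",
        Period=86400,            # Trusted Advisor publishes infrequently, so use a daily period
        EvaluationPeriods=1,
        Threshold=0.6,           # 60% if the metric is a 0-1 ratio; use 60 if it is a percentage
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        TreatMissingData="notBreaching",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:service-limit-alerts"],  # placeholder
    )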

You can repeat this process for any or all of the service limits covered by Trusted Advisor. For more information and a guide to creating these for any type of Trusted Advisor check, visit Creating Trusted Advisor Alarms Using CloudWatch.


About the Author

Scott Allison is the Senior Technical Product Manager for Trusted Advisor – an AWS service designed to keep your deployments secure, fault tolerant, and cost-effective. He enjoys working directly with customers and hearing the creative ways people use Trusted Advisor!

Legends of the scrawl: Ordnance Survey launches augmented reality tool for maps

The content below is taken from the original (Legends of the scrawl: Ordnance Survey launches augmented reality tool for maps), to continue reading please visit the site. Remember to respect the Author & Copyright.

More than a decade ago, boffins at the Ordnance Survey began working on augmented reality. Now consumer mobile tech has caught up and the agency has launched an AR tool for the man on the street and the woman in the hills.

It was 2006 when Andrew Radburn, a long-standing research scientist at the OS, published his work (PDF) developing a prototype for a handheld AR system using data from the nation’s mapping agency.

“In 2005, we were much more into blue skies – what’s going to happen in 10 years, what devices are going to be available?” Radburn told The Register in an interview to mark National Map Reading Week.

“We’d come across augmented reality in other fields, and seen the set ups other people had started to build. In Australia, a team had built a GPS with a large, crash-helmet size, head-mounted display – but we decided to take it in the other direction, and see what we could do that was cheap and without too much custom-built stuff.”

The result was this prototype: an A4-size tablet PC – which was used by surveyors at the time because it provided good visibility in daylight – with various bits of GPS kit attached. A camera was Velcro’d on the back, and a Honeywell compass taped on the side.

Original OS kit with duct tape

Radburn’s prototype, complete with duct tape

“So you know where you are, and which direction you’re pointing, and you then start overlaying information on top of the live camera feed – you point it at a building and get details on it,” Radburn said.

2006 version of AR tools developed by the OS

The kit in action

Of course, the team knew they didn’t yet have something a consumer would be able to use – but expected the essential ingredients to be integrated into one device in the future.

“We then parked it for a while, to wait for things to catch up,” Radburn said.

In the meantime, he said, the OS started working on the backend systems to deliver the data.

They picked the work back up again when mobile devices started delivering the accuracy and speed needed, and this is where another OS boffin, Layla Gordon, comes in.

In 2015, the OS revisited AR when it was asked to help attendees at the Digital Shoreditch event in London navigate their way around the dank basement of Shoreditch’s town hall.

“Even though it’s a small place, it’s laid out like a labyrinth and every year people get lost,” she said.

App for navigating the Shoreditch town hall basement

The app in the basement

It helped prove the concept worked, but without GPS reception in the depths of the town hall, the team had to use iBeacons for positioning, which, Gordon said, “aren’t very successful in terms of accuracy”.

The OS then put a few more proofs of concept under its belt – including one to help staff and patients navigate Southampton General Hospital, and another for utility companies to visualise underground pipes from above ground – although these haven’t been launched yet.

Instead, the first tool the OS has launched is for consumers.

“They asked me if it was possible to show a location in terms of a label to a location within a camera view,” Gordon said.

After discussing it with Radburn, they worked out how to establish where a person was looking and connect to a server that has information about that area, and layer that on top of the camera view.

The tool is now integrated into the OS Maps app, so walkers can hold their phone up to a landscape and see a series of labels pop up on their screen. This allows them to figure out what they’re looking at, regardless of whether they have phone signal.

And if they do have reception, the app also offers extra crucial information like walking routes, or local pubs.

Gordon said that the OS continues to keep its eye on the rest of the tech market, adding that – “as we’re not too far from consumer grade AR headsets” – it is hoping to collaborate with companies developing those sorts of solutions.

But of course, the OS has at its heart the paper map, and that’s where Gordon wants to focus AR next.

“I want to use it to trigger information,” she said, using the example of the three peaks challenge – where hikers will tackle all of the UK’s highest mountains one after the other.

“I’d like to put a 3D version of the peaks rising from the paper map, and add routes on top,” Gordon said.

Layla Gordon pointing tablet at 2D map of Mars to generate 3D image

Layla Gordon demos the idea with a 2D paper map of Mars

“At the moment, when the pages look flat, you can’t appreciate how challenging they might be. Not that many people are that good at interpreting contour lines; this way you can get a good idea of your route.”

And, despite the OS’s hopes for increased use of such tech when 5G is rolled out across the country and offers people more accuracy, it is using National Map Reading Week to encourage people to develop those skills.

After all, as we put it to the OS team, the technology can fail or reception can cut out – not to mention that kit like the Maps app has a tendency to sap your battery.

Gordon’s response? “I think that’s why we always recommend using a paper map.” ®


Intel introduces an AI-oriented processor

The content below is taken from the original (Intel introduces an AI-oriented processor), to continue reading please visit the site. Remember to respect the Author & Copyright.

There are a number of efforts involving artificial intelligence (AI) and neural network-oriented processors from vendors such as IBM, Qualcomm and Google. Now, you can add Intel to that list. The company has formally introduced the Nervana Neural Network processor (NNP) for AI projects and tasks. 

This isn’t a new Intel design. The chips come out of Intel’s $400 million acquisition of a deep learning startup called Nervana Systems last year. After the acquisition, Nervana CEO Naveen Rao was put in charge of Intel’s AI products group. 

RELATED: Artificial intelligence in the enterprise: It’s on

“The Intel Nervana NNP is a purpose-built architecture for deep learning,” Rao said in a blog post formally announcing the chip. “The goal of this new architecture is to provide the needed flexibility to support all deep learning primitives while making core hardware components as efficient as possible.” 

He added: “We designed the Intel Nervana NNP to free us from the limitations imposed by existing hardware, which wasn’t explicitly designed for AI.” 

That’s an interesting statement, since Rao could be referring to the x86 architecture — or GPUs, since Nvidia’s CEO has never been shy about sniping at x86. 

Rao didn’t go into design specifics, only that the NNP does not have a standard cache hierarchy of an x86, and on-chip memory is managed by software directly. He also said the chip was designed with high speed on- and off-chip interconnects, enabling “massive bi-directional data transfer.” 

Using self-learning chips to develop AI applications 

Intel CEO Brian Krzanich had his own blog post on the subject. 

“Using Intel Nervana technology, companies will be able to develop entirely new classes of AI applications that maximize the amount of data processed and enable customers to find greater insights — transforming their businesses,” he wrote. 

Krzanich also revealed that Facebook was involved in the design of the processor, although he did not elaborate beyond saying Facebook worked with Intel “in close collaboration, sharing its technical insights.” 

Now, why would Facebook care? Because one of the potential uses for the NNP as described by Krzanich is in social media to “deliver a more personalized experience to their customers and offer more targeted reach to their advertisers.” 

Neuromorphic chips are inspired by the human brain and designed to be self-learning, so as they perform a task they get better at it and look at new ways of executing it. 

A recent documentary on the Japanese news channel NHK World illustrated this: an AI application that practiced millions of games of shogi, a Japanese chess-like board game, came up with strategies that had not been programmed into it, flabbergasting both the developer and the human player it roundly trounced. It thought for itself.

All of that reminds me of Ian Malcolm’s comment in Jurassic Park: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

Join the Network World communities on Facebook and LinkedIn to comment on topics that are top of mind.

Automatically delete files in Downloads folder & Recycle Bin after 30 days in Windows 10

The content below is taken from the original (Automatically delete files in Downloads folder & Recycle Bin after 30 days in Windows 10), to continue reading please visit the site. Remember to respect the Author & Copyright.

Most people download lots of files to their computers but forget to delete the unnecessary ones. Windows 10 now introduces a feature where you can automatically delete files in the Downloads folder & Recycle Bin after 30 days.

If you use a file on a daily or even weekly basis, keeping it in your Downloads folder makes sense. However, many PC users download files and forget about them after a few days. The same thing happens with the Recycle Bin: although we delete files from the Desktop or other drives, we often forget to remove them from the Recycle Bin.

To head off potential low-storage problems, Microsoft earlier included a feature called Storage Sense.

However, if you install the Windows 10 Fall Creators Update (v1709), you get even more features alongside Storage Sense. You can now delete files from the Recycle Bin as well as the Downloads folder automatically after 30 days.

Delete files in Downloads folder & Recycle Bin after 30 days

This feature is included in the Windows Settings app. Open it by pressing Win+I and go to System > Storage. On the right-hand side, you will find an option called Storage Sense. If it is turned off, toggle the button to turn it on.

At the same place, you will see another option called Change how we free up space. Click on it to set it up. On the next page, you will see three options:

  • Delete temporary files that my apps aren’t using
  • Delete files that have been in the recycle bin for over 30 days
  • Delete files in the Downloads folder that haven’t changed in 30 days

You need to check the second and third options. You can also select all three if you want to remove all the temporary files that your apps used earlier but aren’t using anymore.

Remember not to keep any useful files in the Downloads folder, as they will be automatically deleted after 30 days from now on.

Anand Khanse is the Admin of TheWindowsClub.com, a 10-year Microsoft MVP Awardee in Windows (2006-16) & a Windows Insider MVP. Please read the entire post & the comments first, create a System Restore Point before making any changes to your system & be careful about any 3rd-party offers while installing freeware.

SegaPi Zero Shows Game Gear Some Respect

The content below is taken from the original (SegaPi Zero Shows Game Gear Some Respect), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you were a gamer in 1991, you were presented with what seemed like an easy enough choice: you could get a Nintendo Game Boy, the gray brick with a slightly nauseating green-tinted screen that was already a couple of years old, or you could get yourself a glorious new Sega Game Gear. With full color display and games that were ported straight from Sega’s home consoles, it seemed like the Game Gear was the true future of portable gaming. But of course, that’s not how things actually went. In reality, technical issues like abysmal battery life held the Game Gear back, and conversely Nintendo and their partners were able to squeeze so much entertainment out of the Game Boy that they didn’t even bother creating a true successor for it until nearly a decade after its release.

While the Game Gear was a commercial failure compared to the Game Boy back in the 1990s and never got an official successor, it’s interesting to think of what may have been. A hypothetical follow-up to the Game Gear was the inspiration for the SegaPi Zero created by [Halakor]. Featuring rechargeable batteries, more face buttons, and a “console” mode where you can connect it to a TV, it plays to the original Game Gear’s strengths and improves on its weaknesses.

As the name implies, the SegaPi Zero is powered by the Raspberry Pi Zero, and an Arduino Pro Micro handles user input via tactile switches mounted behind all the face buttons. A TP4056 charging module and step-up converter are also hiding in there, which take care of the six 3.7 V lithium-ion 14500 batteries nestled into the original battery compartments. With a total capacity of roughly 4,500 mAh, the SegaPi Zero should be able to improve upon the 3 – 4 hour battery life that helped doom the original version.

There’s no shortage of projects that cram a Raspberry Pi into a classic game system, but more often than not, they tend to be Nintendo machines. It could simply be out of nostalgia for Nintendo’s past glories, but personally we’re happy to see another entry into the fairly short list of Sega hacks.

Filed under: classic hacks, Raspberry Pi

How to Create Simple Server Monitors with PowerShell

The content below is taken from the original (How to Create Simple Server Monitors with PowerShell), to continue reading please visit the site. Remember to respect the Author & Copyright.

http://bit.ly/2ikFpJF

Mapping the blockchain project ecosystem

The content below is taken from the original (Mapping the blockchain project ecosystem), to continue reading please visit the site. Remember to respect the Author & Copyright.

Josh Nussbaum
Contributor

Joshua Nussbaum is a partner at the New York-based venture firm, Compound.

Blockchain technology, cryptocurrencies, and token sales are all the rage right now. In the 5+ years I’ve been working in the VC industry, this is by and large the fastest I’ve seen any area of technology take off in terms of new company (or project) formation.

It wasn’t too long ago that founders and VCs were mainly focused on centralized exchanges, enterprise or private blockchain solutions, wallets, amongst several other popular blockchain startup ideas that dominated the market from 2012 to somewhere around 2016.

However, as I wrote about a few months ago, the rise of Ethereum with its Turing-complete scripting language and the ability for developers to include state in each block, has paved the way for smart contract development. This has led to an influx of teams building decentralized projects seeking to take advantage of the most valuable property of blockchains — the ability to reach a shared truth that everyone agrees on without intermediaries or a centralized authority.

There are many exciting developments coming to market both in terms of improving existing blockchain functionality as well as the consumer’s experience. However, given the rapid pace at which projects are coming to market, I’ve found it to be difficult to keep track of each and every project and where each one fits into the ecosystem.

Furthermore, it’s easy to miss the forest for the trees without a comprehensive view of what the proverbial forest looks like.

As a result, here’s a compiled list of all of the decentralized, blockchain-based projects that I have been following and was able to dig up through research, along with recommendations from friends in the ecosystem.

A quick disclaimer: While it’s difficult to pigeonhole a number of projects into one category, I did my best to pinpoint the main purpose or value proposition of each project and categorize them as such. There are certainly many projects that fall into the gray area and could fit into multiple categories. The hardest ones are the “fat protocols” which offer functionality in more than a couple of areas.

Below is an overview of each broader category I’ve identified, touching on some of the subcategories that comprise them:

Currencies

For the most part, these projects were created with the intention of building a better currency for various use cases and represent either a store of value, medium of exchange, or a unit of account.

While Bitcoin was the first and is the most prominent project in the category, many of the other projects set out to improve upon a certain aspect of Bitcoin’s protocol or tailor it towards a specific use case.

The Privacy subcategory could probably fall into either the Payments or Base Layer Protocols categories, but I decided to break them out separately given how important anonymous, untraceable cryptocurrencies (especially Monero and ZCash) are for users who would like to conceal a transaction because they prefer not to broadcast a certain purchase for one reason or another, or for enterprises who don’t want to reveal trade secrets.

Developer Tools

Projects within this category are primarily used by developers as the building blocks for decentralized applications. In order to allow users to directly interact with protocols through application interfaces (for use cases other than financial ones), many of the current designs that lie here need to be proven out at scale.

Protocol designs around scaling and interoperability are active areas of research that will be important parts of the Web3 development stack.

In my opinion, this is one of the more interesting categories at the moment from both an intellectual curiosity and an investment standpoint.

In order for many of the blockchain use cases we’ve been promised to come to fruition such as fully decentralized autonomous organizations or a Facebook alternative where users have control of their own data, foundational, scalable infrastructure needs to grow and mature.

Many of these projects aim at doing just that.

Furthermore, these projects aren’t in a “winner take all” area in the same way that say a cryptocurrency might be as a store of value.

For example, building a decentralized data marketplace could require a number of Developer Tools subcategories such as Ethereum for smart contracts, Truebit for faster computation, NuCypher for proxy re-encryption, ZeppelinOS for security, and Mattereum for legal contract execution to ensure protection in the case of a dispute.

Because these are protocols and not centralized data silos, they can talk to one another, and this interoperability enables new use cases to emerge through the sharing of data and functionality from multiple protocols in a single application.

Preethi Kasireddy does a great job of explaining this dynamic here.

Fintech

This category is fairly straightforward. When you’re interacting with a number of different protocols and applications (such as in the Developer Tools example above), many may have their own native cryptocurrency, and thus a number of new economies emerge.

In any economy with multiple currencies, there’s a need for tools for exchanging one unit of currency for another, facilitating lending, accepting investment, etc.

The Decentralized Exchanges (DEX) subcategory could arguably have been categorized as Developer Tools.

Many projects are already starting to integrate the 0x protocol and I anticipate this trend to continue in the near future. In a world with the potential for an exorbitant number of tokens, widespread adoption of applications using several tokens will only be possible if the complexity of using them is abstracted away — a benefit provided by decentralized exchanges.

Both the Lending and Insurance subcategories benefit from economies of scale through risk aggregation.

By opening up these markets and allowing people to now be priced in larger pools or on a differentiated, individual basis (depending on their risk profile), costs can decrease and therefore consumers should in theory win.

Blockchains are both stateful and immutable so because previous interactions are stored on chain, users can be confident that the data that comprises their individual history hasn’t been tampered with.

Sovereignty

As the team at Blockstack describes in their white paper:

“Over the last decade, we’ve seen a shift from desktop apps (that run locally) to cloud-based apps that store user data on remote servers. These centralized services are a prime target for hackers and frequently get hacked.”

Sovereignty is another area that I find most interesting at the moment.

While blockchains still suffer from scalability and performance issues, the value provided by their trustless architecture can supersede performance issues when dealing with sensitive data; the safekeeping of which we’re forced to rely on third parties for today.

Through cryptoeconomics, users don’t need to trust in any individual or organization but rather in the theory that humans will behave rationally when correctly incentivized.

The projects in this category provide the functionality necessary for a world where users aren’t forced to trust in any individual or organization but rather in the incentives implemented through cryptography and economics.

Value Exchange

A key design of the Bitcoin protocol is the ability to have trust amongst several different parties, despite there being no relationship or trust between those parties outside of the blockchain. Transactions can be created and data shared by various parties in an immutable fashion.

It’s widely considered fact that people begin to organize into firms when the cost of coordinating production through a market is greater than within a firm individually.

But what if people could organize into this proverbial “firm” without having to trust one another?

Through blockchains and cryptoeconomics, the time and complexity of developing trust is abstracted away, which allows a large number people to collaborate and share in the profits of such collaboration without a hierarchical structure of a traditional firm.

Today, middlemen and rent seekers are a necessary evil in order to keep order, maintain safety, and enforce the rules of P2P marketplaces. But in many areas, these cryptoeconomic systems can replace that trust, and cutting out middlemen and their fees will allow users to exchange goods and services at a significantly lower cost.

The projects in the subcategories can be broken down into two main groups: fungible and non-fungible. Markets that allow users to exchange goods and services that are fungible will commoditize things like storage, computation, internet connectivity, bandwidth, energy, etc. Companies that sell these products today compete on economies of scale which can only be displaced by better economies of scale.

By opening up latent supply and allowing anyone to join the network (which will become easier through projects like 1Protocol) this no longer becomes a daunting task, once again collapsing margins towards zero.

Non-fungible markets don’t have the same benefits although they still allow providers to earn what their good or service is actually worth rather than what the middlemen thinks it’s worth after they take their cut.

Shared Data

One way to think about the shared data layer model is to look at the airline industry’s Global Distribution Systems (GDS’s). GDS’s are a centralized data warehouse where all of the airlines push their inventory data in order to best coordinate all supply information, including routes and pricing.

This allows aggregators like Kayak and other companies in the space to displace traditional travel agents by building a front end on top of these systems that users can transact on.

Typically, markets that have been most attractive for intermediary aggregators are those in which there is a significant barrier to entry in competing directly, but whereby technological advances have created a catalyst for an intermediary to aggregate incumbents, related metadata, and consumer preferences (as was the case with GDS’s).

Through financial incentives provided by blockchain based projects, we’re witnessing the single most impactful technological catalyst which will open up numerous markets, except the value no longer will accrue to the aggregator but rather to the individuals and companies that are providing the data.

In 2015, Hunter Walk wrote that one of the biggest missed opportunities of the last decade was eBay’s failure to open up their reputation system to third parties which would’ve put them at the center of P2P commerce.

I’d even take this a step further and argue that eBay’s single most valuable asset is reputation data which is built up over long periods of time, forcing user lock-in and granting eBay the power to levy high taxes on its users for the peace of mind that they are transacting with good actors. In shared data blockchain protocols, users can take these types of datasets with them as other applications hook into shared data protocols, reducing barriers to entry; increasing competition and as a result ultimately increasing the pace of innovation.

The other way to think about shared data protocols can be best described using a centralized company, such as Premise Data, as an example. Premise Data deploys network contributors who collect data from 30+ countries on everything from specific food/beverage consumption to materials used in a specific geography.

The company uses machine learning to extract insights and then sells these datasets to a range of customers. Rather than finding and hiring people to collect these datasets, a project could be started that allows anyone to collect and share this data, annotate the data, and build different models to extract insights from the data.

Contributors could earn tokens which would increase in value as companies use the tokens to purchase the network’s datasets and insights. In theory, the result would be more contributors and higher quality datasets as the market sets the going rate for information and compensates participants accordingly relative to their contribution.

There are many similar possibilities as the “open data platform” has been a popular startup idea for a few years now with several companies finding great success with the model. The challenge I foresee will be in sales and business development.

Most of these companies sell their dataset to larger organizations and it will be interesting to see how decentralized projects distribute theirs in the future. There are also opportunities that weren’t previously possible or profitable as a standalone, private organization to pursue, given that the economics don’t work for a private company.

Authenticity

Ultimately, cryptocurrencies are just digital assets native to a specific blockchain and projects in this category are using these digital assets to represent either real world goods (like fair tickets) or data.

The immutability of public blockchains allows network participants to be confident in the fact that the data written to them hasn’t been tampered with or changed in any way and that it will be available and accessible far into the future.

Hence why, for sensitive data or markets for goods which have traditionally been rife with fraud, it would make sense to use a blockchain to assure the user of their integrity.

Takeaways

While there’s a lot of innovation happening across all of these categories, the projects just getting started that I’m most excited about are enabling the web3 development stack by providing functionality that’s necessary across different use cases, sovereignty through user access control of their data, as well as fungible value exchange.

Given that beyond financial speculation we’ve yet to see mainstream cryptocurrency use cases, infrastructure development and use cases that are vastly superior for users in either cost, privacy, and/or security in extremely delicate areas (such as identity, credit scoring, VPN’s amongst others) seem to be the most likely candidates to capture significant value.

Longer-term, I‘m excited about projects enabling entire ecosystems to benefit from shared data and the bootstrapping of networks (non-fungible value exchange). I’m quite sure there are several other areas that I’m not looking at correctly or haven’t been dreamt up yet!

As always if you’re building something that fits these criterion or have any comments, questions or points of contention, I’d love to hear from you.

Thank you to Jesse Walden, Larry Sukernik, Brendan Bernstein, Kevin Kwok, Mike Dempsey, Julian Moncada, Jake Perlman-Garr, Angela Tran Kingyens and Mike Karnjanaprakorn for all your help on the market map and blog post.

Disclaimer: Compound is an investor in Blockstack and two other projects mentioned in this post which have not yet been announced.

*This article first appeared on Medium and has been republished courtesy of Josh Nussbaum. 

Cloud Service Map for AWS and Azure Available Now

The content below is taken from the original (Cloud Service Map for AWS and Azure Available Now), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today, we are pleased to introduce a new cloud service map to help you quickly compare the cloud capabilities of Azure and AWS services in all categories. Whether you are planning a multi-cloud solution with Azure and AWS, or simply migrating to Azure, you will be able to use this service map to quickly orient yourself with the services required for a successful migration. You can use the service map side-by-side with other useful resources found in our documentation.

Excerpt from the Compute Section from the Cloud Service Map for AWS and Azure

The cloud service map (PDF available for download) is broken out into 13 sections to make navigation between each service simple:

  1. Marketplace – Cloud marketplace services bring together native and partner service offerings to a single place, making it easier for customers and partners to understand what they can do.
  2. Compute – Compute commonly refers to the collection of cloud computing resources that your application can run on.
  3. Storage – Storage services offer durable, highly-available, and massively-scalable cloud storage for your application, whether it runs in the cloud or not.
  4. Networking & Content Delivery – Allows you to easily provision private networks, connect your cloud application to your on-premises datacenters, and more.
  5. Database – Database services refers to options for storing data, whether it’s a managed relational SQL database that’s globally distributed or multi-model NoSQL databases designed for any scale.
  6. Analytics and big data – Make the most informed decision possible by analyzing all of the data you need in real time.
  7. Intelligence – Intelligence services enable natural and contextual interaction within your applications, using machine learning and artificial intelligence capabilities that include text, speech, vision, and search.
  8. Internet of Things (IoT) – Internet of Things (IoT) services connect your devices, assets, and sensors to collect and analyze untapped data.
  9. Management & monitoring – Management and monitoring services provide visibility into the health, performance, and utilization of your applications, workloads, and infrastructure.
  10. Mobile services – Mobile services enable you to reach and engage your customers everywhere, on every device. DevOps services make it easier to bring a higher quality app to market faster, and a number of engagement services make it easier to deliver performant experiences that feel tailored to each user.
  11. Security, identity, and access – A range of capabilities that protect your services and data in the cloud, while also enabling you to extend your existing user accounts and identities, or provisioning entirely new ones.
  12. Developer tools – Developer tools empower you to quickly build, debug, deploy, diagnose, and manage multi-platform, scalable apps and services.
  13. Enterprise integration – Enterprise integration makes it easier to build and manage B2B workflows that integrate with third-party software-as-a-service apps, on-premises apps, and custom apps.

The guidance is laid out in a convenient table that lets you easily locate and learn more about each service you are most interested in. In this instance, you can quickly see the service name, description and the name of the services in AWS and Azure. We’ve also provided hyperlinks for each Azure service.

Beyond service mapping, it’s worth noting that the Azure documentation provides a large array of additional resources to help app developers be successful with Azure. Here are just a few links to help get you started:

Thanks for reading and keep in mind that you can learn more about Azure by following our blogs or Twitter account. You can also reach the author of this post on Twitter.

Athletes should be implanted with microchips in order to catch drug cheats, says Olympians’ chief

The content below is taken from the original (Athletes should be implanted with microchips in order to catch drug cheats, says Olympians’ chief), to continue reading please visit the site. Remember to respect the Author & Copyright.

We microchip dogs, so why shouldn’t we microchip athletes, says World Olympians Association chief

Professional athletes should be fitted with microchips in an attempt to catch dopers, according to the head of an organisation representing Olympic athletes.

Mike Miller, chief executive of the World Olympians Association, said that anti-doping authorities needed to be prepared to implement radical new methods of drug detection if they are going to ensure clean sport.

Speaking at a Westminster Media Forum on Tuesday, Miller said that technological developments meant that microchips implanted in athletes’ bodies would soon be able to detect the use of banned substances.

>>> American amateur rider tests positive for seven banned drugs in single doping test

“The problem with the current anti-doping system is that all it says is that at a precise moment in time there are no banned substances,” Miller said.

“We need a system which says you are illegal substance free at all times, and if there are marked changes in markers, they will be detected.

“Some people say we shouldn’t do this to people. Well, we’re a nation of dog lovers; we chip our dogs. We’re prepared to do that and it doesn’t seem to harm them. So, why aren’t we prepared to chip ourselves?”

>>> Dr Hutch: The absurdity of doping excuses

However Nicole Sapstead, chief executive of UK Anti-Doping, was sceptical of Miller’s suggestion when she spoke at the same event, saying that there needed to be assurances that the microchips could not be tampered with, and raising concerns about whether the technology could be an invasion of athletes’ privacy.

“We welcome verified developments in technology which could assist the fight against doping,” Sapstead said.

“However, can we ever be sure that this type of thing could never be tampered with or even accurately monitor all substances and methods on the prohibited list?

“There is a balance to be struck between a right to privacy versus demonstrating that you are clean. We would actively encourage more research in whether there are technologies in development that can assist anti-doping organisations in their endeavours.”

Beware the GDPR ‘no win, no fee ambulance chasers’ – experts

The content below is taken from the original (Beware the GDPR ‘no win, no fee ambulance chasers’ – experts), to continue reading please visit the site. Remember to respect the Author & Copyright.

The UK’s incoming data protection laws could bring with them a wave of “no win, no fee”-style companies, experts have said.

Much of the discussion about the impact of the EU General Data Protection Regulation – which comes into force in May 2018 – has focused on the fines regulators can impose.

Although these are large – up to 4 per cent of annual turnover or €20m – lawyers and tech execs have said a surge in class-action suits could be a bigger financial burden.

“One point I think people miss when they’re looking at GDPR is that they are always looking at the regulator,” Julian Box, CEO of cloud biz Calligo, said during a roundtable event in London today.

“But the real challenge is going to come from people asking, ‘what data do you have on me?'”

Box argued that people who realised companies were holding data that they shouldn’t, could file a class-action suit – and that the costs related to that would “dwarf” those handed down by a regulator.

He added that some firms would be quick to latch on to the prospect, encouraging people to ask companies what data they hold on them and offering to assess whether they had a right to sue, on a no win, no fee basis.

“We truly think you’re going to see ambulance chasers here,” Box said.

Robert Bond, partner at law firm Bristows, agreed that there would be attempts to tap into this market, and that this would be especially so after a data breach.

“The area we foresee [being used] is emotional distress – having ambulance-chasing lawyers saying, ‘have you lost sleep because your data might have been exposed?'” Bond said.

“You can imagine, if a million people make a claim of £1,000 each, that dwarfs any of the other fines.”

Fines

He added that, with the cost of notifying data subjects about the breach, possible fines from the regulator, and related brand damage or falling share prices, there could be a “perfect storm” of costs.

Neil Stobart, global technical director at Cloudian, agreed, saying: “You can guarantee there will be a whole industry out there.”

The panel – which also included Noris Iswaldi, who leads global GDPR consulting at EY, and Peter O’Rourke, the director of IT at the University of Suffolk – said the biggest challenge is that firms often don’t know what data they collect, or where it is held.

In addition, there is a belief among some senior teams that this is a problem for the tech team to solve – for instance, by failing to realise the regulation covers paper records, or mistakenly thinking there is a simple technical solution to seeking out data held on their systems.

“The struggle is they think it can be solved by IT and it cannot,” said Stobart. “The data owner needs to look at their data, and say whether it’s relevant for them to keep.”

Other panellists agreed, saying that companies often try to hang on to data in the hope it will one day be valuable to the business.

“There’s a real cultural hump to get over, where companies have to get over the idea that it’s the data,” said Adam Ryan, chief commercial officer at Calligo. “But once you’ve got that moment of enlightenment, you can have much easier conversations.” ®


pi-top outs a new laptop for budding coders and hardware hackers

The content below is taken from the original (pi-top outs a new laptop for budding coders and hardware hackers), to continue reading please visit the site. Remember to respect the Author & Copyright.

U.K. edtech startup pi-top has a new learn-to-code-and-tinker machine. It’s another modular Raspberry Pi-powered laptop but this time they’ve reduced the number of steps needed to put it together, as well as adding a slider keyboard design so the Qwerty panel can be pulled out to provide access to a rail for mounting and tinkering with electronics.

It’s a neat combination of the original pi-top laptop concept and the lower cost pi-topCeed desktop which has a rail below the screen where add-on electronics can also be attached. (The $150 price-tag on the latter device has made it a popular option for schools and code clubs wanting kit for STEM purposes, according to the startup.)

The new pi-top laptop is the startup’s most expensive edtech device yet, priced at $319 with a Raspberry Pi 3 (or $284.99 without).

But it comes bundled with what the startup is calling an “inventor’s kit” — essentially a pre-picked selection of electronics components to enable a range of hardware DIY projects. Projects it says can be built using this kit include a music synth and the robot (pictured above).

It also says its software includes step-by-step guides describing “dozens of invention pathways” for tinkering and building stuff using the components in the inventor’s kit.

The laptop itself has a 14 inch 1080p LCD color display; comes with an 8GB SD card for storage (and built-in cloud management for additional storage and remote access to data); and battery life that’s slated as good for 8 hours+ use.

As well as tinkerable electronics, the team develops its own OS (called Pi-TopOS Polaris), which runs on the Raspberry Pi powering the hardware, plus learn-to-code software and STEM-focused games (such as a Civilization-style MMORPG called CEEDUniverse).

Another UK startup, Kano, plays in a similar space — and has just announced its own learn-to-code ‘laptop’. However, pi-top’s device looks considerably more sophisticated, both in terms of tinkering possibilities and software capabilities. (Though the Kano kit is priced a little cheaper, at $250.)

The pi-top laptop’s bundled software suite not only supports web browsing, but the startup says there’s also access to a full Microsoft Office-compatible work suite. And it touts its learning software as the only suite so far to be endorsed by the Oxford Cambridge RSA Review Board, chalking up another STEM credential.

Just under a year ago the London-based startup closed a $4.3M Series A funding round to push its STEM platform globally.

They now say their hardware platform is used in more than 1,500 schools around the world, up from more than 500 just under a year ago, and that the team now ships devices to more than 80 countries.

“pi-top’s mission is to provide powerful, inspiring products that bring science, technology, engineering, arts and mathematics to life,” said CEO Jesse Lozano, commenting in a statement. “Our newest-generation of modular laptops helps achieve that goal. Now, anyone from young musicians to scientists to software developers to inventors can explore and create wonderful new projects using the pi-top laptop.

“We’re offering learning beyond the screen and keyboard, enabling wider exploration of computer science and basic electronics, ensuring that young learners have the opportunity to be inspired by a world of STEAM-based learning.”

What Is Azure File Sync

The content below is taken from the original (What Is Azure File Sync), to continue reading please visit the site. Remember to respect the Author & Copyright.

Have you ever struggled with file server capacity? Would you like the same file shares to be available in multiple offices? Would you like to centralize the backup of file shares? Would you like all of that, but make it transparent and without compromising performance for users? Read on!

 

 

A New Hybrid Service

Microsoft has managed to introduce Azure services into businesses without customers necessarily relocating or deploying applications in Azure. Services such as StorSimple (tiered storage appliance), Azure Backup, and Azure Site Recovery (disaster recovery) supplement existing investments in IT with cloud-based or cloud-first storage, backup, and DR solutions.

Azure File Sync is another of these hybrid services, one that solves problems with the good ol’ file server, which persists despite the best efforts of SharePoint, OneDrive for Business, and other offerings from Microsoft, its partners, and competitors. Azure File Sync is in preview now and Microsoft is keen for you to test it and give your feedback.

Synchronization

The first function of Azure File Sync is to synchronize file shares (data and ACLs) to a general-purpose storage account using the Azure Files service. You create a sync group and then specify a path on a file server to synchronize to Azure. A non-disruptive agent is installed onto the file server, meaning that there is no need to relocate data onto different volumes to take advantage of this service. Non-system data volumes are supported. The results of this are:

  • Azure File shares are created in the storage account.
  • Azure becomes the master copy of the shares – more on this later.
  • End users have no idea that this has happened.
  • Changes to on-premises data are synchronized in real time to Azure.

Note that remote users can connect to file shares using a Net Use command to mount the share across the Internet; latency and bandwidth will have an impact on performance.
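
Because the synchronized data lands in an ordinary Azure Files share, you can also inspect it directly with the storage SDK rather than mounting it over SMB. Below is a minimal Python sketch using the legacy azure-storage-file package; the account name, key, and share name are placeholders for whatever your sync group targets.

    # Minimal sketch: list the contents of the Azure file share that a sync group targets.
    # Assumes 'pip install azure-storage-file'; account/share names below are placeholders.
    from azure.storage.file import FileService

    file_service = FileService(account_name="<storage-account>", account_key="<account-key>")

    # The same files that sit on the on-premises file server should appear here.
    for item in file_service.list_directories_and_files("<share-name>"):
        print(item.name)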

Inter-Office Sharing

A company with more than one office might require file share content to exist in more than one office. It is possible to synchronize the master shares in Azure to multiple file servers in different offices. This means that many offices can have the same file share, with synchronization via the master copy in Azure.

Unfortunately, Azure File Sync does not synchronize file locks. Microsoft is aware of this demand and will work on it after general availability. If users in two sites update the same file at the same time, the second save will result in a copy of the original file. No data is lost.

 


Inter-Region Sharing

A larger corporation might have file servers in different continents and wish to deploy common file shares. Azure Files will be able to replicate across Azure regions. A file server can then be configured to connect to the closest replica of the file shares in Azure. Latency will be minimized and we get to take advantage of Azure’s high-speed backbone for inter-region synchronization.

Backup

Backing up branch office file servers is a pain in the you-know-where. If a branch office file server is synchronizing to Azure, then all of the data is in a nice central place that is perfect for backup. And Microsoft recognized that possibility.

Azure Backup will be used to back up shares in Azure Files. During the preview, backups will:

  • Be done using a new snapshot feature in storage accounts. The snapshots remain in the storage account that contains the synchronized Azure Files.
  • Offer up to 120 days of retention.

Microsoft acknowledged that the above backup would not be ideal. So, with general availability the solution changes:

  • Backups will be stored outside of the storage account in the recovery services vault.
  • Long-term retention will be possible for organizations that have regulatory requirements.

Disaster Recovery

Imagine that you lose a file server in an office. Azure File Sync has a DR solution that is very similar to the DR solution in StorSimple. You can deploy a new file server, connect it to the shares in Azure, and the metadata of the shares will be downloaded. At that point, end users can see and use the shares and their data. Over time, files will be downloaded to the file server.

One could see how this DR solution could also be used to seed a file server in a new branch office with existing shares.

Cloud Tiering

Now we get to one of the best features of Azure File Sync. I cannot remember the last time I saw a file server that was not struggling with disk capacity. Most of the data on that file server is old, never used, but cannot be deleted. No one has the time to figure out what’s not being used and no one wants to risk deleting something that will be required in the future.

You can enable a tiering policy on a per-file-server basis; different servers connected to the same shares can have different policies. The tiering policy allows you to specify how much of the data should be kept on the file server. Cold files are removed from the file server, with the copies remaining in the master in the Azure storage account.

This tiering is seamless. The only clues that the end user will have that the cold files are in Azure are:

  • The icons are greyed.
  • An offline (O) file attribute.

When the user browses a share, on-premises copies and online files (cold files tiered to Azure) appear side by side. If a file format supports streaming, then that file will be streamed to the client via the file server.
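
If you want to check programmatically whether a particular file has been tiered, the offline attribute is the thing to look for. The snippet below is a small, Windows-only sketch using Python's standard library; the path is a placeholder.

    # Windows-only sketch: check the 'offline' attribute carried by tiered files.
    import os
    import stat

    def is_tiered(path):
        # st_file_attributes is only populated on Windows.
        attrs = os.stat(path).st_file_attributes
        return bool(attrs & stat.FILE_ATTRIBUTE_OFFLINE)

    print(is_tiered(r"D:\Shares\Finance\old-report.xlsx"))  # placeholder path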

Microsoft is using some interesting terminology in this scenario. The file server is referred to as a caching device for performance. All of the data in the file shares is in Azure. With tiering enabled, we are keeping the hot files on-premises with a reduced storage requirement. This means that we have effectively relocated shares to Azure but are using file servers as caches to maintain LAN-speed performance.

Anti-Virus and On-Premises Backup

If you have enabled tiering, then you need to be very careful. Only a small number of well-known anti-virus solutions have been tested and verified as working. The worry is that a scheduled scan will cause online-only files to be downloaded.

Once you enable tiering, you must not do an on-premises backup. Think about it; the backup is going to cause those online-only files to be downloaded for backup. Instead, use Azure Backup to do your backups.

Availability

Microsoft has documented a lot of information about Azure File Sync. At this point in the preview, the service is only available in the following Azure regions:

  • West US
  • West Europe
  • South East Asia
  • Australia East

The preview supports file servers running:

  • Windows Server 2012 R2 (Full with UI)
  • Windows Server 2016 (Full with UI)

Opinion

I have known about Azure File Sync for quite a while under NDA. I have been itching to start talking about it because I see it as a killer service that an incredible number of businesses could benefit from. I would have loved to have had Azure File Sync when I last ran infrastructure in branch offices. If you struggle with file server capacity, or file servers in branch office deployments wreck your head, then I would strongly encourage you to get on the preview. Get to know this service and give Microsoft your feedback.


Developer Mailing List Digest September 30 – October 6

The content below is taken from the original (Developer Mailing List Digest September 30 – October 6), to continue reading please visit the site. Remember to respect the Author & Copyright.

Summaries

Sydney Forum Schedule Available

TC Nomination Period Is Now Over

Prepping for the Stable/Newton EOL

  • The published timeline is:
    • Sep 29 : Final newton library releases
    • Oct 09 : stable/newton branches enter Phase III
    • Oct 11 : stable/newton branches get tagged EOL
  • Given that those key dates were a little disrupted, Tony Breeds is proposing adding a week to each so the new timeline looks like:
    • Oct 08 : Final newton library releases
    • Oct 16 : stable/newton branches enter Phase III
    • Oct 18 : stable/newton branches get tagged EOL
  • Thread

Policy Community Wide Goal Progress

Tempest Plugin Split Community Wide Goal Progress

  • The goal
  • The reviews
  • List of projects which have already completed the goal:
    • Barbican
    • Designate
    • Horizon
    • Keystone
    • Kuryr
    • Os-win
    • Sahara
    • Solum
    • Watcher
  • List of projects which are working on the goal:
    • Aodh
    • Cinder
    • Magnum
    • Manila
    • Murano
    • Neutron
    • Neutron L2GW
    • Octavia
    • Senlin
    • Zaqar
    • Zun
  • Message

Choosing an Azure Virtual Machine – September 2017

The content below is taken from the original (Choosing an Azure Virtual Machine – September 2017), to continue reading please visit the site. Remember to respect the Author & Copyright.

This post explains how to select an Azure virtual machine series and size. It updates past versions of this post, adding the D_v3, E_v3, and L_v2 virtual machines, as well as the Azure Compute Unit (ACU) measurement.

 

Order From The Menu

Azure is McDonald’s, not a Michelin-starred restaurant. You cannot say, “I would like a machine with 4 cores, 64GB RAM, and a 200GB C: drive.” That simply is not possible in Azure. Instead, there is a pre-set list of machine series and, within each series, a pre-set list of sizes.

Unless you upload your own template, the size of the OS disk is always the same, no matter what disk size the pricing pages claim (that figure is actually the size of the temp drive):

  • Standard (HDD) un-managed disk: 127GB
  • Premium (SSD) un-managed disk or Standard/Premium managed disk: 128GB

Any data you have goes onto a data drive, whose size you specify (and therefore whose cost you control). Remember that storage (OS and data disks) costs extra!

Sizing a Virtual Machine

There are two basic things to consider here. The first is quite common sense. The machine will need as much RAM, CPU (see Azure Compute Units later in this article), and disk as your operating system and service(s) will consume. This is no different than how you sized on-premises physical or virtual machines in the past.

Other elements of capacity that are dictated by the size of the machine include:

  • Disk throughput
  • Maximum number of data disks
  • Maximum number of NICs
  • Maximum bandwidth

The other factor of cloud-scale computing is that you should deploy an army of ants, not a platoon of giants. Big virtual machines are extremely expensive. A more affordable way to scale is to deploy smaller machines that share a workload and can be powered on (billing starts) and off (billing stops) based on demand or possibly using the Scale Sets feature.
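
If you want to see which sizes are actually offered in a given region, along with the core, RAM, and data-disk limits discussed above, you can pull the list programmatically. The sketch below uses the current azure-identity and azure-mgmt-compute Python packages (newer than this article); the subscription ID and region are placeholders.

    # Sketch: list the VM sizes available in one region, with their capacity limits.
    # Assumes 'pip install azure-identity azure-mgmt-compute' and a signed-in credential.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
    REGION = "westeurope"                   # placeholder

    client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    for size in client.virtual_machine_sizes.list(location=REGION):
        print(f"{size.name}: {size.number_of_cores} cores, "
              f"{size.memory_in_mb} MB RAM, max {size.max_data_disk_count} data disks")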


Azure Compute Units (ACUs)

Microsoft created the concept of an ACU to help us distinguish between the various processor and virtual machine series options that are available to us in Azure. The low spec Standard A1 virtual machine has a baseline rating of 100 and all other machines are scored in comparison to that machine. A virtual machine size with a low number offers low compute potential and a machine with a higher number offers more horsepower.

Note that some scores are marked with an asterisk (*); this represents a virtual machine that is enhanced using Intel Turbo technology to boost performance. The results from the machine can vary depending on the machine size, the task being done, and other workloads also running on the machine.
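
As a rough back-of-the-envelope illustration (using the midpoint of each ACU range quoted later in this post, not an official benchmark), you can compare relative per-core horsepower against the Standard A1 baseline like this:

    # Rough ACU comparison against the Standard A1 baseline of 100.
    # Midpoints of the ranges quoted in this post; turbo-boosted sizes vary in practice.
    BASELINE = 100
    acu_midpoints = {"A_v2": 100, "D_v3": 175, "E_v3": 175, "G": 210, "D_v2": 230, "F": 230, "H": 295}

    for series, acu in sorted(acu_midpoints.items(), key=lambda kv: kv[1]):
        print(f"{series}-Series: roughly {acu / BASELINE:.1f}x the per-core compute of the A1 baseline")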

Choosing a Virtual Machine Series

Browse to the HPE or Dell sites and have a look at the standard range of rack servers. You will find DL360s, R420s, DL380s, R730s, and so on. Each of these is a series of machines. Within each series, you will find a range of pre-set sizes. Once you select a series, you find the size that suits your workload, and the per-hour price (which is charged per minute) of running that machine is listed. Let’s take a look at the different series of Azure virtual machines. Please remember that not all of the series are available in all regions.

Virtual Machine Versioning

In the server world, Dell replaced the R720 with an R730 and we stopped buying R720s and started buying R730s. HPE replaced the DL380 G6 with a DL380 G7 (and then a Gen 8) and we stopped buying the older machine and started buying the newer machine.

The same thing happens in Azure. As the underlying Azure fabric improves, Microsoft occasionally releases a new version of a series. For example, the D_v2-Series replaced the D-Series. The Standard A_v2-Series replaced the Standard A-Series.

The older series is still available but it usually makes sense to adopt the newer series. Late in 2016, Microsoft changed pricing so that the newer series was normally more affordable than the older one.

If you are reading this post, then you are deploying new services/machines and should be using the latest version of a selected series. I will not detail older, superseded series of machines in this article.

Virtual Machine Specializations

A “code” is normally used in the name of a virtual machine to denote a special feature in that virtual machine size. Examples of such codes are:

  • S: The virtual machine supports Premium Storage (SSD) as well as Standard Storage (HDD – the normal storage used). Note that the S variant is usually the same price as the non-S variant.
  • M: The size in question offers more memory (RAM) than usual.
  • R: An additional Remote Direct Memory Access (RDMA) NIC is added to the virtual machine, offering high bandwidth, low latency, and low CPU impact data transfers.

A-Series Basic (ACU Score Not Available)

A is the start of the alphabet and this is the entry-level virtual machine.

This is the lowest and cheapest series of machines in Azure. The A-Series (Basic and Standard) uses simulated AMD Opteron 4171 HE 2.1GHz virtual processors. This AMD processor was designed to scale out cores with efficient electricity usage, rather than for horsepower, so it’s fine for lighter loads like small web servers, domain controllers, and file servers.

The Basic A-Series machines have some limitations:

  • Data disks are limited to 300 IOPS each.
  • You cannot load balance Basic A-Series machines. This means you cannot use NAT in an ARM/CSP deployment via an Azure load balancer.
  • Auto-Scale is not available.
  • The temp drive is based on HDD storage in the host. Everything outside of the A-Series Basic and A_v2-Series Standard uses SSD storage for the temp drive.
  • Only Standard Storage is used. Everything outside of the A-Series Basic and A_v2-Series Standard offers Premium Storage as an option if you select an “S” type, such as a DS_v2 virtual machine.

I like this series for domain controllers because my deployments are not affected by the above. It keeps the costs down.

A_v2-Series Standard (ACU: 100)

This is the most common machine that I have encountered in Azure. Using the same hardware as the Basic A-Series, the Standard A_v2-Series has some differences:

  • Data disks are limited to 500 IOPS, which is the norm for Standard Storage (HDD) accounts.
  • You can use Azure load balancing.
  • Auto-Scaling is available to you.

These are the machines I use the most. They are priced well and offer good entry-level worker performance. If I find that I need more performance, then I consider D_v2-Series or F-Series.

D_v2-Series (ACU: 210-250*)

When I think D-Series, I think “D for disk”.

The key feature of the D_v2-Series machine is disk performance; it can offer more throughput (Mbps) and speed (IOPS) than an F-Series virtual machine and because of this, is considered an excellent storage performance series for workloads such as databases.

Additional performance is possible because this is the first of the Azure machines to offer an Intel Xeon processor, the Intel Xeon E5-2673 v3 2.4GHz CPU, that can reach 3.1GHz using Intel Turbo Boost Technology 2.0.

The D_v2-Series also offers an “S” option, which supports Premium Storage. Microsoft recommends the DS_v2-Series for SQL Server workloads. And that has led to some of my customers asking questions when they get their first bill. Such a blanket spec generalization is unwise; some SQL workloads are fine with HDD storage and some will require SSD. If you need lots of IOPS, then Premium Storage is the way to go. Don’t forget that you can aggregate Standard Storage data disks to get more IOPS.

F-Series (ACU: 210-250*)

This name reminds me of a pickup truck, and I think “all-rounder with great horsepower” when I think F-Series.

The F-Series uses the same Intel Xeon E5-2673 v3 2.4GHz CPU as the D_v2-Series with the same 3.1GHz turbo boost, albeit with slightly lower disk performance. The major difference between the F-Series and the D_v2-Series is that the D_v2-Series focuses on lots of RAM for each core (2 vCPUs and 7GB RAM, for example) but the F-Series, which is intended for application/web server workloads, has a more balanced allocation (2 cores and 4GB RAM, for example).

I see the F-Series, which also has an “S” variant, as being the choice when you need something with more CPU performance than an A-Series machine, but without as much focus on disk performance as the D_v2-Series.

D_v3-Series (ACU: 160 – 190)

Time to confuse things! I would normally not discuss older versions but there is a bit of an asterisk here. The D_v3, currently only available in some regions, is special. That is because this generation of virtual machines was the first to run on Windows Server 2016 hosts that have Intel Hyperthreading enabled on 2.3GHz Intel Xeon E5-2673 v4 processors and can reach 3.5GHz with Intel Turbo Boost Technology 2.0. This means that you get approximately 28 percent less CPU performance from a D_v3 virtual machine than you would from the equivalent D_v2 virtual machine. Microsoft compensates for this by making the D_v3 virtual machine cheaper by the same amount.

Like its predecessor, this machine (also with “S” variants) is intended for database workloads. The CPU capacity might be lower, but often the number of threads for parallel process execution is more important with SQL Server, and this generation makes SQL Server more affordable.

E_v3-Series (ACU: 160 – 190)

The older D_v2-Series included several sizes with large amounts of memory that Microsoft referred to as “memory optimized”. The newer D_v3-Series does not include those large-RAM sizes; instead, a new series was created called the E_v3-Series, which also has “S” variants.

If you need something like the D_v3-Series but with larger amounts of RAM, then the E_v3-Series is the place to look. These machines run on the same hardware as the D_v3-Series but the virtual machines have larger sizes.

G-Series (ACU: 180 – 240*)

G is for Goliath.

The G-Series and its “S” variant virtual machines were once the biggest machines in the public cloud, with up to 448GB RAM, based on hosts with a 2.0GHz Intel Xeon E5-2698B v3 CPU. If you need a lot of memory, then these are the machines to choose.

The G-Series also offers more data disk capacity and performance than the (much) more affordable D_v2 or E_v3-Series.

M-Series (ACU: 160 – 180)

This series along with the D_v3 and the E_v3 is the latest to run on WS2016 hosts with Intel Hyperthreading. The trait of the M-Series is that these machines are massive with up to 128 cores and 2TB of RAM.

The G-Series offers better CPU performance than the M-Series but the M-Series can offer much more RAM.

N-Series (ACU Not Available)

The N name stands for NVIDIA and that is because the hosts will have NVIDIA chipsets, which are presented directly to the virtual machines using a new Hyper-V feature called Discrete Device Assignment (DDA).

There are three versions of the N-Series:

  • NV-Series: These machines run on hosts with NVIDIA’s Tesla M60 GPU and are suited for VDI (Citrix XenDesktop for Azure) and session-based computing (Remote Desktop Services and Citrix XenApp for Azure) where graphics performance is important.
  • NC-Series: Hosts with NVIDIA’s Tesla K80 card run virtual machines that are designed for CUDA and OpenCL workloads, such as scientific simulations, ray tracing, and more.
  • ND-Series: The NVIDIA Tesla P40 is perfect for “deep learning” – the term used instead of artificial intelligence to allay fears of skull-crushing chrome androids.

H-Series (ACU: 290 – 300*)

The H is for (SAP) Hana or high-performance computing (HPC).

The H-Series virtual machines are sized with large numbers of cores and run on hardware with Intel Haswell E5-2667 V3 3.2 GHz processors and DDR4 RAM.

There are two core scenarios for the H-Series:

  • HPC: If you need burst capacity for processing lots of data, then the H-Series is perfect for you. R variants offer 56Gbps Infiniband networking for moving data around quickly when doing massive parallel computing.
  • SAP Hana: The H-Series appears to be the recommended series for running large enterprise workloads.

Ls-Series (ACU: 180 – 240*)

If you want low-latency storage, then the L-Series is for you.

We don’t have too much detail, but it appears that the Ls-Series (often referred to as the L-Series) runs on the same host hardware as the G-Series. The focus isn’t on scale, as it is with the G-Series; the focus is on low-latency storage.

The virtual machines of the Ls-Series use the local SSD storage of the hosts for data disks. This offers a much faster read time than can be achieved with Premium Storage (SSD storage that is shared by many hosts across a network, much like a flash-based SAN).

This is a very niche machine series; it is expected that this type will be used for workloads where storage latency must be as low as possible, such as NoSQL.

B-Series (ACU Not Available)

“B is for burst”. This is a strange series of virtual machines, which were still in limited-access preview at the time of writing.

These very low-cost machines are limited to a small percentage of their CPU potential. By under-utilizing the CPU, a machine earns credits, which can then be used to burst beyond its normal limits in times of stress. The bank of credits for each machine is limited depending on the size of the machine, which prevents one from earning credits for 11 months and going crazy for 1 month.


Microsoft states that the B-Series has at least the same processor as the D_v2 series. This means that when the processor bursts, it can offer genuine power. But most of the time, the application will either be idle or constrained. I suspect that only those that know they have a “bursty” workload will use the B-Series in production and early adoption will mostly be trying it out. However, those that use virtual machine performance metrics for non-B-Series machines will be able to identify B-Series candidates, change their series/size, and save a lot of money.


The UK gets its first ocean-cleaning ‘Seabin’

The content below is taken from the original (The UK gets its first ocean-cleaning ‘Seabin’), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s no secret that the world’s oceans are full of floating waste. Things like plastic not only pollute the natural ecosystem, but pose a very real threat to its inhabitants. Back in December 2015, we were first introduced to the concept of the Seabin, a floating natural fiber garbage bin that can suck in pollutants in docks and marinas and leave the water pristine.

Its creators needed help making the concept a reality, taking to Indiegogo to raise enough funds to deploy their marine disposal units all over the world. With over $260,000 in their pocket, two Australian surfers, Pete Ceglinski and Andrew Turton, have today embarked on that journey, installing the world’s first production Seabin in Portsmouth (UK) harbour.

The Times reports that the Seabin has been installed near the base of the Land Rover Ben Ainslie Racing (BAR) team. The group is typically known for its attempts to bring sailing’s most prestigious prize — the America’s Cup — back to Britain, but it’s also keen to reduce its environmental impact while doing so. The team has already committed to not eating meat on Mondays, only sources sustainable seafood, and will now oversee the Seabin as it filters around the protected cages of over 1,000 oysters located near its pontoon.

The Seabin’s creators say that each unit can collect around 1.5kg of waste a day and hold up to 12kg until it’s full. That amounts to 20,000 plastic bottles or 83,000 plastic bags a year. It houses a combination of a large natural fibre net and a dock-based pump (fed by the hook-like metal pole). This only collects debris floating on top of the water and sucks in surface oils, ensuring fish are safe.

Plenty of other places are trialling the Seabin, including Spain’s Port Adriano and the Port of Helsinki (Finland). They will officially go on sale in "early November," costing around £3,000 ($3,957).

Source: The Times

Put your databases on autopilot with a lift and shift to Azure SQL Database

The content below is taken from the original (Put your databases on autopilot with a lift and shift to Azure SQL Database), to continue reading please visit the site. Remember to respect the Author & Copyright.

The sheer volume of data generated today and the number of apps and databases across enterprises is staggering. To stay competitive and get ahead in today’s marketplace, IT organizations are always looking at ways to optimize how they maintain and use the data that drives their operations. Faced with constant demands for more scale and reliability amid the ongoing threat of cybersecurity attacks, IT organizations can quickly stretch their staffing and infrastructure to the breaking point. In addition to these operational issues, businesses need to look at how to best harness their data to build better apps and fuel future growth. Organizations are increasingly looking for ways to automate basic database administration tasks, from daily management to performance optimization with best in class AI-driven intelligent PaaS capabilities. Azure SQL Database is the perfect choice to deliver the right mix of operational efficiencies, optimized for performance and cost, enabling you to focus on business enablement to accelerate growth and innovation.

Azure SQL Database helps IT organizations accelerate efficiencies and drive greater innovation. With built-in intelligence based on advanced machine learning technology, it is a fully-managed relational cloud database service that’s designed for SQL Server databases, built to maximize application performance and minimize the costs of running a large data estate. The latest world-class SQL Server features are available to your applications, like in-memory technologies that provide up to 30x improved throughput and latency and 100x perf improvement on your queries over legacy SQL Server editions. As a fully-managed PaaS service, SQL Database assumes much of the daily administration and maintenance of your databases, including the ability to scale up resources with near-zero downtime. This extends to ensuring business continuity with features like point-in-time restore and active geo-replication that help you minimize data loss with an RPO of less than 5 seconds. And, it’s supported by a financially-backed 99.99% SLA commitment. The benefits of a fully-managed SQL Database led IDC to estimate up to a 406% ROI over on-premises and hosted alternatives, making it an economical choice for your data. 

DocuSign, the global standard for eSignature and digital transaction management (DTM), wanted to scale quickly into other international markets and chose Microsoft Azure as its preferred cloud services platform. Partnering with Microsoft meant combining the best of what DocuSign does in its data center, reliable SQL Servers on flash storage, with the best of what Azure could bring to it: a global footprint, high scalability, rapid deployment and deep experience managing SQL at scale. Check out this video to learn more about DocuSign’s experience.

The right option for your workload

When you are considering a move to the cloud, SQL Database provides three deployment options for your data, each with a range of performance levels and storage choices to suit your needs.


Single databases are assigned a certain amount of resources via Basic, Standard, and Premium performance tiers. They focus on a simplified database-scoped programming model and are best for applications with a predictable pattern and a relatively stable workload.

Elastic pools are unique to SQL Database. While they, too, have Basic, Standard, and Premium performance tiers, pools are a shared resource model that enables higher resource utilization efficiency. This means that all the databases within an elastic pool share predefined resources within the same pool. Like single databases, elastic pools focus on a simplified database-scoped programming model for multi-tenant SaaS apps and are best for workload patterns that are well-defined. They are highly cost-effective in multi-tenant scenarios.

We recently announced the upcoming fall preview for SQL Database Managed Instance, the newest deployment option in SQL Database, alongside single databases and elastic pools.

“Lift and shift” your data to the cloud 

Whereas single databases and elastic pools focus on a simplified database-scoped programming model, SQL Database Managed Instance provides an instance-scoped programming model that is modeled after, and therefore highly compatible with, on-premises SQL Server 2005 and newer. This enables a database lift-and-shift to a fully-managed PaaS, reducing or eliminating the need to re-architect your apps and to manage them once they are in the cloud.

With SQL Database Managed Instance, you can continue to rely on the tools you have known and loved for years, in the cloud, too. This includes features such as SQL Agent, Service Broker and Common Language Runtime (CLR). But, you can also benefit from using new cloud concepts to enhance your security and business continuity to levels you have never experienced before, with minimal effort.  For example, you can use SQL Audit exactly as you always have, but now with the ability to run Threat Detection on top of that, you can proactively receive alerts around malicious activities instead of simply reacting to them after the fact.

We understand our enterprise customers and their security concerns, so we are introducing VNET support with private IP addresses and VPN to on-premises networks for SQL Database Managed Instance, enabling full workload isolation. You can now get the benefits of the public cloud while keeping your environment isolated from the public Internet. Just as SQL Server has been the most secure database server for years, we’re doing the same in the cloud.

For organizations looking to migrate hundreds or thousands of SQL Server databases from on-premises or IaaS environments, whether self-built or ISV-provided, with as little effort as possible, Managed Instance provides a simple, secure, and economical path to modernization.

How SQL Database Managed Instance works

Managed Instance is built on the same infrastructure that’s been running millions of databases and billions of transactions daily in Azure SQL Database, over the last several years. The same mechanisms for automatic backups, high availability and security are used for Managed Instance. The key difference is that the new offering exposes entire SQL instances to customers, instead of individual databases. On the Managed Instance, all databases within the instance are located on the same SQL Server instance under the hood, just like on an on-premises SQL Server instance. This guarantees that all instance-scoped functionality will work the same way, such as global temp tables, cross-database queries, SQL Agent, etc. This database placement is kept through automatic failovers, and all server level objects, such as logins or SQL Agent logins, are properly replicated.
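
Because a Managed Instance behaves like a regular SQL Server instance, existing client code keeps working. The sketch below uses pyodbc to run a cross-database query, one of the instance-scoped features mentioned above; the server name, credentials, databases, and tables are all placeholders, and the ODBC driver version is whatever you have installed.

    # Sketch: a cross-database query against a Managed Instance via pyodbc.
    # All names and credentials below are placeholders.
    import pyodbc

    conn = pyodbc.connect(
        "Driver={ODBC Driver 17 for SQL Server};"
        "Server=<managed-instance-host>,1433;"
        "Database=SalesDb;Uid=<user>;Pwd=<password>;Encrypt=yes;"
    )

    cursor = conn.cursor()
    # Cross-database joins work exactly as they do on an on-premises instance.
    cursor.execute(
        "SELECT TOP 10 o.OrderId, c.CustomerName "
        "FROM SalesDb.dbo.Orders AS o "
        "JOIN CrmDb.dbo.Customers AS c ON c.CustomerId = o.CustomerId"
    )
    for row in cursor.fetchall():
        print(row.OrderId, row.CustomerName)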

Multiple Managed Instances can be placed into a so-called virtual cluster, which can then be placed into the customer’s VNET, as a customer specified subnet, and sealed off from the public internet. The virtual clusters enable scenarios such as cross-instance queries (also known as linked servers), and Service Broker messaging between different instances. Both the virtual clusters and the instances within them are dedicated to a particular customer, and are isolated from other customers, which greatly helps relax some of the common public cloud concerns.


The easiest path to SQL Database Managed Instance

The new Azure Database Migration Service (ADMS) is an intelligent, fully managed, first-party Azure service that enables seamless and frictionless migrations from heterogeneous database sources to Azure Database platforms with only a few minutes of downtime. This service will streamline the tasks required to move existing third-party and SQL Server databases to Azure.

Maximize your on-premises license investments

The Azure Hybrid Benefit for SQL Server is an Azure-based benefit that helps customers maximize the value of their current on-premises licensing investments to pay a discounted rate on SQL Database Managed Instance. If you are a SQL Server Enterprise Edition or Standard Edition customer and you have Software Assurance, the Azure Hybrid Benefit for SQL Server can help you save up to 30% on Managed Instance.

Making SQL Database the best and most economical destination for your data

Running your data estate on Azure SQL Database is like putting it on autopilot: we take care of the day to day tasks, so you can focus on advancing your business. Azure SQL Database, a fully-managed, intelligent relational database service, delivers predictable performance at multiple service levels that provide dynamic scalability with minimal or no downtime, built-in intelligent optimization, global scalability and availability, and advanced security options — all with near-zero administration. These capabilities allow you to focus on rapid app development and accelerating your time to market, rather than allocating precious time and resources to managing virtual machines and infrastructure. New migration services and benefits can accelerate your modernization to the cloud and further reduce your total cost of ownership, making Azure SQL Database the best and most economical place to run SQL Server workloads.

We are so excited that these previews are almost here and look forward to hearing from you and helping you accelerate your business goals, and drive great efficiencies and innovation across your organization.

Nine reasons for having multiple bikes (clogging up the hallway)

The content below is taken from the original (Nine reasons for having multiple bikes (clogging up the hallway)), to continue reading please visit the site. Remember to respect the Author & Copyright.

Walking into the home of a bike hoarding cyclist can cause anyone to catch their breath – fellow bike lovers admiring the beauty, and non-cyclists pausing to re-evaluate the sanity of the individual.

Over time, and without the possession of a garage (simply not safe these days, anyway), the home can begin to look like a jungle of steel and aluminium – once essential components making homes on the branches of leather coated handlebars and on upturned saddles.

The once joyous N+1 rule (the ideal number of bikes is always the quantity you have, +1) can leave cyclists tortured when homes reach capacity, and a one in, one out law comes into force.

Don’t let it happen to you. Here are some handy reasons to help you explain the need to keep all those extra bikes…

Couldn’t DO bike riding without them

Specialized Red Hook

Can’t ride track without a (limited edition Specialized Allez Sprint) track bike

This is the standard ‘first port of call’. Move on to the others if it fails.

You need a road bike. And then a winter version, so the ‘precious’ doesn’t get ruined. You can’t race time trials without a TT bike, ‘cross without a cyclocross bike, track without a track bike and you need a shopper with a hub gear and luggage for errands and commuting.

Just don’t let anyone on the receiving end find out about the elusive adventure bike which can tick off at least three of those.

There’s history in every frame

Steve Bauer testing the strength of carbon. Credit: Graham Watson

Maybe it’s a Look KG86, which harks back to the first ever usable carbon tubes, or a Cervélo Soloist, which was aero before anyone else really knew how to slice through the air.

>>> The top 10 revolutionary road bikes that changed the world of cycling

These bikes are worthy of a museum. You’re just safeguarding them whilst the museum owners set about tracking you down.

Sentimental value

C’mon, don’t make me get rid of the Chopper!

The first adult bike, the one which revived you from couch potato to segment slayer. Or perhaps it’s your first race bike, the one which introduced you to the heady mix of adrenaline tinged with a little edge of fear, all to be replaced by endorphins at the finish line.

Parting with a bike which carries sentimental value would take away a little part of your soul. And no one wants that (do they?)

They’ll be worth money one day

Davis Phinney (Watson) and a cash register (Kroton/CC)

When the velocipede museum gets in touch. Not for the Colnago Master you’ve got hidden in the now defunct airing cupboard – but for the bike that started the career of the triple Olympic legend…. (you can wake up, now).

Need to keep the one hanging on your wall sparkly

Image: Hiplok

With all these fancy wall hanging bike storage devices around, you can turn bikes into art. And that art needs to be clean and sparkly. So you’ll be needing spares to actually ride.

You need a turbo bike

Everyone needs a bike for the turbo trainer

You see, to avoid wearing out the tyres, you need to fit a turbo tyre. And it would be a massive faff to swap tyres between indoor and outdoor rides. Plus, having a dedicated bike set up on the turbo trainer means it’s ready to go at all times – and it’s comparatively much cheaper than forking out for a gym membership or buying an exercise bike.

Bikes are excellent clothes dryers

Bikes > dryers

It’s almost like handlebars were MEANT for hanging base layers and bib shorts from. If that’s not a space saver, we don’t know what is.

You don’t want to get involved in ‘chuck away culture’

The best bike is in there somewhere

It’s awful, this nasty landfill culture we live in these days. When something isn’t ‘perfect’ people just throw it away, and buy a new one. Well – you’re taking a stand. You just keep the old one, and buy a new one as well. But you’ve every intention of making do and mending the retired bikes – they’re ALL your next commuter.

Everyone needs a bike for two…

Dawes Galaxy Twin 2017 Tandem Bike

With all of those brilliantly explained justifications – surely your fellow homie is completely convinced. So much so, it’s time you invested in a NEW bike, which you can ride together.