3D Printed Valves Are Saving Lives In Italy

The content below is taken from the original ( 3D Printed Valves Are Saving Lives In Italy), to continue reading please visit the site. Remember to respect the Author & Copyright.

We’ve been seeing this story pop up all over the web. A group in Italy has been 3D printing valves that are saving people’s lives. In this case, the valve is a mixing device that is combining room air with pure oxygen before it is delivered to the patient. The […]

Read more on MAKE

The post 3D Printed Valves Are Saving Lives In Italy appeared first on Make: DIY Projects and Ideas for Makers.

Using Cloudflare Gateway to Stay Productive (and turn off distractions) While Working Remotely

The content below is taken from the original ( Using Cloudflare Gateway to Stay Productive (and turn off distractions) While Working Remotely), to continue reading please visit the site. Remember to respect the Author & Copyright.

This week, like many of you reading this article, I am working from home. I don’t know about you, but I’ve found it hard to stay focused when the Internet is full of news related to the coronavirus.

CNN. Twitter. Fox News. It doesn’t matter where you look, everyone is vying for your attention. It’s totally riveting…

… and it’s really hard not to get distracted.

It got me annoyed enough that I decided to do something about it. Using Cloudflare’s new product, Cloudflare Gateway, I removed all the online distractions I normally get snared by — at least during working hours.

This blog post isn’t very long, but that’s a function of how easy it is to get Gateway up and running!

Getting Started

To get started, you’ll want to set up Gateway under your Cloudflare account. Head to the Cloudflare for Teams dashboard to set it up for free (if you don’t already have a Cloudflare account, hit the ‘Sign up’ button beneath the login form).

If you are using Gateway for the first time, the dashboard will take you through an onboarding experience.

The onboarding flow will help you set up your first location. A location is usually a physical entity like your home, office, store or a data center.

When you are setting up your location, the dashboard will automatically identify your IP address and create a location using that IP. Gateway then associates requests from your router or device with your location by matching the request’s source IP against the location’s linked IP address (on an IPv4 network). If you are curious, you can read more about how Gateway determines your location here.
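As a rough illustration of the idea (the names and addresses below are invented for this sketch, not Cloudflare's actual implementation), matching a request to a location by source IP can be as simple as a lookup:

```python
# Hypothetical sketch: map a DNS request's source IP to a configured
# "location" so its policies can be applied. Illustrative only.
locations = {
    "198.51.100.7": "home",           # the IP the dashboard detected for you
    "203.0.113.20": "branch-office",
}

def location_for(source_ip: str) -> str:
    # Requests from IPs that aren't linked to a location get no
    # location-specific policies.
    return locations.get(source_ip, "unknown")

print(location_for("198.51.100.7"))   # home
print(location_for("192.0.2.1"))      # unknown
```

This is also why the DNS settings change below matters: queries have to arrive at Gateway's resolvers from your location's linked IP for the right policies to apply.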

Before you complete the setup you will have to change your router’s DNS settings by removing the existing DNS resolvers and adding Cloudflare Gateway’s recursive DNS resolvers:

  • 172.64.36.1
  • 172.64.36.2

How you configure your DNS settings may vary by router or device, so we created a page to show you how to change DNS settings for different devices.

You can also watch this video to learn how to set up Gateway.

Deep Work

Next up, in the dashboard, I am going to go to my policies and create a policy that will block my access to distracting sites. You can call your policy anything you want, but I am going to call mine “Deep work.”

And I will add a few websites that I don’t want to get distracted by, like CNN, Fox News and Twitter.

After I add the domains, I hit Save.

If you find the prospect of blocking all of these websites cumbersome, you can use category-based DNS filtering to block all domains that are associated with a category (‘Content categories’ have limited capabilities on Gateway’s free tier).

So if I select Sports, all websites that are related to Sports will now be blocked by Gateway. This will take most people a few minutes to complete.

And once you set the rules by hitting ‘Save’, it will take just seconds for the selected policies to propagate across all of Cloudflare’s data centers, spread across more than 200 cities around the world.

How can I test if Gateway is blocking the websites?

If you now try to visit one of the blocked websites, your browser will show a Cloudflare Gateway block page.

Cloudflare Gateway is letting your browser know that the website you blocked is unreachable. You can also test whether Gateway is working by running dig or nslookup on your machine.

If a domain is blocked, you will see REFUSED as the status in the DNS response.
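If you want to script that check, a quick way is to look for the REFUSED status in the response header that dig prints. Here's a small sketch; the header line is a trimmed, illustrative sample rather than a live query:

```python
# Illustrative check: Gateway answers blocked domains with DNS rcode REFUSED.
# sample_header is a trimmed example of what `dig <blocked-domain>` prints.
sample_header = ";; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 48230"

def is_blocked(dig_header_line: str) -> bool:
    # An allowed domain resolves normally, with status NOERROR instead.
    return "status: REFUSED" in dig_header_line

print(is_blocked(sample_header))  # True
```

In practice you would feed this the real output of dig run against your Gateway-configured resolver.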

This means that the policy you created is working!

And once working hours are over, it’s back to being glued to the latest news.

If you’d rather watch this in video format, here’s one I recorded earlier.

And to everyone dealing with the challenges of COVID-19 and working from home — stay safe!

ROUGOL looks at RISC OS Direct – Monday, 16th March

The content below is taken from the original ( ROUGOL looks at RISC OS Direct – Monday, 16th March), to continue reading please visit the site. Remember to respect the Author & Copyright.

With the launch of the new distribution of RISC OS for the Raspberry Pi, called RISC OS Direct, existing users of the operating system may be wondering what’s in it and whether or not they should consider switching to it. Although it’s really aimed at new users, anyone can download and use it, so […]

UK lays out plans for legal e-scooters, medical drones and more transportation innovation in test cities

The content below is taken from the original ( UK lays out plans for legal e-scooters, medical drones and more transportation innovation in test cities), to continue reading please visit the site. Remember to respect the Author & Copyright.

Electric scooters are still unlawful to use on public roads and pavements in the UK, but that hasn’t stopped many consumers from using them anyway to get from A to B. Now, in an effort to wean people off the use of individual automobiles, the government may finally be coming around to bringing its rules up to speed with the times, moving one step closer to legally using e-scooters alongside other new mobility technology, such as drone deliveries for medical supplies, in the coming years.

The UK’s Department for Transport today announced a new consultation into exploring new transportation modes that include e-scooters and e-cargo bikes, as well as bringing the on-demand model (popularised by services like Uber) to buses and other public transport alternatives, and using drones for medical deliveries. Alongside this, it announced funding of £90 million ($112 million) for three new Future Transport Zones to trial these new services.

Together, the moves represent some of the more significant headway that the UK has made in recent years to work with and consider what transportation will look like in the country in the years ahead, in particular as an alternative to consumers using private vehicles to move things and get around.

Some argue that the UK has lagged behind other European countries like France when it comes to bringing e-scooters to the wider market, with the only legal services up to now operating in closed “campus” environments.

“We are on the cusp of a transport revolution. Emerging technologies are ripping up the rulebook and changing the way people and goods move forever,” said Transport Secretary Grant Shapps, in a statement. “Our groundbreaking Future of Transport programme marks the biggest review of transport laws in a generation and will pave the way for exciting new transport technology to be tested, cementing the UK’s position as a world-leading innovator. This Review will ensure we understand the potential impacts of a wide range of new transport types such as e-scooters, helping to properly inform any decisions on legalisation. Funding these new Zones across the country will also help us safely test innovative ways to get around, creating a greener future transport system for us all.”

Generally speaking, the announcement is an overdue, but clear, vote of confidence in the idea of trying out new kinds of services and models, in the wake of a number of them not living up to expectations. Bird, for example, introduced an e-scooter trial in London’s Olympic Park campus two years ago, but with such a limited range and scope it has had little exposure in the wider market. Citymapper, meanwhile, shut down its on-demand bus trials last year after finding they also didn’t work as the startup had hoped they would. (It’s also an interesting turn for the government, which took a hands-off approach to Uber’s initial rollout, only to see the company run into controversy; perhaps learning from that, it seems now to be more engaged in how new services and technologies roll out.)

The news today essentially gives a new lease of life to companies hoping to build businesses on these new technologies and services.

The DfT is short on details around what the consultation will entail but did include some specifics on scooters, in what would be the government’s first concerted effort to consider what requirements would need to be introduced to legalise e-scooters, including traffic laws, minimum age and vehicle requirements, insurance requirements and parking rules (parking fees being a key revenue driver for local councils).

(The backstory here is that scooters, which are counted as motorised vehicles in the UK, are still illegal because regulations around insurance, traffic laws and driver requirements have not been determined for them, and so even to test new services, the laws will need to be amended. The DfT said that local authorities will contract one or more e-scooter companies to run services.)

“This is great news for UK towns and cities, we’re delighted that the Government is exploring offering greener ways to travel,” said Alan Clarke, Director of UK Policy and Government Affairs at Lime, in a statement. (Lime currently offers bikes on demand in various locations, but has yet to bring its scooters to the UK market.) “Shared electric scooters are a safe, emission-free, affordable and convenient way of getting around. They help take cars off the road with around a quarter of e-scooter trips replacing a car journey — cutting congestion and reducing air pollution. Lime operates shared dockless e-scooter schemes in over 100 locations globally and in 50 cities across Europe. We look forward to contributing to the government’s call for evidence to develop clear rules and minimum safety standards to allow this environmentally friendly option to be made available and hope to participate in upcoming trials on UK streets.”

The new transport zones — in Portsmouth and Southampton, the West of England Combined Authority, and Derby and Nottingham — will be modelled on an existing region established in the West Midlands (covering Birmingham, Coventry and Solihull), which has been a testing ground for future transport policy and technology such as autonomous vehicles.

“The Zones will provide real-world testing for experts, allowing them to work with a range of local bodies such as councils, hospitals, airports and universities to test innovative ways to transport people and goods,” the DfT said in a statement.

As with the existing region, the new ones will explore autonomous vehicle trials, as well as scooter pilots, bus schemes that operate on on-demand models, and multi-modal transportation apps. Portsmouth and Southampton will also look at last-mile deliveries using e-cargo bikes and medical supply drones. Derby and Nottingham have been granted £15 million to build mobility hubs to promote different public transportation options alongside bike hire, car clubs and electric vehicles. 

List of free offers to help get remote workers set up

The content below is taken from the original (in /r/msp), to continue reading please visit the site. Remember to respect the Author & Copyright.

Airbnb and three other P2P rental platforms agree to share limited pan-EU data

The content below is taken from the original ( Airbnb and three other P2P rental platforms agree to share limited pan-EU data), to continue reading please visit the site. Remember to respect the Author & Copyright.

The European Commission announced yesterday it’s reached a data-sharing agreement with vacation rental platforms Airbnb, Booking.com, Expedia Group and Tripadvisor — trumpeting the arrangement as a “landmark agreement” which will allow the EU’s statistical office to publish data on short-stay accommodations offered via these platforms across the bloc.

It said it wants to encourage “balanced” development of peer-to-peer rentals, noting concerns have been raised across the EU that such platforms are putting unsustainable pressure on local communities.

It expects Eurostat will be able to publish the first statistics in the second half of this year.

“Tourism is a key economic activity in Europe. Short-term accommodation rentals offer convenient solutions for tourists and new sources of revenue for people. At the same time, there are concerns about impact on local communities,” said Thierry Breton, the EU commissioner responsible for the internal market, in a statement.

“For the first time, we are gaining reliable data that will inform our ongoing discussions with cities across Europe on how to address this new reality in a balanced manner. The Commission will continue to support the great opportunities of the collaborative economy, while helping local communities address the challenges posed by these rapid changes.”

Per the Commission’s press release, data that will be shared with Eurostat on an ongoing basis includes number of nights booked and number of guests, which will be aggregated at the level of “municipalities”.

“The data provided by the platforms will undergo statistical validation and be aggregated by Eurostat,” the Commission writes. “Eurostat will publish data for all Member States as well as many individual regions and cities by combining the information obtained from the platforms.”

We asked the Commission if any other data would be shared by the platforms — including aggregated information on the number of properties rented; and whether rentals are whole properties or rooms in a lived-in property — but a Commission spokeswoman could not confirm any other data would be provided under the current arrangement. Update: It has now confirmed a second phase will involve more data being published — see below for details.

She also told us that municipalities can be defined differently across the EU — so it may not always be the case that city-level data will be available to be published by Eurostat.

In recent years, multiple cities in the EU — including Amsterdam, Barcelona, Berlin and Paris — have sought to put restrictions on Airbnb and similar platforms in order to limit their impact on residents and local communities, arguing short-term rentals remove housing stock and drive up rental prices, hollowing out local communities, as well as creating other sorts of anti-social disruptions.

However, a ruling in December by Europe’s top court — related to a legal challenge filed against Airbnb by a French tourism association — offered the opposite of relief for such cities, with judges finding Airbnb to be an online intermediation service, rather than an estate agent.

Under current EU law on Internet business, the CJEU ruling makes it harder for cities to apply tighter restrictions as such services remain regulated under existing ecommerce rules. Although the Commission has said it will introduce a Digital Services Act this year that’s slated to upgrade liability rules for platforms (and at least could rework aspects of the ecommerce directive to allow for tighter controls).

Last year, ahead of the CJEU’s ruling, ten EU cities penned an open letter warning that “a carte blanche for holiday rental platforms is not the solution” — calling for the Commission to introduce “strong legal obligations for platforms to cooperate with us in registration-schemes and in supplying rental-data per house that is advertised on their platforms”.

So it’s notable that the Commission’s initial data-sharing arrangement with the four platforms does not include any information about the number of properties being rented, nor the proportion which are whole-property rentals vs rooms in a lived-in property — both of which would be highly relevant metrics for cities concerned about short-term rental platforms’ impact on local housing stock and rents.

Asked about this, the Commission spokeswoman told us it had to “ensure a fair balance between the transparency that will help the cities to develop their policies better and then to protect personal data — because this is about private houses”.

“The decision was taken on this basis to strike a fair balance between the different interests at stake,” she added.

When we pointed out that it would be possible to receive property data in aggregate, in a way that does not disclose any personal data, the spokeswoman had no immediate response. (We’ll update this report if we receive any additional comment from the Commission on our questions).

Without pushing for more granular data from platforms, the Commission initiative looks like it will achieve only a relatively superficial level of transparency in the first instance — and one which might best suit platforms’ interests by spotlighting attention on tourist dollars generated in particular regions rather than offering data to allow for cities to drill down into flip-side impacts on local housing and rent affordability.

Gemma Galdon, director of Eticas, a Barcelona-based research consultancy focused on the ethics of applying cutting-edge technologies, agreed the Commission’s move appears to fall short — though she welcomed the move towards increasing transparency as “a good step”.

“This is indeed disappointing. Cities like Barcelona, NYC, Portland or Amsterdam have agreements with Airbnb to access data (even personal and contact data for hosts!),” she told us.

“Mentioning privacy as a reason not to provide more data shows a serious lack of understanding of data protection regulation in Europe. Aggregate data is not personal data. And still, personal data can be shared as long as there is a legal basis or consent,” she added.

“So while this is a good step, it is unclear why it falls so short as the reason provided (privacy) is clearly not relevant in this case.”

Update: We understand a second phase of the Commission’s data-sharing arrangement will see Eurostat also publish data on the number of properties rented and the proportion which are full property rentals vs rooms in occupied properties. However it intends to wait until it’s confident it can accurately exclude double-counting of the same rental properties advertised on different platforms.

In the first phase Eurostat will publish the following occupancy data: number of stays (number of rentals of each listing during the reference period); number of nights rented out (number of nights each listing was rented out during the reference period); and number of overnight stays (number of guest nights spent at each listing during the reference period, based on the number of guests indicated when the booking was made).

In a second phase, which does not yet have a timeline attached to it, the following capacity data will also be published: Number of hosts (number of hosts renting out one or more listings); number of listings; and number of bed places (best possible approximation based on e.g. maximum capacity shown for each listing). This second phase will also include data on the type of accommodation (i.e. individual room in a shared property vs entire property).
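To make the phase-one occupancy metrics concrete, here is a small sketch of how per-listing bookings might be rolled up into those figures at municipality level. The sample data and field names are invented for illustration; Eurostat's actual pipeline is not public.

```python
# Sketch: aggregate per-listing bookings into the phase-one occupancy metrics
# (stays, nights rented out, overnight guest-nights) per municipality.
# Sample records and field names are invented for illustration.
bookings = [
    {"municipality": "Amsterdam", "nights": 3, "guests": 2},
    {"municipality": "Amsterdam", "nights": 2, "guests": 4},
    {"municipality": "Barcelona", "nights": 5, "guests": 1},
]

def aggregate(rows):
    out = {}
    for row in rows:
        m = out.setdefault(row["municipality"],
                           {"stays": 0, "nights": 0, "guest_nights": 0})
        m["stays"] += 1                                      # number of stays
        m["nights"] += row["nights"]                         # nights rented out
        m["guest_nights"] += row["nights"] * row["guests"]   # overnight stays
    return out

print(aggregate(bookings)["Amsterdam"])
# {'stays': 2, 'nights': 5, 'guest_nights': 14}
```

Publishing only these aggregates, rather than per-listing rows, is what lets the figures be shared without exposing individual hosts.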

Farewell SETI@Home

The content below is taken from the original ( Farewell SETI@Home), to continue reading please visit the site. Remember to respect the Author & Copyright.

It was about 21 years ago that Berkeley started one of the first projects that would allow you to donate idle computing time to scientific research. In particular, your computer could help crunch data from radio telescopes looking for extraterrestrial life. Want to help? You may be too late. The project is going into hibernation while they focus on analyzing data already processed.

According to the home page:

We’re doing this for two reasons:

1) Scientifically, we’re at the point of diminishing returns; basically, we’ve analyzed all the data we need for now.

2) It’s a lot of work for us to manage the distributed processing of data. We need to focus on completing the back-end analysis of the results we already have, and writing this up in a scientific journal paper.

So what do you think? Maybe they found ET and just don’t want to announce it too soon. Or maybe the cost of GPU-based supercomputers is now so low that it really doesn’t make sense to send jobs all over the Internet. Maybe everyone who used to donate is mining Bitcoin now? Or maybe they just really analyzed all their data. But what fun is that?

On the other hand, there are still other projects around that do distributed processing, most of them built on the Berkeley framework BOINC. Folding@Home just started up a coronavirus program, for instance. If you’d rather do something more personal as a citizen scientist, you can join the zoo.

Microsoft’s Bill Gates defrag is finally virtually complete: Billionaire quits board to double down on philanthropy

The content below is taken from the original ( Microsoft’s Bill Gates defrag is finally virtually complete: Billionaire quits board to double down on philanthropy), to continue reading please visit the site. Remember to respect the Author & Copyright.

You look like you have coronavirus, can I help you with that?

Nearly 45 years to the day after founding Microsoft, Bill Gates today finally stepped down from the board to devote his time to dealing with global health issues and climate change.…

How Cloudflare keeps employees productive from any location

The content below is taken from the original ( How Cloudflare keeps employees productive from any location), to continue reading please visit the site. Remember to respect the Author & Copyright.

Cloudflare employs more than 1,200 people in 13 different offices and maintains a network that operates in 200 cities. To do that, we used to suffer through a traditional corporate VPN that backhauled traffic through a physical VPN appliance. It was, frankly, horrible to work with as a user or IT person.

With today’s mix of on-prem, public cloud and SaaS, and a workforce that needs to work from anywhere, be it a coffee shop or home, that model is no longer sustainable. As we grew in headcount, we were spending too much time resolving VPN helpdesk tickets. As offices around the world opened, we could not ask our workforce to sit and wait while every connection was backhauled through a central location.

We also needed to be ready to scale. Some organizations are currently scrambling to load test their own VPN in the event that their entire workforce needs to work remotely during the COVID-19 outbreak. We could not let a single physical appliance constrain our ability to deliver 26M Internet properties to audiences around the world.

To run a network like Cloudflare, we needed to use Cloudflare’s network to stay fast and secure.

We built Cloudflare Access, part of Cloudflare for Teams, as an internal project several years ago to start replacing our VPN with a faster, safer alternative that made internal applications, no matter where they live, seamless for our users.

To address the scale challenge, we built Cloudflare Access to run on Workers, Cloudflare’s serverless platform. Each data center in the Cloudflare network becomes a comprehensive identity proxy node, giving us the scale to stay productive from any location – and to do it for our customers as well.

Over the last two years, we’ve continued to expand its feature set by prioritizing the use cases we had to address to remove our reliance on a VPN. We’re excited to help customers stay online and productive with the same tools and services we use to run Cloudflare.

How does Cloudflare Access work?

Cloudflare Access is one-half of Cloudflare for Teams, a security platform that runs on Cloudflare’s network and focuses on keeping users, devices, and data safe without compromising experience or performance. We built Cloudflare Access to solve our own headaches with private networks as we grew from a team concentrated in a single office to a globally distributed organization.

Cloudflare Access replaces corporate VPNs with Cloudflare’s network. Instead of placing internal tools on a private network, teams deploy them in any environment, including hybrid or multi-cloud models, and secure them consistently with Cloudflare’s network.

Administrators build rules to decide who should be able to reach the tools protected by Access. In turn, when users need to connect to those tools, they are prompted to authenticate with their team’s identity provider. Cloudflare Access checks their login against the list of allowed users and, if permitted, allows the request to proceed.

Deploying Access does not require exposing new holes in corporate firewalls. Teams connect their resources through a secure outbound connection, Argo Tunnel, which runs in your infrastructure to connect the applications and machines to Cloudflare. That tunnel makes outbound-only calls to the Cloudflare network and organizations can replace complex firewall rules with just one: disable all inbound connections.
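On a typical Linux origin host, the "just one rule" posture described above might look like the following ufw sketch. This is illustrative only; your firewall tooling and existing policies will differ, and managed cloud environments usually express the same idea through security groups instead.

```
# Illustrative only: with Argo Tunnel making outbound-only connections,
# inbound traffic can be denied wholesale on the origin host.
ufw default deny incoming
ufw default allow outgoing
ufw enable
```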

To defend against attackers addressing IPs directly, Argo Tunnel can help secure the interface and force outbound requests through Cloudflare Access. With Argo Tunnel, and firewall rules preventing inbound traffic, no request can reach those IPs without first hitting Cloudflare, where Access can evaluate the request for authentication.

Administrators then build rules to decide who should authenticate to and reach the tools protected by Access. Whether those resources are virtual machines powering business operations or internal web applications, like Jira or iManage, when a user needs to connect, they pass through Cloudflare first.

When users need to connect to the tools behind Access, they are prompted to authenticate with their team’s SSO and, if valid, instantly connected to the application without being slowed down. Internally managed apps suddenly feel like SaaS products, and the login experience is seamless and familiar.

Behind the scenes, every request made to those internal tools hits Cloudflare first where we enforce identity-based policies. Access evaluates and logs every request to those apps for identity, giving administrators more visibility and security than a traditional VPN.

Our team members SSO into the Atlassian suite with one click

We rely on a set of productivity tools built by Atlassian, including Jira and Confluence. We secure them with Cloudflare Access.

In the past, when our team members wanted to reach those applications, they first logged into the VPN with a separate set of credentials unique to their VPN client. They navigated to one of the applications, and then broke out a second set of credentials, specific to the Atlassian suite, to reach Jira or Wiki.

All of this was clunky, reliant on the VPN, and not integrated with our SSO provider.

We decided to put the Atlassian suite behind Access and to build a plugin that could use the login from Access to SSO the end user into the application. Users log in with their SSO provider and are instantly redirected into Jira or Wiki or Bitbucket, authorized without managing extra credentials.

We selected Atlassian because nearly every member of our global team uses the product each day. Saving the time to input a second set of credentials, daily, has real impact. Additionally, removing the extra step makes reaching these critical tools easier from mobile devices.

When we rolled this out at Cloudflare, team members had one fewer disruption in their day. We all became accustomed to it. We only received real feedback when we disabled it, briefly, to test a new release. And that response was loud. When we returned momentarily to the old world of multiple login flows, we started to appreciate just how convenient SSO is for a team. The lesson motivated us to make this available, quickly, to our customers.

To read more about using our Atlassian plugin in your organization, check out the announcement here.

Our engineers can SSH to the resources they need

When we launched Cloudflare Access, we started with browser-based applications. We built a command-line tool to make CLI operations a bit easier, but SSH connections still held us back from killing the VPN altogether.

To solve that challenge, we released support for SSH connections through Cloudflare Access. The feature builds on top of our Argo Tunnel and Argo Smart Routing products.

Argo Smart Routing intelligently routes traffic across Cloudflare’s network, so that our engineers can connect to any data center in our fleet without suffering from Internet congestion. The Argo Tunnel product creates secure, outbound-only connections from our data centers back to our network.

Team members can then use their SSH client to connect without any special wrappers or alternate commands. Our command-line tool, `cloudflared`, generates a single config file change and our engineers are ready to reach servers around the world.
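The config file change `cloudflared` generates is essentially an SSH ProxyCommand entry. A typical result looks like this sketch, where the hostname and binary path are placeholders for your own environment:

```
# Added to ~/.ssh/config (hostname and path are illustrative).
# All SSH traffic to this host is proxied through Cloudflare Access.
Host vm.example.com
  ProxyCommand /usr/local/bin/cloudflared access ssh --hostname %h
```

With that in place, a plain `ssh vm.example.com` works unchanged; the client never needs to know the connection is authenticated through Cloudflare.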

We started by making our internal code repositories available in this flow. Users log in with our SSO and can pull and submit new code without the friction of a private network. We then expanded the deployment to make it possible for our reliability engineering team to connect to the data centers that power Cloudflare’s network without a VPN.

You can read more about using our SSH workflow in your organization in the post here.

We can onboard users rapidly

Cloudflare continues to grow as we add new team members in locations around the world. Keeping a manual list of bookmarks for new users no longer scales.

With Cloudflare Access, we have the pieces that we need to remove that step in the onboarding flow. We released a feature, the Access App Launch, that gives our users a single location from which they can launch any application they should be able to reach with a single click.

For administrators, the App Launch does not require additional configuration for each app. The dashboard reads an organization’s Access policies and only presents apps to the end user that they already have permission to reach. Each team member has a personalized dashboard, out of the box, that they can use to navigate and find the tools they need. No onboarding sessions required.

You can read more about using our App Launch feature in your organization in the post here.

Our security team can add logging everywhere with one click

When users leave the office, security teams can lose a real layer of a defense-in-depth strategy. Employees do not badge into a front desk when they work remotely.

Cloudflare Access addresses remote work blindspots by adding additional visibility into how applications are used. Access logs every authentication event and, if enabled, every user request made to a resource protected by the platform. Administrators can capture every request and attribute it to a user and IP address without any code changes. Cloudflare Access can help teams meet compliance and regulatory requirements for distributed users without any additional development time.

Our Security team uses this data to audit every request made to internal resources without interrupting any application owners.

You can read more about using our per-request logging in your organization in the post here.

How to get started

Your team can use all of the same features to stay productive from any location with Cloudflare for Teams. And until September 1, it’s available to any organization for free.

We recognize that the Coronavirus emergency has put a strain on the infrastructure of companies around the world as more employees work from home. On March 9, Cloudflare made our Teams product, which helps support secure and efficient remote work, free for small businesses through September 1.

As the severity of the outbreak has become clearer over the course of this week, we decided to extend this offer to help any business, regardless of size. The healthy functioning of our economy globally depends on work continuing to get done, even as people need to do that work remotely. If Cloudflare can do anything to help ensure that happens, we believe it is our duty to do so.

If you are already a Cloudflare for Teams customer, we have removed the caps on usage during the COVID-19 emergency, so you can scale to whatever number of seats you need without additional cost.

If you are not yet using Cloudflare for Teams, it can help keep your team online and secure from any location. To find out more, visit teams.cloudflare.com. And if you or your employer are struggling with limits on the capacity of your existing VPN or firewall, we stand ready to help and have removed the limits on the free trials of our Access and Gateway products for at least the next six months. Cloudflare employees are running no-cost onboarding sessions so you can get set up quickly.

You can review the details and sign up for an onboarding session here:

developers.cloudflare.com/access/about/coronavirus-emergency/

If you’re looking to get started with Cloudflare Access today, it’s available on any Cloudflare plan. The first five seats are free. Follow the link here to get started.
Finally, need help getting set up? A quick start guide is available here.

Best practices for Chrome Enterprise admins to enable a remote workforce

The content below is taken from the original ( Best practices for Chrome Enterprise admins to enable a remote workforce), to continue reading please visit the site. Remember to respect the Author & Copyright.

As more businesses consider how to enable their teams to work remotely, IT admins are increasingly thinking through how to best support a distributed workforce. Earlier this week we shared some tips and best practices, but if you’re an admin that uses Chromebooks or Chrome Browser in your environment, you might be wondering what specific steps you should consider. 

Here are a few ideas admins should think about to keep their workforce secure and productive:

Configure settings and policies to keep devices and data secure

Chromebooks have built-in security, but Chrome Enterprise administrators also have access to a wide range of settings to further mitigate risks from malware, phishing, and lost devices. 

First, we recommend checking Google Safe Browsing settings across your Chrome devices and managed browsers to ensure it’s enabled to warn users of malicious sites that might contain malware or be known for phishing. For Chrome devices, you then can check screen lock settings to make sure they automatically lock after being idle to reduce the likelihood of someone using the device when the employee is away. If needed, you can remotely disable a Chrome device directly from the Google Admin console in case a device is lost or stolen, and even post a message that lets the finder know where to return it.

Help employees stay productive with the right apps, network policies, and remote support

It’s a good idea to make sure Chrome devices have the right policies for Wi-Fi, Ethernet, and virtual private network (VPN) access, as well as the appropriate network certificates so employees can access corporate data from home. You can pre-install apps, extensions, and pre-load bookmarks so employees have easy access to sites such as internal intranet or HR pages when not in the office. Employees can also use VDI solutions on Chrome devices such as Citrix or VMware if they need remote access. If Chrome device users are having issues, they can agree to receive remote support with Chrome Remote Desktop. This feature enables you to access a user’s device and help resolve issues quickly.

Manage Chrome Browser across platforms or use CloudReady

If you have multiple platforms in your organization, you can use the Google Admin console to manage Chrome Browser across Windows, Mac, iOS, Android, and Chrome OS at no additional cost. Using Chrome Browser Cloud Management, you can set and enforce policies, manage extensions, and get insights into your browser deployment. The policies can be applied across private and public networks for added controls in remote working environments. You can also use CloudReady to get a Chrome device environment on almost any device without replacing hardware.

Looking ahead

We’ll continue to share our learnings and best practices to enable your remote workforce. In the meantime, you can learn more in our Help Center.

COVID-19 impacts on Internet traffic: Seattle, Northern Italy and South Korea

The content below is taken from the original ( COVID-19 impacts on Internet traffic: Seattle, Northern Italy and South Korea), to continue reading please visit the site. Remember to respect the Author & Copyright.


The last few weeks have seen unprecedented changes in how people live and work around the world. Over time more and more companies have given their employees the right to work from home, restricted business travel and, in some cases, outright sent their entire workforce home. In some countries, quarantines are in place keeping people restricted to their homes.

These changes in daily life are showing up as changes in patterns of Internet use around the world. In this blog post I take a look at changing patterns in northern Italy, South Korea and the Seattle area of Washington state.

Seattle

To understand how Internet use is changing, it’s first helpful to start with what a normal pattern looks like. Here’s a chart of traffic from our Dallas point of presence in the middle of January 2020.


This is a pretty typical pattern. If you look carefully you can see that Internet use is down a little at the weekend and that Internet usage is diurnal: Internet use drops down during the night and then picks up again in the morning. The peaks occur at around 2100 local time and the troughs in the dead of night at around 0300. This sort of pattern repeats worldwide with the only real difference being whether a peak occurs in the early morning (at work) or evening (at home).
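That diurnal shape is easy to recover from raw hourly totals. Here’s a toy sketch with synthetic numbers (the underlying Cloudflare traffic data isn’t public) that finds the peak and trough hours:

```python
# Find the peak and trough hours of a diurnal traffic pattern.
# The counts below are synthetic, illustrative numbers only.
def peak_and_trough(hourly_counts):
    """Return (peak_hour, trough_hour) for 24 hourly request counts."""
    peak = max(range(24), key=lambda h: hourly_counts[h])
    trough = min(range(24), key=lambda h: hourly_counts[h])
    return peak, trough

# Synthetic day shaped like the chart described above:
# quiet around 03:00 local time, busiest around 21:00.
day = [220, 180, 150, 130, 140, 170, 230, 310, 400, 460, 500, 520,
       540, 550, 560, 580, 620, 680, 750, 820, 880, 910, 700, 400]
print(peak_and_trough(day))  # -> (21, 3)
```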

Now here’s Seattle in the first week of January this year. I’ve zoomed in to a single week so we see a little more of the bumpiness of traffic during the day but it’s pretty much the same story.


Now let’s zoom out to the time period January 15 to March 12. Here’s what the chart looks like for traffic coming from Cloudflare’s Seattle PoP over that period (the gaps in the chart are just missing data in the measurement tool I’m using).


Focus in on the beginning of the chart. Looks like the familiar diurnal pattern with quieter weekends. But around January 30 something changes. There’s a big spike of traffic and traffic stays elevated. The weekends aren’t so clear either. The first reported case of COVID-19 was on January 21 in the Seattle area.

Towards the end of February, the first deaths occurred in Washington state. In early March employees of Facebook, Microsoft and Amazon in the Seattle area were all confirmed to be infected. At this point, employers began encouraging or requiring their staff to work from home. If you focus on the last part of the chart and compare it with the first, two things stand out: Internet usage has grown greatly and the nighttime troughs are less evident. People seem to be using the Internet more and for more hours.

Throughout the period there are also days with double spikes of traffic. If I zoom into the period March 5 to March 12 it’s interesting to compare with the week in January above.


Firstly, traffic is up about 40% and nighttime troughs are now above the levels seen in January during the day. The traffic is also spiky and continues through the weekend at levels similar to the working week.

Next we can zoom in on traffic to residential ISPs in the Seattle area. Here’s a chart showing the first three days of this week (March 9 to March 11) compared to Monday to Wednesday a month prior in early February (February 10 to February 12).


Traffic to residential ISPs appears to be up about 5% month on month during the work day. We might have expected this to be higher given the number of local companies asking employees to work from home but many appear to be using VPNs that route all Internet traffic back through the corporate gateway.

Northern Italy

Turning to Italy, and in particular the north, where there has been a serious outbreak of COVID-19 leading to first a local quarantine and then a national one. Most of the traffic in northern Italy is served from our Milan point of presence.

For reference here’s what traffic looked like the first week in January.


A familiar pattern with peak traffic typically in the evening. Here’s traffic for March 5 to 12.


Traffic has grown by more than 30% with Internet usage up at all hours of the day and night. Another change that’s a little harder to see is that traffic is ramping up earlier in the morning than in early January. In early January traffic started rising rapidly at 0900 UTC and reached the daytime plateau you see above at around 1400 UTC. In March, the traffic jumps up more rapidly at 0900 UTC and reaches a first plateau before jumping up again.

Drilling into the types of domains that Italians are accessing we see changes in how people are using the Internet. Online chat systems are up 1.3x to 3x of normal usage. Video streaming appears to have roughly doubled. People are accessing news and information websites about 30% to 60% more and online gaming is up about 20%.

One final look at northern Italy. Here’s the period that covers the introduction of the first cordon sanitaire in communes in the north.


The big spike of traffic is the evening of Monday, February 24 when the first cordons sanitaire came into full effect.

South Korea

Here’s the normal traffic pattern in Seoul, South Korea using the first week of January as an example of what traffic looked like before the outbreak of COVID-19:


And here’s March 5 to 12 for comparison:


There’s no huge change in traffic patterns other than that Internet traffic seen by Cloudflare is up about 5%.

Digging into the websites and APIs that people are accessing in South Korea shows some significant changes: traffic to websites offering anime streaming up over 2x, online chat up 1.2x to 1.8x and online gaming up about 30%.

In both northern Italy and South Korea traffic associated with fitness trackers is down, perhaps reflecting that people are unable to take part in their usual exercise, sports and fitness activities.

Conclusion

Cloudflare is watching carefully as Internet traffic patterns around the world shift as people change their daily lives through home-working, cordon sanitaire, and social distancing. None of these traffic changes raise any concern for us. Cloudflare’s network is well provisioned to handle significant spikes in traffic. We have not seen, and do not anticipate, any impact on our network’s performance, reliability, or security globally.

Demoing HP iLO Mobile App [CCEN]

[https://www.youtube.com/watch?v=kIwAz-9N6tM], In this session of Coffee Coaching, Luis Luciani from the HP iLO Firmware Team, gives you an overview of the HP iLO Mobile Toolbox app and how you can use it to manage your HP ProLiant Server from a smart phone or tablet.

For more information:
http://bit.ly/19KRyJV
http://bit.ly/39PxJ8g
http://bit.ly/2Q42RJ6
http://bit.ly/2U04FnA

[Software, HPOfferings, iLO]

Introducing Microsoft Azure Stack

[https://www.youtube.com/watch?v=cDLiL90bojw], Join Mark Russinovich, Azure CTO, and Jeffrey Snover, Enterprise Cloud Technical Fellow, to learn how Azure Stack will help you drive app innovation by delivering the power of Azure in your datacenter.

Click here to learn more about Azure: http://bit.ly/2wRKVKR

World Record: Double Loop Dare at the 2012 X Games Los Angeles | Hot Wheels

[https://www.youtube.com/watch?v=c6PQ49B5Gpw], Team Hot Wheels drivers, Tanner Foust and Greg Tracy set a Guinness World Record racing two vehicles through a six-story double vertical loop at the 2012 X Games in Los Angeles! It’s Hot Wheels for real!

Watch more Hot Wheels videos: http://bit.ly/2wMzUdW
SUBSCRIBE: http://bit.ly/2IHl0Zk

About Hot Wheels:
For over 51 years, Hot Wheels has been passionate about creating the coolest and craziest toy cars and racetracks. From a line of 16 small 1:64-scale die-cast vehicles, Hot Wheels has evolved into a global lifestyle brand dedicated to fast action and over-the-top, epic stunts.

Connect with Hot Wheels Online:
Visit the official Hot Wheels WEBSITE: http://bit.ly/2TIRvMU
Like Hot Wheels on FACEBOOK: http://bit.ly/39JwZBp
Follow Hot Wheels on TWITTER: http://bit.ly/2W4EUVV
Follow Hot Wheels on INSTAGRAM: http://bit.ly/3aTuA7v

World Record: Double Loop Dare at the 2012 X Games Los Angeles | Hot Wheels
http://www.youtube.com/c/hotwheels

What’s Coffee Coaching! [CCEN]

[https://www.youtube.com/watch?v=ApcYWA747DM], Maciek Szczesniak, Director of Global SMB Business Development at HP, introduces you to the Microsoft HP Coffee Coaching Program.

For more information:
http://bit.ly/39PxJ8g
http://bit.ly/19KRyJV

Plan migration of physical servers using Azure Migrate

The content below is taken from the original ( Plan migration of physical servers using Azure Migrate), to continue reading please visit the site. Remember to respect the Author & Copyright.

At Microsoft Ignite, we announced new Microsoft Azure Migrate assessment capabilities that further simplify migration planning. In this post, I will talk about how you can plan migration of physical servers. Using this feature, you can also plan migration of virtual machines of any hypervisor or cloud. You can get started right away with these features by creating an Azure Migrate project or using an existing project.

Previously, Azure Migrate: Server Assessment only supported VMware and Hyper-V virtual machine assessments for migration to Azure. At Ignite 2019, we added physical server support for assessment features like Azure suitability analysis, migration cost planning, performance-based rightsizing, and application dependency analysis. You can now plan at scale, assessing up to 35,000 physical servers in one Azure Migrate project. If you use VMware or Hyper-V as well, you can discover and assess both physical and virtual servers in the same project. You can create groups of servers, assess by group, and refine the groups further using application dependency information.

While this feature is in preview, the preview is covered by customer support and can be used for production workloads. Let us look at how the assessment helps you plan migration.


Azure suitability analysis

The assessment checks Azure support for each server discovered and determines whether the server can be migrated as-is to Azure. If incompatibilities are found, remediation guidance is automatically provided. You can customize your assessment by changing its properties, and recomputing the assessment. Among other customizations, you can choose a virtual machine series of your choice and specify the uptime of the workloads you will run in Azure.

Cost estimation and sizing

Assessment also provides detailed cost estimates. Performance-based rightsizing can be used to optimize for cost: the performance data of your on-premises server is used to recommend a suitable Azure Virtual Machine and disk SKU. This helps you right-size and control cost as you migrate servers that might be over-provisioned in your on-premises data center. You can apply subscription offers and Reserved Instance pricing to the cost estimates.


Dependency analysis

Once you have established cost estimates and migration readiness, you can plan your migration phases. Using the dependency analysis feature, you can understand which workloads are interdependent and need to be migrated together. This also helps ensure you do not leave critical elements behind on-premises. You can visualize the dependencies in a map or extract the dependency data in a tabular format. You can divide your servers into groups and refine the groups for migration by reviewing the dependencies.


Assess your physical servers in four simple steps

  • Create an Azure Migrate project and add the Server Assessment solution to the project.
  • Set up the Azure Migrate appliance and start discovery of your servers. To set up discovery, you need the server names or IP addresses. Each appliance supports discovery of up to 250 servers; you can set up more than one appliance if required.
  • Once you have successfully set up discovery, create assessments and review the assessment reports.
  • Use the application dependency analysis features to create and refine server groups to phase your migration.

When you are ready to migrate the servers to Azure, you can use Server Migration to carry out the migration. You can read more about migrating physical servers here. In the coming months, we will add support for application discovery and agentless dependency analysis on physical servers as well.

Note that the inventory metadata gathered is persisted in the geography you select while creating the project. You can select a geography of your choice. Server Assessment is available today in Asia Pacific, Australia, Brazil, Canada, Europe, France, India, Japan, Korea, United Kingdom, and United States geographies.

Get started right away by creating an Azure Migrate project. In the upcoming blogs, we will talk about import-based assessments, application discovery, and agentless dependency analysis.

Resources to get started

  1. Tutorial on how to assess physical servers using Azure Migrate: Server Assessment.
  2. Prerequisites for assessment of physical servers
  3. Guide on how to plan an assessment for a large-scale environment. Each appliance supports discovery of 250 servers; you can discover more servers by adding more appliances.
  4. Tutorial on how to migrate physical servers using Azure Migrate: Server Migration.

Facebook’s photo-transfer tool opens to more users in Europe, LatAm and Africa

The content below is taken from the original ( Facebook’s photo-transfer tool opens to more users in Europe, LatAm and Africa), to continue reading please visit the site. Remember to respect the Author & Copyright.

Facebook is continuing to open up access to a data-porting tool it launched in Ireland in December. The tool lets users of its network transfer photos and videos they have stored on its servers directly to another photo storage service, such as Google Photos, via encrypted transfer.

A Facebook spokesman confirmed to TechCrunch that access to the transfer tool is being rolled out today to the U.K., the rest of the European Union and additional countries in Latin America and Africa.

Late last month Facebook also opened up access to multiple markets in APAC and LatAm, per the spokesman. The tech giant has previously said the tool will be available worldwide in the first half of 2020.

The setting to “transfer a copy of your photos and videos” is accessed via the Your Facebook Information settings menu.

The tool is based on code developed via Facebook’s participation in the Data Transfer Project (DTP) — a collaborative effort started in 2018 and backed by the likes of Apple, Facebook, Google, Microsoft and Twitter — which committed to build a common framework using open-source code for connecting any two online service providers in order to support “seamless, direct, user initiated portability of data between the two platforms.”

In recent years the dominance of tech giants has led to an increase in competition complaints — garnering the attention of policymakers and regulators.

In the EU, for instance, competition regulators are now eyeing the data practices of tech giants including Amazon, Facebook and Google, while in the U.S. tech giants including Google, Facebook, Amazon, Apple and Microsoft are also facing antitrust scrutiny. As more questions are asked about antitrust, big tech has come under pressure to respond — hence the collective push on portability.

Last September Facebook also released a white paper laying out its thinking on data portability, which seeks to frame it as a challenge to privacy — in what looks like an attempt to lobby for a regulatory moat to limit portability of the personal data mountain it’s amassed on users.

At the same time, the release of a portability tool gives Facebook something to point regulators to when they come calling — even as the tool only allows users to port a very small portion of the personal data the service holds on them. Such tools are also only likely to be sought out by the minority of more tech-savvy users.

Facebook’s transfer tool also currently only supports direct transfer to Google’s cloud storage — greasing a pipe for users to pass a copy of their facial biometrics from one tech giant to another.

We checked, and from our location in the EU, Google Photos is the only direct destination offered via Facebook’s drop-down menu thus far:

However the spokesman implied wider utility could be coming — saying the DTP project updated adapters for photos APIs from SmugMug (which owns Flickr); and added new integrations for music streaming service Deezer; decentralized social network Mastodon; and Tim Berners-Lee’s decentralization project Solid.

He said the adapters are on a per-data-type basis, noting that open-source contributors are working on adapters for a range of data types (such as photos, playlists and contacts) — and pointing to a list of projects in development available on GitHub.

Though it’s not entirely clear why there’s no option offered as yet within Facebook to port direct to any of these other services. Presumably additional development work is still required by the third party to implement the direct data transfer. Asked about this the spokesman confirmed Google Photos is the only option for now, saying it’s “a first step” which he claimed “provides stakeholders with a tangible tool to assess while other companies join the DTP and we work toward transfers to different services and data types.” (We’ve asked Facebook for more on this and will update if we get a response.)

The aim of the DTP is to develop a standardized version to make it easier for others to join without having to “recreate the wheel every time they want to build portability tools,” as the spokesman put it, adding: “We built this tool with the support of current DTP partners, and hope that even more companies and partners will join us in the future.”

He also emphasized that the code is open source and claimed it’s “fairly straightforward” for a company that wishes to plug its service into the framework, especially if it already has a public API.

“They just need to write a DTP adapter against that public API,” he suggested.

“Now that the tool has launched, we look forward to working with even more experts and companies especially startups and new platforms looking to provide an on-ramp for this type of service,” the spokesman added.
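Conceptually, the adapter model described above is small: an exporter pulls items out of one service’s public API into a neutral shape, and an importer pushes them into another. The sketch below is purely hypothetical (the real DTP framework is Java, and none of these class or method names are its actual API):

```python
# Hypothetical sketch of the adapter idea behind the Data Transfer Project:
# an exporter reads items from one service's API into a service-neutral
# shape, and an importer writes them into another service.
class PhotosExporter:
    def __init__(self, api_client):
        self.api = api_client  # any iterable of source-service photo records

    def export(self):
        # Yield items in a common, service-neutral shape.
        for photo in self.api:
            yield {"title": photo["name"], "bytes": photo["data"]}

class PhotosImporter:
    def __init__(self, store):
        self.store = store  # stand-in for the destination service

    def import_items(self, items):
        for item in items:
            self.store.append(item)

# Wire two services together: export from A, import into B.
source_api = [{"name": "cat.jpg", "data": b"\xff\xd8"}]
destination = []
PhotosImporter(destination).import_items(PhotosExporter(source_api).export())
print(len(destination))  # -> 1
```

A new service joins by writing one adapter against its own public API; every other DTP participant can then transfer to or from it without bespoke integrations.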

This report was updated with additional detail from Facebook.

I wrote a free app that’ll help you sketch cloud architecture diagrams

The content below is taken from the original ( I wrote a free app that’ll help you sketch cloud architecture diagrams), to continue reading please visit the site. Remember to respect the Author & Copyright.


I wrote an app that’ll help you sketch cloud architecture diagrams for free. All Azure, AWS, GCP, Kubernetes, CNCF icons are preloaded in the app. Hope the community finds it useful: cloudskew.com

Edit: Thank you for the reddit gold folks. Will pay it forward!


submitted by /u/mithunshanbhag to r/AZURE
[link] [comments]

Introducing Raspberry Pi Imager, our new imaging utility

The content below is taken from the original ( in /r/ raspberry_pi), to continue reading please visit the site. Remember to respect the Author & Copyright.

http://bit.ly/2VMlSU5

The History of the URL

The content below is taken from the original ( The History of the URL), to continue reading please visit the site. Remember to respect the Author & Copyright.


On the 11th of January 1982 twenty-two computer scientists met to discuss an issue with ‘computer mail’ (now known as email). Attendees included the guy who would create Sun Microsystems, the guy who made Zork, the NTP guy, and the guy who convinced the government to pay for Unix. The problem was simple: there were 455 hosts on the ARPANET and the situation was getting out of control.


This issue was occurring now because the ARPANET was on the verge of switching from its original NCP protocol to the TCP/IP protocol which powers what we now call the Internet. With that switch suddenly there would be a multitude of interconnected networks (an ‘Inter… net’) requiring a more ‘hierarchical’ domain system where ARPANET could resolve its own domains while the other networks resolved theirs.

Other networks at the time had great names like “COMSAT”, “CHAOSNET”, “UCLNET” and “INTELPOSTNET” and were maintained by groups of universities and companies all around the US who wanted to be able to communicate, and could afford to lease 56k lines from the phone company and buy the requisite PDP-11s to handle routing.


In the original ARPANET design, a central Network Information Center (NIC) was responsible for maintaining a file listing every host on the network. The file was known as the HOSTS.TXT file, similar to the /etc/hosts file on a Linux or OS X system today. Every network change would require the NIC to FTP (a protocol invented in 1971) to every host on the network, a significant load on their infrastructure.

Having a single file list every host on the Internet would, of course, not scale indefinitely. The priority was email, however, as it was the predominant addressing challenge of the day. Their ultimate conclusion was to create a hierarchical system in which you could query an external system for just the domain or set of domains you needed. In their words: “The conclusion in this area was that the current ‘user@host’ mailbox identifier should be extended to ‘user@host.domain’ where ‘domain’ could be a hierarchy of domains.” And the domain was born.


It’s important to dispel any illusion that these decisions were made with prescience for the future the domain name would have. In fact, their elected solution was primarily decided because it was the “one causing least difficulty for existing systems.” For example, one proposal was for email addresses to be of the form <user>.<host>@<domain>. If email usernames of the day hadn’t already had ‘.’ characters you might be emailing me at ‘zack.cloudflare@com’ today.


UUCP and the Bang Path

It has been said that the principal function of an operating system is to define a number of different names for the same object, so that it can busy itself keeping track of the relationship between all of the different names. Network protocols seem to have somewhat the same characteristic.

— David D. Clark, 1982

Another failed proposal involved separating domain components with the exclamation mark (!). For example, to connect to the ISIA host on ARPANET, you would connect to !ARPA!ISIA. You could then query for hosts using wildcards, so !ARPA!* would return to you every ARPANET host.

This method of addressing wasn’t a crazy divergence from the standard, it was an attempt to maintain it. The system of exclamation separated hosts dates to a data transfer tool called UUCP created in 1976. If you’re reading this on an OS X or Linux computer, uucp is likely still installed and available at the terminal.

ARPANET was introduced in 1969, and quickly became a powerful communication tool… among the handful of universities and government institutions which had access to it. The Internet as we know it wouldn’t become publicly available outside of research institutions until 1991, twenty-one years later. But that didn’t mean computer users weren’t communicating.


In the era before the Internet, the general method of communication between computers was with a direct point-to-point dial up connection. For example, if you wanted to send me a file, you would have your modem call my modem, and we would transfer the file. To craft this into a network of sorts, UUCP was born.

In this system, each computer has a file which lists the hosts it’s aware of, their phone number, and a username and password on that host. You then craft a ‘path’ from your current machine to your destination, through hosts which each know how to connect to the next:

sw-hosts!digital-lobby!zack


This address would form not just a method of sending me files or connecting with my computer directly, but also would be my email address. In this era before ‘mail servers’, if my computer was off you weren’t sending me an email.
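Mechanically, a bang path is just host names separated by ‘!’ with the user at the end. A tiny sketch, using the example address above:

```python
# Parse a UUCP 'bang path' into its relay hosts and final user.
def parse_bang_path(path):
    *hosts, user = path.split("!")
    return hosts, user

hosts, user = parse_bang_path("sw-hosts!digital-lobby!zack")
print(hosts, user)  # -> ['sw-hosts', 'digital-lobby'] zack
```

Each host in the list had to know how to dial the next one; drop any relay from its neighbor’s host file and the whole path stopped working.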

While use of ARPANET was restricted to top-tier universities, UUCP created a bootleg Internet for the rest of us. It formed the basis for both Usenet and the BBS system.

DNS

Ultimately, the DNS system we still use today would be proposed in 1983. If you run a DNS query today, for example using the dig tool, you’ll likely see a response which looks like this:

;; ANSWER SECTION:
google.com.   299 IN  A 172.217.4.206

This is informing us that google.com is reachable at 172.217.4.206. As you might know, the A is informing us that this is an ‘address’ record, mapping a domain to an IPv4 address. The 299 is the ‘time to live’, letting us know how many more seconds this value will be valid for, before it should be queried again. But what does the IN mean?

IN stands for ‘Internet’. Like so much of this, the field dates back to an era when there were several competing computer networks which needed to interoperate. Other potential values were CH for the CHAOSNET or HS for Hesiod which was the name service of the Athena system. CHAOSNET is long dead, but a much evolved version of Athena is still used by students at MIT to this day. You can find the list of DNS classes on the IANA website, but it’s no surprise only one potential value is in common use today.
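To make the wire format concrete, here is a minimal sketch (in Python, standard library only) of how a client might build the DNS query that produces an answer like the one above. The transaction ID 0x1234 is an arbitrary choice for illustration:

```python
import struct

def build_dns_query(name, qtype=1, qclass=1):
    """Build a DNS query packet for `name`.

    qtype 1 is an A record; qclass 1 is IN, the 'Internet' class
    discussed above. A 12-byte header is followed by the name encoded
    as length-prefixed labels, then the type and class.
    """
    # Header: id, flags (0x0100 = recursion desired), 1 question,
    # and zero answer/authority/additional records.
    header = struct.pack('>HHHHHH', 0x1234, 0x0100, 1, 0, 0, 0)
    # 'google.com' becomes b'\x06google\x03com\x00'.
    qname = b''.join(
        bytes([len(label)]) + label.encode('ascii')
        for label in name.split('.')
    ) + b'\x00'
    return header + qname + struct.pack('>HH', qtype, qclass)

query = build_dns_query('google.com')
```

Sending that packet over UDP to a resolver on port 53 yields a response containing an A record like the one above; parsing the answer is left out for brevity.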

TLDs

It is extremely unlikely that any other TLDs will be created.

— Jon Postel, 1994

Once it was decided that domain names should be arranged hierarchically, it became necessary to decide what sits at the root of that hierarchy. That root is traditionally signified with a single ‘.’. In fact, ending all of your domain names with a ‘.’ is semantically correct, and will absolutely work in your web browser: google.com.

The first TLD was .arpa. It allowed users to address their old traditional ARPANET hostnames during the transition. For example, if my machine was previously registered as hfnet, my new address would be hfnet.arpa. That was only temporary; during the transition, server administrators had a very important choice to make: which of the five TLDs would they adopt? “.com”, “.gov”, “.org”, “.edu” or “.mil”.

When we say DNS is hierarchical, what we mean is there is a set of root DNS servers which are responsible for, for example, turning .com into the .com nameservers, who will in turn answer how to get to google.com. The root DNS zone of the internet is composed of thirteen DNS server clusters. There are only 13 server clusters, because that’s all we can fit in a single UDP packet. Historically, DNS has operated through UDP packets, meaning the response to a request can never be more than 512 bytes.

;       This file holds the information on root name servers needed to
;       initialize cache of Internet domain name servers
;       (e.g. reference this file in the "cache  .  "
;       configuration file of BIND domain name servers).
;
;       This file is made available by InterNIC 
;       under anonymous FTP as
;           file                /domain/named.cache
;           on server           FTP.INTERNIC.NET
;       -OR-                    RS.INTERNIC.NET
;
;       last update:    March 23, 2016
;       related version of root zone:   2016032301
;
; formerly NS.INTERNIC.NET
;
.                        3600000      NS    A.ROOT-SERVERS.NET.
A.ROOT-SERVERS.NET.      3600000      A     198.41.0.4
A.ROOT-SERVERS.NET.      3600000      AAAA  2001:503:ba3e::2:30
;
; FORMERLY NS1.ISI.EDU
;
.                        3600000      NS    B.ROOT-SERVERS.NET.
B.ROOT-SERVERS.NET.      3600000      A     192.228.79.201
B.ROOT-SERVERS.NET.      3600000      AAAA  2001:500:84::b
;
; FORMERLY C.PSI.NET
;
.                        3600000      NS    C.ROOT-SERVERS.NET.
C.ROOT-SERVERS.NET.      3600000      A     192.33.4.12
C.ROOT-SERVERS.NET.      3600000      AAAA  2001:500:2::c
;
; FORMERLY TERP.UMD.EDU
;
.                        3600000      NS    D.ROOT-SERVERS.NET.
D.ROOT-SERVERS.NET.      3600000      A     199.7.91.13
D.ROOT-SERVERS.NET.      3600000      AAAA  2001:500:2d::d
;
; FORMERLY NS.NASA.GOV
;
.                        3600000      NS    E.ROOT-SERVERS.NET.
E.ROOT-SERVERS.NET.      3600000      A     192.203.230.10
;
; FORMERLY NS.ISC.ORG
;
.                        3600000      NS    F.ROOT-SERVERS.NET.
F.ROOT-SERVERS.NET.      3600000      A     192.5.5.241
F.ROOT-SERVERS.NET.      3600000      AAAA  2001:500:2f::f
;
; FORMERLY NS.NIC.DDN.MIL
;
.                        3600000      NS    G.ROOT-SERVERS.NET.
G.ROOT-SERVERS.NET.      3600000      A     192.112.36.4
;
; FORMERLY AOS.ARL.ARMY.MIL
;
.                        3600000      NS    H.ROOT-SERVERS.NET.
H.ROOT-SERVERS.NET.      3600000      A     198.97.190.53
H.ROOT-SERVERS.NET.      3600000      AAAA  2001:500:1::53
;
; FORMERLY NIC.NORDU.NET
;
.                        3600000      NS    I.ROOT-SERVERS.NET.
I.ROOT-SERVERS.NET.      3600000      A     192.36.148.17
I.ROOT-SERVERS.NET.      3600000      AAAA  2001:7fe::53
;
; OPERATED BY VERISIGN, INC.
;
.                        3600000      NS    J.ROOT-SERVERS.NET.
J.ROOT-SERVERS.NET.      3600000      A     192.58.128.30
J.ROOT-SERVERS.NET.      3600000      AAAA  2001:503:c27::2:30
;
; OPERATED BY RIPE NCC
;
.                        3600000      NS    K.ROOT-SERVERS.NET.
K.ROOT-SERVERS.NET.      3600000      A     193.0.14.129
K.ROOT-SERVERS.NET.      3600000      AAAA  2001:7fd::1
;
; OPERATED BY ICANN
;
.                        3600000      NS    L.ROOT-SERVERS.NET.
L.ROOT-SERVERS.NET.      3600000      A     199.7.83.42
L.ROOT-SERVERS.NET.      3600000      AAAA  2001:500:9f::42
;
; OPERATED BY WIDE
;
.                        3600000      NS    M.ROOT-SERVERS.NET.
M.ROOT-SERVERS.NET.      3600000      A     202.12.27.33
M.ROOT-SERVERS.NET.      3600000      AAAA  2001:dc3::35
; End of file

Root DNS servers operate in safes, inside locked cages. A clock sits on the safe to ensure the camera feed hasn’t been looped. Particularly given how slow DNSSEC implementation has been, an attack on one of those servers could allow an attacker to redirect all of the Internet traffic for a portion of Internet users. This, of course, makes for the most fantastic heist movie to have never been made.

Unsurprisingly, the nameservers for TLDs don’t actually change all that often. 98% of the requests root DNS servers receive are in error, most often because of broken and toy clients which don’t properly cache their results. This became such a problem that several root DNS operators had to spin up special servers just to return ‘go away’ to all the people asking for reverse DNS lookups on their local IP addresses.

The TLD nameservers are administered by different companies and governments all around the world (Verisign manages .com). When you purchase a .com domain, about $0.18 goes to ICANN, and $7.85 goes to Verisign.

Punycode

It is rare in this world that the silly name us developers think up for a new project makes it into the final, public, product. We might name the company database Delaware (because that’s where all the companies are registered), but you can be sure by the time it hits production it will be CompanyMetadataDatastore. But rarely, when all the stars align and the boss is on vacation, one slips through the cracks.

Punycode is the system we use to encode Unicode into domain names. The problem it solves is simple: how do you write 比薩.com when the entire Internet naming system was built around the ASCII alphabet, whose most exotic character is the tilde?

It’s not a simple matter of switching domains to use Unicode. The original documents which govern domains specify they are to be encoded in ASCII. Every piece of internet hardware from the last forty years, including the Cisco and Juniper routers used to deliver this page to you, makes that assumption.

The web itself was never ASCII-only. It was actually originally conceived to speak ISO 8859-1, which includes all of the ASCII characters but adds an additional set of special characters like ¼ and letters with special marks like ä. It does not, however, contain any non-Latin characters.

This restriction on HTML was ultimately removed in 2007 and that same year Unicode became the most popular character set on the web. But domains were still confined to ASCII.

As you might guess, Punycode was not the first proposal to solve this problem. You most likely have heard of UTF-8, which is a popular way of encoding Unicode into bytes (the 8 is for the eight bits in a byte). In the year 2000 several members of the Internet Engineering Task Force came up with UTF-5. The idea was to encode Unicode into five bit chunks. You could then map each five bits into a character allowed (A-V & 0-9) in domain names. So if I had a website for Japanese language learning, my site 日本語.com would become the cryptic M5E5M72COA9E.com.

This encoding method has several disadvantages. For one, A-V and 0-9 are used in the output encoding, meaning if you wanted to actually include one of those characters in your domain, it had to be encoded like everything else. This made for some very long domains, which is a serious problem when each segment of a domain is restricted to 63 characters. A domain in the Myanmar language would be restricted to no more than 15 characters. The proposal does make the very interesting suggestion of using UTF-5 to allow Unicode to be transmitted by Morse code and telegram though.

There was also the question of how to let clients know that this domain was encoded so they could display them in the appropriate Unicode characters, rather than showing M5E5M72COA9E.com in my address bar. There were several suggestions, one of which was to use an unused bit in the DNS response. It was the “last unused bit in the header”, and the DNS folks were “very hesitant to give it up” however.

Another suggestion was to start every domain using this encoding method with ra--. At the time (mid-April 2000), there were no domains which happened to start with those particular characters. If I know anything about the Internet, someone registered an ra-- domain out of spite immediately after the proposal was published.

The ultimate conclusion, reached in 2003, was to adopt a format called Punycode which included a form of delta compression which could dramatically shorten encoded domain names. Delta compression is a particularly good idea because the odds are all of the characters in your domain are in the same general area within Unicode. For example, two characters in Farsi are going to be much closer together than a Farsi character and another in Hindi. To give an example of how this works, if we take the nonsense phrase:

يذؽ

In an uncompressed format, that would be stored as the three characters [1610, 1584, 1597] (based on their Unicode code points). To compress this we first sort it numerically (keeping track of where the original characters were): [1584, 1597, 1610]. Then we can store the lowest value (1584), and the delta between that value and the next character (13), and again for the following character (13), which is significantly less to transmit and store.

Punycode then (very) efficiently encodes those integers into characters allowed in domain names, and inserts an xn-- at the beginning to let consumers know this is an encoded domain. You’ll notice that all the Unicode characters end up together at the end of the domain. They don’t just encode their value, they also encode where they should be inserted into the ASCII portion of the domain. To provide an example, the website 熱狗sales.com becomes xn--sales-r65lm0e.com. Anytime you type a Unicode-based domain name into your browser’s address bar, it is encoded in this way.
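You can see both layers in Python, whose standard library still ships codecs for raw Punycode and for the full IDNA encoding (complete with the xn-- prefix). I’m using the word bücher here rather than the example above, just to keep the output short:

```python
# Raw Punycode: the ASCII characters come first, then a delimiter,
# then the compressed encoding of the non-ASCII characters.
print('bücher'.encode('punycode'))   # b'bcher-kva'

# The IDNA codec applies Punycode per label and adds the xn-- prefix.
print('bücher.com'.encode('idna'))   # b'xn--bcher-kva.com'

# Decoding reverses the process.
print(b'xn--bcher-kva.com'.decode('idna'))  # bücher.com
```

This is exactly the transformation your browser performs on any Unicode domain you type into the address bar.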

This transformation could be transparent, but that introduces a major security problem. All sorts of Unicode characters print identically to existing ASCII characters. For example, you likely can’t see the difference between Cyrillic small letter a (“а”) and Latin small letter a (“a”). If I register Cyrillic аmazon.com (xn--mazon-3ve.com), and manage to trick you into visiting it, it’s gonna be hard to know you’re on the wrong site. For that reason, when you visit 🍕💩.ws, your browser somewhat lamely shows you xn--vi8hiv.ws in the address bar.

Protocol

The first portion of the URL is the protocol which should be used to access it. The most common protocol is http, which is the simple document transfer protocol Tim Berners-Lee invented specifically to power the web. It was not the only option. Some people believed we should just use Gopher. Rather than being general-purpose, Gopher is specifically designed to send structured data similar to how a file tree is structured.

For example, if you request the /Cars endpoint, it might return:

1Chevy Camaro             /Archives/cars/cc     gopher.cars.com     70
iThe Camaro is a classic  fake                  (NULL)              0
iAmerican Muscle car      fake                  (NULL)              0
1Ferrari 451              /Factbook/ferrari/451  gopher.ferrari.net 70

which identifies two cars, along with some metadata about them and where you can connect to for more information. The understanding was your client would parse this information into a usable form which linked the entries with the destination pages.
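On the wire, each Gopher menu line is tab-separated: a one-character type, a display string, a selector, a host, and a port. A hypothetical parser for such lines might look like:

```python
def parse_gopher_line(line):
    """Parse one tab-separated Gopher menu line.

    The first character is the item type ('1' = submenu,
    'i' = informational text, '0' = a text file, ...); the rest
    is display text, selector, host, and port, separated by tabs.
    """
    item_type, rest = line[0], line[1:]
    display, selector, host, port = rest.split('\t')
    return {
        'type': item_type,
        'display': display,
        'selector': selector,
        'host': host,
        'port': int(port),
    }

item = parse_gopher_line('1Chevy Camaro\t/Archives/cars/cc\tgopher.cars.com\t70')
```

This is the “parse this information into a usable form” step the client was expected to perform.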

The first popular protocol was FTP, which was created in 1971, as a way of listing and downloading files on remote computers. Gopher was a logical extension of this, in that it provided a similar listing, but included facilities for also reading the metadata about entries. This meant it could be used for more liberal purposes like a news feed or a simple database. It did not have, however, the freedom and simplicity which characterizes HTTP and HTML.

HTTP is a very simple protocol, particularly when compared to alternatives like FTP or even the HTTP/3 protocol which is rising in popularity today. First off, HTTP is entirely text based, rather than being composed of bespoke binary incantations (which would have made it significantly more efficient). Tim Berners-Lee correctly intuited that using a text-based format would make it easier for generations of programmers to develop and debug HTTP-based applications.

HTTP also makes almost no assumptions about what you’re transmitting. Despite the fact that it was invented explicitly to accompany the HTML language, it allows you to specify that your content is of any type (using the MIME Content-Type, which was a new invention at the time). The protocol itself is rather simple:

A request:

GET /index.html HTTP/1.1
Host: www.example.com

Might respond:

HTTP/1.1 200 OK
Date: Mon, 23 May 2005 22:38:34 GMT
Content-Type: text/html; charset=UTF-8
Content-Encoding: UTF-8
Content-Length: 138
Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT
Server: Apache/1.3.3.7 (Unix) (Red-Hat/Linux)
ETag: "3f80f-1b6-3e1cb03b"
Accept-Ranges: bytes
Connection: close

<html>
    <head>
        <title>An Example Page</title>
    </head>
    <body>
        Hello World, this is a very simple HTML document.
    </body>
</html>

To put this in context, you can think of the networking system the Internet uses as starting with IP, the Internet Protocol. IP is responsible for getting a small packet of data (around 1500 bytes) from one computer to another. On top of that we have TCP, which is responsible for taking larger blocks of data like entire documents and files and sending them via many IP packets reliably. On top of that, we then implement a protocol like HTTP or FTP, which specifies what format should be used to make the data we send via TCP (or UDP, etc.) understandable and meaningful.

In other words, TCP/IP sends a whole bunch of bytes to another computer; the protocol says what those bytes should be and what they mean.

You can make your own protocol if you like, assembling the bytes in your TCP messages however you like. The only requirement is that whoever you are talking to speaks the same language. For this reason, it’s common to standardize these protocols.
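Because HTTP is just text over TCP, speaking it by hand takes only a few lines. This sketch builds a request like the one shown above and sends it over a raw socket; the host and path are placeholders, not anything from the original article:

```python
import socket

def build_request(host, path='/'):
    # An HTTP/1.1 request is plain ASCII: a request line, headers,
    # then a blank line to mark the end of the headers.
    return (f'GET {path} HTTP/1.1\r\n'
            f'Host: {host}\r\n'
            'Connection: close\r\n'
            '\r\n').encode('ascii')

def fetch(host, port=80, path='/'):
    # Open a TCP connection, write the request, read until the
    # server closes the connection.
    with socket.create_connection((host, port)) as sock:
        sock.sendall(build_request(host, path))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b''.join(chunks)
```

The response that comes back is the status line, headers, and body, exactly as printed above; no binary framing is involved anywhere.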

There are, of course, many less important protocols to play with. For example, there is a Quote of The Day protocol (port 17), and a Random Characters protocol (port 19). They may seem silly today, but they also showcase just how important a general-purpose document transmission format like HTTP was.

Port

The timeline of Gopher and HTTP can be evidenced by their default port numbers. Gopher is 70, HTTP 80. The HTTP port was assigned (likely by Jon Postel at the IANA) at the request of Tim Berners-Lee sometime between 1990 and 1992.

This concept of registering ‘port numbers’ predates even the Internet. In the original NCP protocol which powered the ARPANET, remote addresses were identified by 40 bits. The first 32 identified the remote host, similar to how an IP address works today. The last eight were known as the AEN (it stood for “Another Eight-bit Number”), and were used by the remote machine in the way we use a port number: to separate messages destined for different processes. In other words, the address specifies which machine the message should go to, and the AEN (or port number) tells that remote machine which application should get the message.

They quickly requested that users register these ‘socket numbers’ to limit potential collisions. When port numbers were expanded to 16 bits by TCP/IP, that registration process was continued.

While protocols have a default port, it makes sense to allow ports to also be specified manually to allow for local development and the hosting of multiple services on the same machine. That same logic was the basis for prefixing websites with www.. At the time, it was unlikely anyone was getting access to the root of their domain, just for hosting an ‘experimental’ website. But if you give users the hostname of your specific machine (dx3.cern.ch), you’re in trouble when you need to replace that machine. By using a common subdomain (www.cern.ch) you can change what it points to as needed.

The Bit In-between

As you probably know, the URL syntax places a double slash (//) between the protocol and the rest of the URL:

http://cloudflare.com

That double slash was inherited from the Apollo computer system which was one of the first networked workstations. The Apollo team had a similar problem to Tim Berners-Lee: they needed a way to separate a path from the machine that path is on. Their solution was to create a special path format:

//computername/file/path/as/usual

And TBL copied that scheme. Incidentally, he now regrets that decision, wishing the domain (in this case example.com) was the first portion of the path:

http:com/example/foo/bar/baz

URLs were never intended to be what they’ve become: an arcane way for a user to identify a site on the Web. Unfortunately, we’ve never been able to standardize URNs, which would give us a more useful naming system. Arguing that the current URL system is sufficient is like praising the DOS command line, and stating that most people should simply learn to use command line syntax. The reason we have windowing systems is to make computers easier to use, and more widely used. The same thinking should lead us to a superior way of locating specific sites on the Web.

— Dale Dougherty 1996

There are several different ways to understand the ‘Internet’. One is as a system of computers connected using a computer network. That version of the Internet came into being in 1969 with the creation of the ARPANET. Mail, files and chat all moved over that network before the creation of HTTP, HTML, or the ‘web browser’.

In 1992 Tim Berners-Lee created three things, giving birth to what we consider the Internet: the HTTP protocol, HTML, and the URL. His goal was to bring ‘Hypertext’ to life. Hypertext at its simplest is the ability to create documents which link to one another. At the time it was viewed more as a science-fiction panacea, to be complemented by Hypermedia, and any other word you could add ‘Hyper’ in front of.

The key requirement of Hypertext was the ability to link from one document to another. In TBL’s time though, these documents were hosted in a multitude of formats and accessed through protocols like Gopher and FTP. He needed a consistent way to refer to a file which encoded its protocol, its host on the Internet, and where it existed on that host.

At the original World-Wide Web presentation in March of 1992 TBL described it as a ‘Universal Document Identifier’ (UDI). Many different formats were considered for this identifier:

protocol: aftp host: xxx.yyy.edu path: /pub/doc/README
 
PR=aftp; H=xx.yy.edu; PA=/pub/doc/README;
 
PR:aftp/xx.yy.edu/pub/doc/README
 
/aftp/xx.yy.edu/pub/doc/README

This document also explains why spaces must be encoded in URLs (%20):

The use of white space characters has been avoided in UDIs: spaces are not legal characters. This was done because of the frequent introduction of extraneous white space when lines are wrapped by systems such as mail, or sheer necessity of narrow column width, and because of the inter-conversion of various forms of white space which occurs during character code conversion and the transfer of text between applications.

What’s most important to understand is that the URL was fundamentally just an abbreviated way of referring to the combination of scheme, domain, port, credentials and path which previously had to be understood contextually for each different communication system.

It was first officially defined in an RFC published in 1994.

scheme:[//[user:password@]host[:port]][/]path[?query][#fragment]

This system made it possible to refer to different systems from within Hypertext, but now that virtually all content is hosted over HTTP, may not be as necessary anymore. As early as 1996 browsers were already inserting the http:// and www. for users automatically (rendering any advertisement which still contains them truly ridiculous).
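Python’s urllib.parse will decompose a URL into exactly the components of that grammar (the URL here is a made-up example):

```python
from urllib.parse import urlsplit

parts = urlsplit('http://user:password@example.com:8080/path?q=1#frag')
print(parts.scheme)    # http
print(parts.username)  # user
print(parts.hostname)  # example.com
print(parts.port)      # 8080
print(parts.path)      # /path
print(parts.query)     # q=1
print(parts.fragment)  # frag
```

Every component except the path is optional, which is what lets http://example.com work as a complete URL.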

Path

I do not think the question is whether people can learn the meaning of the URL, I just find it morally abhorrent to force grandma or grandpa to understand what, in the end, are UNIX file system conventions.

— Israel del Rio 1996

The slash-separated path component of a URL should be familiar to any user of any computer built in the last fifty years. The hierarchical filesystem itself was introduced by the MULTICS system. Its creator, in turn, attributes it to a two-hour conversation with Albert Einstein he had in 1952.

MULTICS used the greater-than symbol (>) to separate file path components. For example:

>usr>bin>local>awk

That was perfectly logical, but unfortunately the Unix folks decided to use > to represent redirection, delegating path separation to the forward slash (/).

Snapchat the Supreme Court

Wrong. We are I now see clearly *disagreeing*. You and I.

As a person I reserve the right to use different criteria for different purposes. I want to be able to give names to generic works, AND to particular translations AND to particular versions. I want a richer world than you propose. I don’t want to be constrained by your two-level system of “documents” and “variants”.

— Tim Berners-Lee 1993

One half of the URLs referenced by US Supreme Court opinions point to pages which no longer exist. If you were reading an academic paper in 2011, written in 2001, you have better than even odds that any given URL won’t be valid.

There was a fervent belief in 1993 that the URL would die, in favor of the ‘URN’. The Uniform Resource Name is a permanent reference to a given piece of content which, unlike a URL, will never change or break. Tim Berners-Lee first described the “urgent need” for them as early as 1991.

The simplest way to craft a URN might be to simply use a cryptographic hash of the contents of the page, for example: urn:791f0de3cfffc6ec7a0aacda2b147839. This method doesn’t meet the criteria of the web community though, as it wasn’t really possible to figure out who to ask to turn that hash into a piece of real content. It also didn’t account for the format changes which often happen to files (compressed vs uncompressed for example) which nevertheless represent the same content.
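A hash-based URN of that sort is easy to sketch; note that the urn:hash: namespace here is made up for illustration, not a registered URN namespace:

```python
import hashlib

def content_urn(content: bytes) -> str:
    # Hypothetical scheme: name a document by the SHA-256 of its bytes.
    return 'urn:hash:' + hashlib.sha256(content).hexdigest()

print(content_urn(b'Hello World, this is a very simple HTML document.'))
```

It also demonstrates the objection above: any change to the bytes, even recompressing identical content, produces an entirely different name.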

In 1996 Keith Shafer and several others proposed a solution to the problem of broken URLs. The link to this solution is now broken. Roy Fielding posted an implementation suggestion in July of 1995. That link is now broken.

I was able to find these pages through Google, which has functionally made page titles the URN of today. The URN format was ultimately finalized in 1997, and has essentially never been used since. The implementation is itself interesting. Each URN is composed of two components, an authority who can resolve a given type of URN, and the specific ID of this document in whichever format the authority understands. For example, urn:isbn:0131103628 will identify a book, forming a permanent link which can (hopefully) be turned into a set of URLs by your local isbn resolver.

Given the power of search engines, it’s possible the best URN format today would be a simple way for files to point to their former URLs. We could allow the search engines to index this information, and link us as appropriate:

<!-- On http://zack.is/history -->
<link rel="past-url" href="http://zackbloom.com/history.html">
<link rel="past-url" href="http://zack.is/history.html">

Query Params

The “application/x-www-form-urlencoded” format is in many ways an aberrant monstrosity, the result of many years of implementation accidents and compromises leading to a set of requirements necessary for interoperability, but in no way representing good design practices.

WhatWG URL Spec

If you’ve used the web for any period of time, you are familiar with query parameters. They follow the path portion of the URL, and encode options like ?name=zack&state=mi. It may seem odd to you that queries use the ampersand character (&) which is the same character used in HTML to encode special characters. In fact, if you’ve used HTML for any period of time, you likely have had to encode ampersands in URLs, turning http://host/?x=1&y=2 into http://host/?x=1&amp;y=2 or http://host?x=1&#38;y=2 (that particular confusion has always existed).

You may have also noticed that cookies follow a similar, but different format: x=1;y=2 which doesn’t actually conflict with HTML character encoding at all. This idea was not lost on the W3C, who encouraged implementers to support ; as well as & in query parameters as early as 1995.
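Both separators survive in today’s tooling. Python’s urllib.parse, for instance, generates & when encoding, and in recent versions the parser requires you to pick one separator explicitly, precisely because of this historical ambiguity:

```python
from urllib.parse import urlencode, parse_qs

# Encoding always uses '&' between pairs.
print(urlencode({'name': 'zack', 'state': 'mi'}))  # name=zack&state=mi

# Parsing splits on '&' by default...
print(parse_qs('name=zack&state=mi'))
# ...but the cookie-style ';' separator can be requested explicitly.
print(parse_qs('x=1;y=2', separator=';'))  # {'x': ['1'], 'y': ['2']}
```

(The separator argument assumes a reasonably recent Python; older versions silently split on both characters.)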

Originally, this section of the URL was strictly used for searching ‘indexes’. The Web was originally created (and its funding was justified) as a method of collaboration for high energy physicists. This is not to say Tim Berners-Lee didn’t know he was really creating a general-purpose communication tool. He didn’t add support for tables for years, which is probably something physicists would have needed.

In any case, these ‘physicists’ needed a way of encoding and linking to information, and a way of searching that information. To provide that, Tim Berners-Lee created the <ISINDEX> tag. If <ISINDEX> appeared on a page, it would inform the browser that this is a page which can be searched. The browser should show a search field, and allow the user to send a query to the server.

That query was formatted as keywords separated by plus characters (+):

http://cernvm/FIND/?sgml+cms

In fantastic Internet fashion, this tag was quickly abused to do all manner of things including providing an input to calculate square roots. It was quickly proposed that perhaps this was too specific, and we really needed a general purpose <input> tag.

That particular proposal actually uses plus signs to separate the components of what otherwise looks like a modern GET query:

http://somehost.somewhere/some/path?x=xxxx+y=yyyy+z=zzzz

This was far from universally acclaimed. Some believed we needed a way of saying that the content on the other side of links should be searchable:

<a HREF="wais://quake.think.com/INFO" INDEX=1>search</a>

Tim Berners-Lee thought we should have a way of defining strongly-typed queries:

<ISINDEX TYPE="iana:/www/classes/query/personalinfo">

I can be somewhat confident in saying, in retrospect, I am glad the more generic solution won out.

The real work on <INPUT> began in January of 1993, based on an older SGML type. It was (perhaps unfortunately) decided that <SELECT> inputs needed a separate, richer, structure:

<select name=FIELDNAME type=CHOICETYPE [value=VALUE] [help=HELPUDI]> 
    <choice>item 1
    <choice>item 2
    <choice>item 3
</select>

If you’re curious, reusing <li> rather than introducing the <option> element was absolutely considered. There were, of course, alternative form proposals. One included some variable substitution evocative of what Angular might do today:

<ENTRYBLANK TYPE=int LENGTH=length DEFAULT=default VAR=lval>Prompt</ENTRYBLANK>
<QUESTION TYPE=float DEFAULT=default VAR=lval>Prompt</QUESTION>
<CHOICE DEFAULT=default VAR=lval>
    <ALTERNATIVE VAL=value1>Prompt1 ...
    <ALTERNATIVE VAL=valuen>Promptn
</CHOICE>

In this example the inputs are checked against the type specified in type, and the VAR values are available on the page for use in string substitution in URLs, à la:

http://cloudflare.com/apps/$appId

Additional proposals actually used @, rather than =, to separate query components:

name@value+name@(value&value)

It was Marc Andreessen who suggested our current method based on what he had already implemented in Mosaic:

name=value&name=value&name=value

Just two months later Mosaic would add support for method=POST forms, and ‘modern’ HTML forms were born.

Of course, it was also Marc Andreessen’s company Netscape who would create the cookie format (using a different separator). Their proposal was itself painfully shortsighted, led to the attempt to introduce a Set-Cookie2 header, and introduced fundamental structural issues we still deal with at Cloudflare to this day.

Fragments

The portion of the URL following the ‘#’ is known as the fragment. Fragments have been a part of URLs since their initial specification, used to link to a specific location on the page being loaded. For example, if I have an anchor on my site:

<a name="bio"></a>

I can link to it:

http://zack.is/#bio

This concept was gradually extended to any element (rather than just anchors), and moved to the id attribute rather than name:

<h1 id="bio">Bio</h1>
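Splitting the fragment off from the rest of the URL is a common enough need that Python’s standard library has a helper for it:

```python
from urllib.parse import urldefrag

# urldefrag returns the URL without its fragment, plus the fragment.
url, fragment = urldefrag('http://zack.is/#bio')
print(url)       # http://zack.is/
print(fragment)  # bio
```

The piece before the ‘#’ is what actually goes to the server; the fragment stays behind in the client.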

Tim Berners-Lee decided to use this character based on its connection to addresses in the United States (despite the fact that he’s British by birth). In his words:

In a snail mail address in the US at least, it is common
to use the number sign for an apartment number or suite
number within a building. So 12 Acacia Av #12 means “The
building at 12 Acacia Av, and then within that the unit
known numbered 12”. It seemed to be a natural character
for the task. Now, http://example.com/foo#bar means
“Within resource http://example.com/foo, the
particular view of it known as bar”.

It turns out that the original Hypertext system, created by Douglas Engelbart, also used the ‘#’ character for the same purpose. This may be coincidental, or it could be a case of accidental “idea borrowing”.

Fragments are explicitly not included in HTTP requests, meaning they only live inside the browser. This concept proved very valuable when it came time to implement client-side navigation (before pushState was introduced). Fragments were also very valuable when it came time to think about how we can store state in URLs without actually sending it to the server. What could that mean? Let’s explore:

Molehills and Mountains

There is a whole standard, as yukky as SGML, on Electronic data Intercahnge [sic], meaning forms and form submission. I know no more except it looks like fortran backwards with no spaces.

— Tim Berners-Lee 1993

There is a popular perception that the internet standards bodies didn’t do much from the finalization of HTTP 1.1 and HTML 4.01 in 2002 to when HTML 5 really got on track. This period is also known (only by me) as the Dark Age of XHTML. The truth is though, the standardization folks were fantastically busy. They were just doing things which ultimately didn’t prove all that valuable.

One such effort was the Semantic Web. The dream was to create a Resource Description Framework (editorial note: run away from any team which seeks to create a framework), which would allow metadata about content to be universally expressed. For example, rather than creating a nice web page about my Corvette Stingray, I could make an RDF document describing its size, color, and the number of speeding tickets I had gotten while driving it.

This is, of course, in no way a bad idea. But the format was XML based, and there was a big chicken-and-egg problem between having the entire world documented, and having the browsers do anything useful with that documentation.

It did however provide a powerful environment for philosophical argument. One of the best such arguments lasted at least ten years, and was known by the masterful codename ‘httpRange-14’.

httpRange-14 sought to answer the fundamental question of what a URL is. Does a URL always refer to a document, or can it refer to anything? Can I have a URL which points to my car?

They didn’t attempt to answer that question in any satisfying manner. Instead they focused on how and when we can use 303 redirects to point users from links which aren’t documents to ones which are, and when we can use URL fragments (the bit after the ‘#’) to point users to linked data.
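The 303 half of that compromise can be sketched with Python’s standard library. The handler and paths below are hypothetical, invented purely for illustration: a URL that names a real-world thing (a car, not a document) answers with “303 See Other”, redirecting the client to a document about that thing.

```python
# A sketch of the httpRange-14 compromise: "/my-car" identifies the car
# itself, so the server redirects to a document describing it rather
# than serving a representation directly.
from http.server import BaseHTTPRequestHandler

class CarHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/my-car":
            # Not a document: point the client at one instead.
            self.send_response(303)
            self.send_header("Location", "/my-car-description.html")
            self.end_headers()
        else:
            # An ordinary document URL, served normally.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"A document about my car.")
```

A client that follows the redirect ends up at a document, while the original URL remains free to denote the thing itself.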

To the pragmatic mind of today, this might seem like a silly question. To many of us, you can use a URL for whatever you manage to use it for, and people will use your thing or they won’t. But the Semantic Web cares for nothing more than semantics, so it was on.

This particular topic was discussed on July 1st 2002, July 15th 2002, July 22nd 2002, July 29th 2002, September 16th 2002, and at least 20 other occasions through 2005. It was resolved by the great ‘httpRange-14 resolution’ of 2005, then reopened by complaints in 2007 and 2011 and a call for new solutions in 2012. The question was heavily discussed by the pedantic web group, which is very aptly named. The one thing which didn’t happen is all that much semantic data getting put on the web behind any sort of URL.

Auth

As you may know, you can include a username and password in URLs:

http://zack:shhhhhh@zack.is

The browser then encodes this authentication data into Base64 and sends it as a header:

Authorization: Basic emFjazpzaGhoaGho

The Base64 encoding exists only to allow characters which might not be valid in a header; it provides no secrecy whatsoever for the username and password values.
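We can reproduce the browser’s encoding in a few lines of Python — a sketch of the mechanics, not a recommendation to put credentials in URLs:

```python
# Basic auth is just "username:password" run through Base64 — trivially
# reversible transport encoding, not encryption.
import base64

username, password = "zack", "shhhhhh"
token = base64.b64encode(f"{username}:{password}".encode()).decode()
print(f"Authorization: Basic {token}")  # Authorization: Basic emFjazpzaGhoaGho

# Anyone who sees the header can reverse it just as easily:
print(base64.b64decode(token).decode())  # zack:shhhhhh
```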

Particularly over the pre-SSL Internet, this was very problematic. Anyone who could snoop on your connection could easily see your password. Many alternatives were proposed, including Kerberos, a security protocol widely used both then and now.

As with so many of these examples though, the simple basic auth proposal was easiest for browser manufacturers (Mosaic) to implement. This made it the first, and ultimately the only, solution until developers were given the tools to build their own authentication systems.

The Web Application

In the world of web applications, it can be a little odd to think of the basis for the web being the hyperlink. It is a method of linking one document to another, which was gradually augmented with styling, code execution, sessions, authentication, and ultimately became the social shared computing experience so many 70s researchers were trying (and failing) to create. Ultimately, the conclusion is just as true for any project or startup today as it was then: all that matters is adoption. If you can get people to use it, however slipshod it might be, they will help you craft it into what they need. The corollary is, of course, that if no one is using it, it doesn’t matter how technically sound it might be. There are countless tools into which millions of hours of work went that precisely no one uses today.

This was adapted from a post which originally appeared on the Eager blog. In 2016, Eager became Cloudflare Apps.


Sia’s First Enterprise Partnership is a Reality

The content below is taken from the original ( Sia’s First Enterprise Partnership is a Reality), to continue reading please visit the site. Remember to respect the Author & Copyright.

Big news! StoreWise has just partnered with industry leader ClearCenter to create ClearSHARE, blockchain-backed distributed data storage that’s to be integrated across its product lines, all built on the Sia protocol!

https://medium.com/storewise/clearcenter-and-storewise-join-forces-to-create-clearshare-the-future-of-decentralized-data-737cd5c952ee

submitted by /u/MeijeSibbel to r/siacoin

IBM and Microsoft support the Vatican’s guidelines for ethical AI

The content below is taken from the original ( IBM and Microsoft support the Vatican’s guidelines for ethical AI), to continue reading please visit the site. Remember to respect the Author & Copyright.

IBM and Microsoft have signed the Vatican's "Rome Call for AI Ethics," a pledge to develop artificial intelligence in a way that protects all people and the planet, Financial Times reports. Microsoft President Brad Smith and John Kelly, IBM's executi…

Videos from the Southwest Show now online

The content below is taken from the original ( Videos from the Southwest Show now online), to continue reading please visit the site. Remember to respect the Author & Copyright.

The RISC OS Southwest Show 2020 took place at the Arnos Manor Hotel in Bristol last week, and as usual the show talks were recorded. These are now online for the benefit of anyone who couldn’t make the show, or could but couldn’t attend the talks. In the order they took place, they are: Richard […]

HPE ProLiant MicroServer Gen10 Plus v Gen10 Hardware Overview

The content below is taken from the original ( HPE ProLiant MicroServer Gen10 Plus v Gen10 Hardware Overview), to continue reading please visit the site. Remember to respect the Author & Copyright.

We take the MicroServer Gen10 and the brand new HPE ProLiant MicroServer Gen10 Plus and do a side-by-side teardown to show you the hardware differences.

The post HPE ProLiant MicroServer Gen10 Plus v Gen10 Hardware Overview appeared first on ServeTheHome.

Edgecore AS7712-32X Switch Overview A 32x 100GbE Switch

The content below is taken from the original ( Edgecore AS7712-32X Switch Overview A 32x 100GbE Switch), to continue reading please visit the site. Remember to respect the Author & Copyright.

We take apart the Edgecore AS7712-32X to show you what this 3.2Tbps 32x 100GbE bare metal switch has to offer and why it is so popular.

The post Edgecore AS7712-32X Switch Overview A 32x 100GbE Switch appeared first on ServeTheHome.