UK hits its 95 percent ‘superfast’ broadband coverage target

The content below is taken from the original ( UK hits its 95 percent ‘superfast’ broadband coverage target), to continue reading please visit the site. Remember to respect the Author & Copyright.

'Superfast' broadband with speeds of at least 24 Mbps is now available across 95 percent of the UK, according to new stats thinkbroadband.com published today. The milestone was actually achieved last month, meaning the government's Broadband Delivery…

Voicelabs launches Alpine to bring retailers to the voice shopping ecosystem

The content below is taken from the original ( Voicelabs launches Alpine to bring retailers to the voice shopping ecosystem), to continue reading please visit the site. Remember to respect the Author & Copyright.

Voicelabs, a company that has been experimenting in the voice computing market for some time with initiatives in advertising and analytics, is now pivoting its business again – this time, to voice-enabled commerce. The company is today launching its latest product out of stealth: Alpine.AI, a solution that builds voice shopping apps for retailers by importing their catalog, then layering…

How to move files between Office 365, SharePoint and OneDrive

The content below is taken from the original ( How to move files between Office 365, SharePoint and OneDrive), to continue reading please visit the site. Remember to respect the Author & Copyright.

Last year, Microsoft announced that it would allow copying files in Office 365. From now on, Microsoft also allows users to move files in Office 365 with full-fidelity protection for metadata and version management. This helps ease up […]

This post How to move files between Office 365, SharePoint and OneDrive is from TheWindowsClub.com.

Trueface.ai integrates with IFTTT as the latest test-case of its facial recognition tech

The content below is taken from the original ( Trueface.ai integrates with IFTTT as the latest test-case of its facial recognition tech), to continue reading please visit the site. Remember to respect the Author & Copyright.

Trueface.ai, the stealthy facial recognition startup that’s backed by 500 Startups and a slew of angel investors, is integrating with IFTTT to allow developers to start playing around with its technology. Chief executive Shaun Moore tells me that the integration with IFTTT represents the first time that facial recognition technology will be made available to the masses without the need…

New Whitepaper: Separating Multi-Cloud Strategy from Hype

The content below is taken from the original ( New Whitepaper: Separating Multi-Cloud Strategy from Hype), to continue reading please visit the site. Remember to respect the Author & Copyright.

Is multi-cloud a strategy for avoiding vendor lock-in?

A 2017 RightScale survey* reported that 85% of enterprises have embraced a multi-cloud strategy. However, depending on whom you ask, multi-cloud is either an essential enterprise strategy or a nonsense buzzword.

Part of the reason for such opposing views is that we lack a complete definition of multi-cloud.

What is multi-cloud? There is little controversy in stating that multi-cloud is “the simultaneous use of multiple cloud vendors,” but to what end, exactly? Many articles superficially claim that multi-cloud is a strategy for avoiding vendor lock-in, for implementing high availability, for allowing teams to deploy the best platform for their app, and the list goes on.

But where can teams really derive the most benefit from a multi-cloud strategy? Without any substance to these claims, it can be difficult to determine if multi-cloud can live past its 15 minutes of fame.

Is multi-cloud a strategy for avoiding vendor lock-in?

Of the many benefits associated with multi-cloud, avoiding vendor lock-in is probably the most cited reason for a multi-cloud strategy. In a recent Stratoscale survey, more than 80% of enterprises reported moderate to high levels of concern about being locked into a single public cloud platform.

How you see vendor lock-in depends on your organization’s goals. For some companies, avoiding vendor lock-in is a core business requirement or a way to achieve greater portability for their applications. With such portability, teams can more easily move applications to another framework or platform. For others, being able to take advantage of vendor-specific features that save time on initial development is an acceptable trade-off for portability. Regardless of your point of view, a strategy that avoids vendor lock-in at all costs does mean that you will have to give up some unique vendor functionality.

In most cases, teams can still avoid vendor lock-in even without using multiple cloud providers. But how?

The key to staying flexible even within a single platform is about the choices you make. Building in degrees of tolerance and applying disciplined design decisions as a matter of strategy can ensure flexibility and portability down the road.

With this in mind, teams can work to abstract away vendor-specific functionality. Here are two simple examples:

  • Code level: Accessing functionality such as blob storage through an interface that could be implemented using any storage back-end (local storage, S3, Azure Storage, Google Cloud Storage, among other options). In addition to the flexibility this provides during testing, this tactic makes it easier for developers to port to a new platform if needed; a minimal sketch follows this list.
  • Containers: Containers and their orchestration tools are additional abstraction layers that can make workloads more flexible and portable.
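
To make the code-level abstraction concrete, here is a minimal Python sketch. The BlobStore interface and the LocalBlobStore and S3BlobStore classes are illustrative names of our own (they are not from the whitepaper); the point is simply that application code depends only on the interface, so swapping the storage back-end does not touch business logic.

# Minimal sketch of a vendor-neutral blob-storage interface (illustrative names; boto3 is an assumed dependency).
from abc import ABC, abstractmethod
from pathlib import Path

class BlobStore(ABC):
    """Interface the application codes against; back-ends are interchangeable."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class LocalBlobStore(BlobStore):
    """Filesystem back-end, handy for tests and local development."""

    def __init__(self, root: str) -> None:
        self.root = Path(root)

    def put(self, key: str, data: bytes) -> None:
        path = self.root / key
        path.parent.mkdir(parents=True, exist_ok=True)  # create intermediate folders for keys like "reports/1.json"
        path.write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()

class S3BlobStore(BlobStore):
    """Amazon S3 back-end; an Azure Storage or Google Cloud Storage class would look the same from the outside."""

    def __init__(self, bucket: str) -> None:
        import boto3  # only needed when this back-end is selected
        self.bucket = bucket
        self.client = boto3.client("s3")

    def put(self, key: str, data: bytes) -> None:
        self.client.put_object(Bucket=self.bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self.client.get_object(Bucket=self.bucket, Key=key)["Body"].read()

def archive_report(store: BlobStore, report_id: str, content: bytes) -> None:
    """Application code sees only the interface, never a vendor SDK."""
    store.put(f"reports/{report_id}.json", content)

Swapping LocalBlobStore("/tmp/blobs") for S3BlobStore("my-bucket") (or an Azure equivalent) then becomes a one-line configuration change rather than a rewrite, and tests can run against the local back-end.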

Any technology decision represents some degree of lock-in, so organizations must weigh the pros and cons of depending too heavily on any single platform or set of tools.

So, is multi-cloud really an effective strategy for avoiding vendor lock-in?

The bottom line is this: A multi-cloud strategy can help you avoid vendor lock-in, but it isn’t a requirement.

Implementing high availability and pursuing a best-fit technology approach are also frequently cited as benefits of a multi-cloud strategy. But how do these hold up when it comes to real deployments and actual business cases?

This is just one of the questions that we’ll answer in our new whitepaper, Separating Multi-Cloud Strategy from Hype: An Objective Analysis of Arguments in Favor of Multi-Cloud.

You will learn:

  • The reality vs. hype of multi-cloud deployments
  • How to achieve high availability while avoiding vendor lock-in
  • The advantages of a best-fit technology approach
  • The arguments that should be driving your multi-cloud strategy

Discover the best approach for your multi-cloud strategy in our new whitepaper; download it now.

References: RightScale 2017 State of the Cloud Report | 2017 Stratoscale Hybrid Cloud Survey

Using Docker Machine with Azure

The content below is taken from the original ( Using Docker Machine with Azure), to continue reading please visit the site. Remember to respect the Author & Copyright.

I’ve written about using Docker Machine with a number of different providers, such as with AWS, with OpenStack, and even with a local KVM/Libvirt daemon. In this post, I’ll expand that series to show using Docker Machine with Azure. (This is a follow-up to my earlier post on experimenting with Azure.)

As with most of the other Docker Machine providers, using Docker Machine with Azure is reasonably straightforward. Run docker-machine create -d azure --help to get an idea of some of the parameters you can use when creating VMs on Azure using Docker Machine. A full list of the various parameters and options for the Azure driver is also available.

The only required parameter is --azure-subscription-id, which specifies your Azure subscription ID. If you don’t know this, or want to obtain it programmatically, you can use this Azure CLI command:

az account show --query "id" -o tsv

If you have more than one subscription, you’ll probably need to modify this command to filter it down to the specific subscription you want to use.

Additional parameters that you can supply include (but aren’t limited to):

  • Use the --azure-image parameter to specify the VM image you’d like to use. By default, the Azure driver uses Ubuntu 16.04.
  • By default, the Azure driver launches a Standard_A2 VM. If you’d like to use a different size, just supply the --azure-size parameter.
  • The --azure-location parameter lets you specify an Azure region other than the default, which is “westus”.
  • You can specify a non-default resource group (the default value is “docker-machine”) by using the --azure-resource-group parameter.
  • The Azure driver defaults to a username of “docker-user”; use the --azure-ssh-user parameter to specify a different name.
  • You can customize networking configurations using the --azure-subnet-prefix, --azure-subnet, and --azure-vnet options. Default values for these options are 192.168.0.0/16, “docker-machine”, and “docker-machine”, respectively.

So what would a complete command look like? Using Bash command substitution to supply the Azure subscription ID, a sample command might look like this:

docker-machine create -d azure \
--azure-subscription-id $(az account show --query "id" -o tsv) \
--azure-location westus2 \
--azure-ssh-user ubuntu \
--azure-size "Standard_B1ms" \
dm-azure-test

This would create an Azure VM named “dm-azure-test”, based on the (default) Ubuntu 16.04 LTS image, in the “westus2” Azure region and using a username of “ubuntu”. Once the VM is running and responding across the network, Docker Machine will provision and configure Docker Engine on the VM.

Once the VM is up, all the same docker-machine commands are available:

  • docker-machine ls will list all configured machines (systems managed via Docker Machine); this is across all supported Docker Machine providers
  • docker-machine ssh <name> to establish an SSH connection to the VM
  • eval $(docker-machine env <name>) to establish a Docker configuration pointing to the remote VM (this would allow you to use a local Docker client to communicate with the remote Docker Engine instance)
  • docker-machine stop <name> stops the VM (which can be restarted using docker-machine start <name>, naturally)
  • docker-machine rm <name> deletes the VM

Clearly, there’s more available, but this should be enough to get most folks rolling.

If I’ve missed something (or gotten it incorrect), please hit me up on Twitter. I’ll happily make corrections where applicable.

Glucose-tracking smart contact lens is comfortable enough to wear

The content below is taken from the original ( Glucose-tracking smart contact lens is comfortable enough to wear), to continue reading please visit the site. Remember to respect the Author & Copyright.

The concept of a smart contact lens has been around for a while. To date, though, they haven't been all that comfortable: they tend to have electronics built into hard substrates that make for a lens which can distort your vision, break down and othe…

ITSM Connector for Azure is now generally available

The content below is taken from the original ( ITSM Connector for Azure is now generally available), to continue reading please visit the site. Remember to respect the Author & Copyright.

This post is also authored by Kiran Madnani, Principal PM Manager, Azure Infrastructure Management and Snehith Muvva, Program Manager II, Azure Infrastructure Management.

We are happy to announce that the IT Service Management Connector (ITSMC) for Azure is now generally available. ITSMC provides bi-directional integration between Azure monitoring tools and your ITSM tools – ServiceNow, Provance, Cherwell, and System Center Service Manager.

Customers use Azure monitoring tools to identify, analyze and troubleshoot issues. However, the work items related to an issue are typically stored in an ITSM tool. Instead of having to go back and forth between your ITSM tool and Azure monitoring tools, customers can now get all the information they need in one place. ITSMC will improve the troubleshooting experience and reduce the time it takes to resolve issues. Specifically, you can use ITSMC to:

  1. Create or update work-items (Event, Alert, Incident) in the ITSM tools based on Azure alerts (Activity Log Alerts, Near Real-Time metric alerts and Log Analytics alerts)
  2. Pull the Incident and Change Request data from ITSM tools into Azure Log Analytics.

You can set up ITSMC by following the steps in our documentation. Once set up, you can send Azure alerts to your ITSM tool using the ITSM action in Action groups.

You can also view your incident and change request data in Log Analytics to perform trend analysis or correlate it against operational data.

To learn about pricing, visit our pricing page. We are excited to launch the ITSM Connector and look forward to your feedback.

How to use the new Files Restore feature in OneDrive for Business

The content below is taken from the original ( How to use the new Files Restore feature in OneDrive for Business), to continue reading please visit the site. Remember to respect the Author & Copyright.

The OneDrive team at Microsoft just announced a useful new feature for OneDrive for Business users. This feature is called Files Restore. Sometimes, when we are handling large-capacity cloud storage, there are chances that we may mess […]

This post How to use the new Files Restore feature in OneDrive for Business is from TheWindowsClub.com.

Acronis Releases a Free, AI-based Ransomware Protection Tool

The content below is taken from the original ( Acronis Releases a Free, AI-based Ransomware Protection Tool), to continue reading please visit the site. Remember to respect the Author & Copyright.

Acronis, a global leader in hybrid cloud data protection and storage, today released Acronis Ransomware Protection, a free, stand-alone version of… Read more at VMblog.com.

Windows 10 can now show you all the data it’s sending back to Microsoft

The content below is taken from the original ( Windows 10 can now show you all the data it’s sending back to Microsoft), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft’s and its partners’ engineers use the telemetry data from Windows 10 to diagnose crashes, learn about its users’ hardware configurations and more. It’s on by default, and while Microsoft tells you that it collects this data and gives you a choice between basic (the default setting) and “full” diagnostics, it never allowed you to actually see exactly what…

Someone’s Made The Laptop Clive Sinclair Never Built

The content below is taken from the original ( Someone’s Made The Laptop Clive Sinclair Never Built), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Sinclair ZX Spectrum was one of the big players in the 8-bit home computing scene of the 1980s, and decades later it sports one of the most active of all the retrocomputing communities. There is a thriving demo scene on the platform, there are new games being released, and there is even new Spectrum hardware coming to market.

One of the most interesting pieces of hardware is the ZX Spectrum Next, a Spectrum motherboard with the original hardware and many enhancements implemented on an FPGA. It has an array of modern interfaces, a megabyte of RAM compared to the 48k of the most common original, and a port allowing the connection of a Raspberry Pi Zero for off-board processing. Coupled with a rather attractive case from the designer of the original Sinclair models, it has become something of an object of desire. But it’s still an all-in-one desktop unit like the original; they haven’t made a portable. [Dan Birch] has changed all that, with his extremely well designed Spectrum Next laptop.

He started with a beautiful CAD design for a case redolent of the 1990s HP Omnibook style of laptop, but with some Spectrum Next styling cues. This was sent to Shapeways for printing, and came back looking particularly well-built. Into the case went an LCD panel and controller for the Next’s HDMI port, a Raspberry Pi, a USB hub, a USB-to-PS/2 converter, and a slimline USB keyboard. Unfortunately there does not seem to be a battery included, though we’re sure that with a bit of ingenuity some space could be found for one.

The result is about as good a Spectrum laptop as it might be possible to create, and certainly as good as anything Sinclair or Amstrad might have made had the 8-bit micro somehow survived into an alternative fantasy version of the 1990s, with market conditions to put it into the form factor of a high-end compact laptop. The case design would do any home-made laptop proud as a basis; we can only urge him to consider releasing some files.

There is a video of the machine in action, which we’ve placed below the break.

We’ve never brought you a laptop with a Spectrum main board before, but we have brought you a recreated Sinclair in the form of this modern-day ZX80.

Quantum Computing Hardware Teardown

The content below is taken from the original ( Quantum Computing Hardware Teardown), to continue reading please visit the site. Remember to respect the Author & Copyright.

Although quantum computing is still in its infancy, enough progress is being made for it to look a little more promising than other “revolutionary” technologies, like fusion power or flying cars. IBM, Intel, and Google all either operate or are producing double-digit qubit computers right now, and there are plans for even larger quantum computers in the future. With this much momentum, our quantum computing revolution seems almost certain.

There’s still a lot of work to be done, though, before all of our encryption is rendered moot by these new devices. Since nothing is easy (or intuitive) at the quantum level, progress has been considerably slower than it was during the transistor revolution of the previous century. These computers work because of two phenomena: superposition and entanglement. A quantum bit, or qubit, works because unlike a transistor it can exist in multiple states at once, rather than just “zero” or “one”. These states are difficult to determine because in general a qubit is built using a single atom. Adding to the complexity, quantum computers must utilize quantum entanglement too, whereby a pair of particles are linked. This is the only way for any hardware to “observe” the state of the computer without affecting any qubits themselves. In fact, the observations often don’t yet have the highest accuracy themselves.

There are some other challenges with the hardware as well. All quantum computers that exist today must be cooled to a temperature very close to absolute zero in order to take advantage of superconductivity. Whether this is because of a reduction in thermal noise, as is the case with universal quantum computers based on ion traps or other technology, or because it is possible to take advantage of other interesting characteristics of superconductivity like the D-Wave computers do, all of them must be cooled to a critical temperature. A further challenge is that even at these low temperatures, the qubits still interact with each other and their read/write devices in unpredictable ways that get more unpredictable as the number of qubits scales up.

So, once the physics and the refrigeration are sorted out, let’s take a look at how a few of the quantum computing technologies actually manipulate these quantum curiosities to come up with working, programmable computers.

Wire Loops and Josephson Junctions

Arguably the most successful commercial application of a quantum computer so far has been from D-Wave. While these computers don’t have “fully-programmable” qubits, they are still more effective at solving certain kinds of optimization problems than traditional computers. Since they don’t have the same functionality as a “universal” quantum computer, it has been easier for the company to get more qubits on a working computer.

The underlying principle behind the D-Wave computer is a process known as quantum annealing. Basically, the qubits are set to a certain energy state and are then let loose to return to their lowest possible energy state. This can be imagined as a sort of quantum Traveling Salesman problem, and indeed that is exactly how the quantum computer can solve optimization problems. D-Wave hardware works by using superconducting wire loops, each with a weakly-insulating Josephson junction, to store data via small magnetic fields. With this configuration, the qubit achieves superposition because the electrons in the wire loop can flow both directions simultaneously, where the current flow creates the magnetic field. Since the current flow is a superposition of both directions, the magnetic field it produces is also a superposition of “up” and “down”. There is a tunable coupling element at each qubit’s location on the chip which is what the magnetic fields interact with and is used to physically program the processor and control how the qubits interact with each other.
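
As a purely classical illustration (this is not D-Wave code), the short Python sketch below brute-forces the lowest-energy configuration of a tiny Ising model: each spin is +1 or -1, and the energy is a sum of per-spin biases plus pairwise couplings. Finding the minimum-energy spin configuration is exactly the kind of optimization a quantum annealer is built for; the bias and coupling values here are made up for the example, and brute force only works because there are just three spins.

# Brute-force search over a tiny Ising model: E(s) = sum_i h[i]*s[i] + sum_{i<j} J[i,j]*s[i]*s[j],
# with each spin s[i] in {-1, +1}. An annealer tackles the same minimization for problem
# sizes where enumerating all 2^N states is hopeless.
from itertools import product

h = {0: 0.5, 1: -0.3, 2: 0.2}                  # per-spin biases (made up for the example)
J = {(0, 1): -1.0, (1, 2): 0.8, (0, 2): 0.4}   # pairwise couplings (made up for the example)

def energy(spins):
    e = sum(h[i] * s for i, s in enumerate(spins))
    e += sum(coupling * spins[i] * spins[j] for (i, j), coupling in J.items())
    return e

best = min(product((-1, +1), repeat=len(h)), key=energy)
print(f"lowest-energy spin configuration: {best}, energy = {energy(best):.2f}")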

Because the D-Wave computer isn’t considered a universal quantum computer, the processing power per qubit is not equivalent to what would be found in a universal quantum computer. Current D-Wave computers have 2048 qubits which, if they were truly universal, would have mind-numbing implications. Additionally, it’s still not fully understood whether the D-Wave computer exhibits true quantum speedup, but presumably companies such as Lockheed Martin wouldn’t have purchased them (repeatedly) if there wasn’t utility.

There are ways to build universal quantum computers, though. Essentially all that is needed is something that exhibits quantum effects and that can be manipulated by an external force. For example, one idea that has been floated involves using impurities found in diamonds. For now, though, we will focus on two major approaches that scientists have used to build successful quantum computers: ion traps and semiconductors.

Ion Traps

In an ion trap, a qubit is created by ionizing an atom of some sort. This can be done in many ways, but this method using calcium ions implemented by the University of Oxford involves heating up a sample, shooting electrons at it, and trapping some of the charged ions for use in the computer. From there, the ion can be cooled to the required temperature using a laser. The laser’s wavelength is specifically chosen to resonate with the ion in such a way that the ion slows down to the point that its thermal fluctuations no longer impact its magnetic properties. The laser is also used to impart a specific magnetic field to the ion which is how the qubit is “programmed”. Once the operation is complete, the laser is again used to probe the ion and determine its state.

The problem of scalability immediately rears its head in this example, though. In order to have a large number of qubits, a large number of ions need to be trapped and simultaneously manipulated by a series of lasers. The fact that the qubits can influence each other adds to the problem, although this property can also be exploited to help read information out of the system. For reasons of complexity, it seems that the future of the universal quantum computer may be found in something we are all familiar with: silicon.

Semiconductors

Silicon, in its natural state, is actually an effective insulator. Silicon has four valence electrons which are all perfectly content to stay confined to a single nucleus which means there is no flow of charge, and therefore no current flow. To make something useful out of silicon like a diode or transistor which can conduct electricity in specific ways, silicon manufacturers infuse impurities in the silicon, usually boron or phosphorous atoms. This process of introducing impurities is called “doping” and imbues the silicon with an excess or deficit of electrons in the outer shells, which means that now there are charges present in the silicon lattice. These charges can be manipulated for all of the wonderful effects that we use to create our modern world.

But we can take this process of doping one step further. Rather than introducing a lot of impurities in the silicon, scientists have found a way to put a single impurity, a solitary phosphorus atom including its outermost electron, in a device that resembles a field-effect transistor. Using the familiar and well-understood behavior of these transistors, the single impurity becomes the qubit.

In this system, a large external magnetic field is applied in order to ensure that the electron is in a particular spin state. This is how the qubit is set. From there, the transistor can be used to read the state of this single electron. If the electron is in the “up” position, it will have enough energy to move out of the transistor and the device can register the remaining positive charge of the atom. If it is in the “down” position it will still be inside the transistor and the device will see a negative charge from the electron.

These (and some other) methods have allowed researchers to achieve long coherence times within the qubit — essentially the amount of time that the qubit is in a relevant state before it decays and is no longer useful. In ion traps, this time is on the order of nano- or microseconds. In this semiconductor type, the time is on the order of seconds, which is an eternity in the world of quantum computing. If this progress keeps up, quantum computers may actually be commonplace within the next decade. And we’ll just have to figure out how to use them.

Top 9 Frequently Asked Questions About Ripple and XRP

The content below is taken from the original ( Top 9 Frequently Asked Questions About Ripple and XRP), to continue reading please visit the site. Remember to respect the Author & Copyright.

Market interest in Ripple and XRP has reached a fever pitch, and naturally, people have questions about the company, the digital asset, how it’s used and where to buy it.

In order to clear up any misconceptions about Ripple and XRP, we’ve published answers to nine of the most frequently asked questions that the Ripple team has received. This list will be updated regularly as news and new developments unfold.

1. How do I buy XRP?
XRP is available for purchase on more than 60 digital asset exchanges worldwide, many of which are listed on this page. Please note that Ripple does not endorse, recommend, or make any representations with respect to the gateways and exchanges that appear on that page. Every exchange has a different process for purchasing XRP.

If you’ve already purchased XRP and have a question about your purchase, then please reach out to the exchange directly. In order to maintain healthy XRP markets, it’s a top priority for Ripple to have XRP listed on top digital asset exchanges, making it broadly accessible worldwide. Ripple has dedicated resources to the initiative so you can expect ongoing progress toward creating global liquidity.

2. What is the difference between XRP, XRP Ledger, and Ripple?
XRP is the digital asset native to XRP Ledger. The XRP Ledger is an open-source, distributed ledger. Ripple is a privately held company.

3. How many financial institutions have adopted XRP?
As of January 2018, MoneyGram and Cuallix — two major payment providers — have publicly announced their pilot use of XRP in payment flows through xRapid to provide liquidity solutions for their cross-border payments. Ripple has a growing pipeline of financial institutions that are also interested in using XRP in their payment flows.

4. How secure is XRP? Do I have to use exchanges?
The XRP Ledger is where XRP transactions occur and are recorded. The software that maintains the Ledger is open source and executes continually on a distributed network of servers operated by a variety of organizations. The open-source code base is actively developed and maintained. Since the XRP Ledger’s inception, we’ve worked to make the Ledger more resilient and resistant to a single point of failure by decentralizing it, a process that continues today.

To purchase XRP you must use an exchange or gateway and/or have a digital wallet. Ripple does not endorse, recommend, or make any representations with respect to gateways, exchanges, or wallets, but please see the list of exchanges that offer XRP here.

5. Is the XRP Ledger centralized?
This is a top misconception with the XRP Ledger. Centralization implies that a single entity controls the Ledger. While Ripple contributes to the open-source code of the XRP Ledger, we don’t own, control, or administer the XRP Ledger. The XRP Ledger is decentralized. If Ripple ceased to exist, the XRP Ledger would continue to exist.

Ripple has an interest in supporting the XRP Ledger for several reasons, including contributing to the longer-term strategy to encourage the use of XRP as a liquidity tool for financial institutions. Decentralization of the XRP Ledger is an ongoing process that started right at its inception. In May 2017, we publicly shared our decentralization strategy.

First, we announced plans to continue to diversify validators on the XRP Ledger, which we expanded to 55 validator nodes in July 2017. We also shared plans to add attested validators to Unique Node Lists (UNLs), and announced that over the course of 2017 and 2018, for every two attested third-party validating nodes that meet the objective criteria mentioned above, we will remove one validating node operated by Ripple, until no entity operates a majority of trusted nodes on the XRP Ledger.

We believe these efforts will increase the XRP Ledger’s enterprise-grade resiliency and robustness, leading to XRP’s continued adoption as the best digital asset for payments.

6. Which wallet should I use?
Ripple does not endorse, recommend, or make any representations with respect to digital wallets. It’s advisable to always conduct your own due diligence before trusting money to any third party or third-party technology.

7. Does the price volatility of XRP impact whether financial institutions adopt xRapid?
No. Ripple has a stable cache of financial institutions that are interested in piloting xRapid. Financial institutions who use xRapid don’t need to hold XRP for an extended period of time. What’s more, XRP settles in three to five seconds, which means financial institutions are exposed to limited volatility during the course of the transaction.

8. Can Ripple freeze XRP transactions? Are they able to view or monitor transactions?
No one can freeze XRP, including Ripple. All transactions on XRP Ledger are publicly viewable.

9. Can Ripple create more XRP?
No. Ripple the company didn’t create XRP; 100 billion XRP was created before the company was formed, and after Ripple was founded, the creators of XRP gifted a substantial amount of XRP to the company.

The post Top 9 Frequently Asked Questions About Ripple and XRP appeared first on Ripple.

html-tidy (5.6.0)

The content below is taken from the original ( html-tidy (5.6.0)), to continue reading please visit the site. Remember to respect the Author & Copyright.

Tidy is a console application for Mac OS X, Linux, Windows, UNIX, and more. It corrects and cleans up HTML and XML documents by fixing markup errors and upgrading legacy code to modern standards.

External monitor not detected with Windows 10 laptop

The content below is taken from the original ( External monitor not detected with Windows 10 laptop), to continue reading please visit the site. Remember to respect the Author & Copyright.

If the external monitor is not working with your Windows 10 laptop or your Windows 10 PC is not detecting the second monitor, here are some solutions which may help you troubleshoot this problem.

Laptop external monitor not detected

Before […]

This post External monitor not detected with Windows 10 laptop is from TheWindowsClub.com.

Fitbit’s Xbox coaching app helps you work out between games

The content below is taken from the original ( Fitbit’s Xbox coaching app helps you work out between games), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you're a gamer, you know it can be difficult to tear yourself away from the screen to get in some exercise. Fitbit, however, doesn't think you have to. It's trotting out an Xbox One version of its Coach app (released on mobile and PCs in the fal…

Difference between PowerShell and PowerShell Core

The content below is taken from the original ( Difference between PowerShell and PowerShell Core), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft has released PowerShell Core, a new version of PowerShell. This new version of PowerShell is available on all the major computing platforms including Windows, Linux, and macOS. Well, the latest version of Windows 10 comes out of the box […]

This post Difference between PowerShell and PowerShell Core is from TheWindowsClub.com.

Curve’s payment-switching smart card goes live in the UK

The content below is taken from the original ( Curve’s payment-switching smart card goes live in the UK), to continue reading please visit the site. Remember to respect the Author & Copyright.

Like the thought of switching payment methods for a purchase long after you’ve left the store? You now have a chance to try it. Curve has launched its smart card in the UK, letting you not only consolidate your credit cards (currently just Masterca…

Amazon launches autoscaling service on AWS

The content below is taken from the original ( Amazon launches autoscaling service on AWS), to continue reading please visit the site. Remember to respect the Author & Copyright.

One of the primary advantages of cloud computing has always been the ability to scale up to meet short-term needs and scale back when that need has been met. In other words, you don’t have to pay for infrastructure that you don’t need to hedge against heavy usage. That has mostly meant server capacity, but over time it has been applied to other cloud services. Now, developers…

Google tool lets you train AI without writing code

The content below is taken from the original ( Google tool lets you train AI without writing code), to continue reading please visit the site. Remember to respect the Author & Copyright.

In many ways, the biggest challenge in widening the adoption of AI isn’t making it better — it’s making the tech accessible to more companies. You typically need at least some programming to train a machine learning system, which rules it out for co…

The best mobile photo-editing apps

The content below is taken from the original ( The best mobile photo-editing apps), to continue reading please visit the site. Remember to respect the Author & Copyright.

There's no shortage of photo-editing apps for mobile devices. But if you want to graduate beyond Instagram filters, the sheer number of listings on the App Store or Google Play can be overwhelming. We've sifted through dozens to find the ones worth y…

Now Open – Third AWS Availability Zone in London

The content below is taken from the original ( Now Open – Third AWS Availability Zone in London), to continue reading please visit the site. Remember to respect the Author & Copyright.

We expand AWS by picking a geographic area (which we call a Region) and then building multiple, isolated Availability Zones in that area. Each Availability Zone (AZ) has multiple Internet connections and power connections to multiple grids.

Today I am happy to announce that we are opening our 50th AWS Availability Zone, with the addition of a third AZ to the EU (London) Region. This will give you additional flexibility to architect highly scalable, fault-tolerant applications that run across multiple AZs in the UK.

Since launching the EU (London) Region, we have seen an ever-growing set of customers, particularly in the public sector and in regulated industries, use AWS for new and innovative applications. Here are a couple of examples, courtesy of my AWS colleagues in the UK:

Enterprise – Some of the UK’s most respected enterprises are using AWS to transform their businesses, including BBC, BT, Deloitte, and Travis Perkins. Travis Perkins is one of the largest suppliers of building materials in the UK and is implementing the biggest systems and business change in its history, including an all-in migration of its data centers to AWS.

Startups – Cross-border payments company Currencycloud has migrated its entire payments production and demo platform to AWS, resulting in a 30% saving on its infrastructure costs. Clearscore, with plans to disrupt the credit score industry, has also chosen to host its entire platform on AWS. UnderwriteMe is using the EU (London) Region to offer an underwriting platform to its customers as a managed service.

Public Sector – The Met Office chose AWS to support the Met Office Weather App, available for iPhone and Android phones. Since the Met Office Weather App went live in January 2016, it has attracted more than half a million users. Using AWS, the Met Office has been able to increase agility, speed, and scalability while reducing costs. The Driver and Vehicle Licensing Agency (DVLA) is using the EU (London) Region for services such as the Strategic Card Payments platform, which helps the agency achieve PCI DSS compliance.

The AWS EU (London) Region has achieved Public Services Network (PSN) assurance, which provides UK Public Sector customers with an assured infrastructure on which to build UK Public Sector services. In conjunction with AWS’s Standardized Architecture for UK-OFFICIAL, PSN assurance enables UK Public Sector organizations to move their UK-OFFICIAL classified data to the EU (London) Region in a controlled and risk-managed manner.

For a complete list of AWS Regions and Services, visit the AWS Global Infrastructure page. As always, pricing for services in the Region can be found on the detail pages; visit our Cloud Products page to get started.

Jeff;

Open Banking is here to change how you manage your money

The content below is taken from the original ( Open Banking is here to change how you manage your money), to continue reading please visit the site. Remember to respect the Author & Copyright.

After completing a review of the retail banking sector back in the summer of 2016, the UK Competition and Markets Authority (CMA) concluded that stagnation had set in. It found that hardly anyone switches banks each year, and the huge financial insti…

Top 4 Disaster Risks for SMBs

The content below is taken from the original ( Top 4 Disaster Risks for SMBs), to continue reading please visit the site. Remember to respect the Author & Copyright.

Having a disaster recovery (DR) plan is essential for businesses of all sizes. However, it’s especially important for small and medium-sized businesses.

For a large business or an enterprise, a disaster is certainly a serious event. However, they have the reserves and the resources to weather outages – even extended outages – and then resume operations. That’s not always the case for smaller and medium-sized businesses. For many smaller and medium-sized operations, an extended outage can be a catastrophic event from which they can never recover.

Unfortunately, preparing for these types of events is something that many small businesses overlook or simply don’t get around to discussing. Let’s look at four of the biggest disaster risks that are faced by small and medium-sized businesses.

  1. Physical disasters – Events like fires, hurricanes, tornados, and storms can physically damage your place of business or your inventory, costing thousands of dollars in losses. These types of disasters almost always result in site-wide damage. Smaller businesses with only one or two locations are especially vulnerable because this can completely disrupt the ability to do business. According to the Federal Emergency Management Agency (FEMA), almost 40 percent of small businesses permanently close after a disaster. To protect your business from a potentially devastating property loss, it’s important to ensure that you have a DR plan in place as well as adequate site insurance coverage. As part of your DR plan, it is essential that files and data are backed up and that at least one copy of the backup is stored off-site or in the cloud. Site protection insurance can help smaller businesses shoulder the costs of repairing and restoring their primary place of operations.
  2. Hardware Failure – The most common disruptive disaster that hits almost all small and medium-sized businesses is a hardware failure. Hardware failure results in downtime and often the loss of income and productivity — all of which can have a huge impact on a small or medium-sized business. StorageCraft research showed that 99% of small to medium-sized businesses had experienced a hardware failure. Further, 80.9% of those hardware failures were hard drive failures. Here again, to minimize possible data loss and ensure minimal downtime, it’s essential to have a backup plan so you can recover your critical data following these types of common hardware failures.
  3. Malware and cyberattacks – Another serious risk for smaller and medium-sized businesses is the risk of malware and cyberattacks. Malware infections and cyberattacks can result in a number of different business disruptions including data theft, data corruption, and data deletion, which can seriously impact business operations – even causing business shutdowns in some cases. One of the most important ways to protect your business from malware and cyberattacks is to keep your systems patched with the most current security updates. According to Gartner, 99% of vulnerabilities exploited are known to security and IT professionals for at least one year. Unpatched systems are the openings that most of these exploits hit. Keeping your systems patched and implementing firewalls and anti-virus programs will go a long way toward safeguarding your business from malware and cyberattacks. A tested backup plan can ensure that you can restore your data to a point before it was compromised.
  4. The unexpected loss of key personnel – Finally, one of the biggest risks that small businesses face is the loss of key personnel. Unlike large businesses and enterprises, where there are almost always people available who can step in in the event of the unexpected loss of key personnel, that isn’t usually the case for smaller businesses. The illness or death of an employee crucial to the functioning of your business could end day-to-day operations in a smaller business. While you can’t always completely protect against this, cross-training for key operations can help. You can also consider taking out key employee insurance for anyone the business can’t do without.

The post Top 4 Disaster Risks for SMBs appeared first on Petri.