RISC OS on youtube

The content below is taken from the original ( RISC OS on youtube), to continue reading please visit the site. Remember to respect the Author & Copyright.

The video sharing site youtube is always an interesting place to find tutorials and see what other people are up to. There are some nice tutorials on how to update your RaspberryPi software, requests for help and people showing off their systems. There is plenty going on – what is your favourite link?

The latest RISC OS content on youtube can be found here.


hub (2.8.3)

The content below is taken from the original ( hub (2.8.3)), to continue reading please visit the site. Remember to respect the Author & Copyright.

Command line wrapper for git that makes you better at GitHub.

12 Best Free Microsoft Store apps for Windows 10 – 2019

The content below is taken from the original ( 12 Best Free Microsoft Store apps for Windows 10 – 2019), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft Store has some excellent apps. While most of us still use desktop software, we have curated a list of the best free Microsoft Store apps for Windows 10. These apps range from learning tools to image editing to media servers […]

This post 12 Best Free Microsoft Store apps for Windows 10 – 2019 is from TheWindowsClub.com.

New HPE SGI 8600 14PF Jean Zay Platform Announced

The content below is taken from the original ( New HPE SGI 8600 14PF Jean Zay Platform Announced), to continue reading please visit the site. Remember to respect the Author & Copyright.

A new HPE SGI 8600 14PF HPC platform has been announced in France. The GENCI Jean Zay cluster uses direct liquid cooling, Intel Xeon Scalable processors, OPA, and Tesla V100 GPUs.

The post New HPE SGI 8600 14PF Jean Zay Platform Announced appeared first on ServeTheHome.

NGINX Has Modernized Full API Lifecycle Management

The content below is taken from the original ( NGINX Has Modernized Full API Lifecycle Management), to continue reading please visit the site. Remember to respect the Author & Copyright.

NGINX, Inc., the company behind the popular open source project and offering a suite of technologies designed to develop and deliver modern… Read more at VMblog.com.

Sly Guy Nabs Pi Spy

The content below is taken from the original ( Sly Guy Nabs Pi Spy), to continue reading please visit the site. Remember to respect the Author & Copyright.

When one of [Christian Haschek’s] co-workers found this Raspberry Pi tucked into their network closet, he figured it was another employee’s experiment – you know how that goes. But, of course, they did the safe thing and unplugged it from the network right away. The ensuing investigation into what it was doing there is a tour de force in digital forensics and a profile of a bungling adversary.

A quick check of everyone with access to that area turned up nothing, so [Christian] shifted focus to the device itself. There were three components: a Raspberry Pi model B, a 16GB SD card, and an odd USB dongle that turned out to be an nRF52832-MDK. The powerful SoC on-board combines a Cortex-M4 processor with the RF hardware for BLE, ANT, and other 2.4 GHz communications. In this case, it may have been used for sniffing WiFi or Bluetooth packets.

The next step was investigating an image of the SD card, which turned out to be a resin install (now called balena). This is an IoT web service that allows you to collect data from your devices remotely via a secure VPN. Digging deeper, [Christian] found a JSON config file containing a resin username. A little googling provided the address of a nearby person with the same name – but this could just be coincidence. More investigation revealed a copyright notice on some mysterious proprietary software installed on the Pi. The copyright holder? A company part-owned by the same person. Finally, [Christian] looked into a file called resin-wifi-01 and found the SSID that was used to set up the device. Searching this SSID on wigle.net turned up – you guessed it – the same home address found from the username.

But, how did this device get there in the first place? Checking DNS and Radius logs, [Christian] found evidence that an ex-employee with a key may have been in the building when the Pi was first seen on the network. With this evidence in hand, [Christian] turned the issue over to legal, who will now have plenty of ammunition to pursue the case.

If you find the opportunity to do some Linux forensics yourself, or are simply interested in learning more about it, this intro by [Bryan Cockfield] will get you started.

This Rechargeable Cap Uses UV Rays to Kill Germs in Your Water Bottle

The content below is taken from the original ( This Rechargeable Cap Uses UV Rays to Kill Germs in Your Water Bottle), to continue reading please visit the site. Remember to respect the Author & Copyright.

A reusable water bottle is an eco-friendly way to make sure you’ve always got liquid refreshment within reach. Unfortunately, it’s easy to forget that they’re also a great breeding ground for bacteria. Those […]

The post This Rechargeable Cap Uses UV Rays to Kill Germs in Your Water Bottle appeared first on Geek.com.

Get This Ethical Hacking Training Bundle for 90 Percent Off

The content below is taken from the original ( Get This Ethical Hacking Training Bundle for 90 Percent Off), to continue reading please visit the site. Remember to respect the Author & Copyright.

Network security professionals play a big role in the fight against hackers. Want a career that stands for something? Then train to be one of these heroes with the Become an Ethical Hacker […]

The post Get This Ethical Hacking Training Bundle for 90 Percent Off appeared first on Geek.com.

Azure VM Image Builder Makes Customization of ISO and Marketplace Images Easier

The content below is taken from the original ( Azure VM Image Builder Makes Customization of ISO and Marketplace Images Easier), to continue reading please visit the site. Remember to respect the Author & Copyright.


Azure VM Image Builder is a new tool for Microsoft’s cloud that lets you provision ISO or Azure Marketplace images with your own customizations, like security settings or installed software.

Again, I’m talking Linux in today’s Ask the Admin. Not because I’ve moved over to the dark side but because Microsoft says Azure VM Image Builder will be made available for Windows Server at some point in the future, so it’s interesting to talk about it today. And let’s face it, Linux is everywhere and in many cases it is the best choice.

Until now, if you wanted to customize an image deployed to a virtual machine in the Azure cloud, you’d have to perform some post-processing to make any changes. As I’ve shown you on Petri before, there are several ways of doing that, including Azure Automation DSC, Azure infrastructure-as-a-service JSON templates, or plain old PowerShell after the fact. None of these solutions is ideal, however, either because they are Windows-centric or because they don’t integrate properly into an image-building pipeline. Azure Resource Manager (ARM) JSON templates come somewhere close, but they are unique to Azure and far from a simple exercise.

For more information on Azure Automation DSC, see Introduction to Azure Automation Desired State Configuration and Getting Started with Azure Automation Desired State Configuration on the Petri IT Knowledgebase. And here is the first part of my series on deploying Active Directory with Certificate Services in Azure using infrastructure-as-code.

Azure VM Image Builder Private Preview

Back in September, Microsoft announced a private preview of Azure VM Image Builder, which you can register for here. Image Builder lets you provision Ubuntu 16.04 or 18.04 ISO or Marketplace images and then customize them using your own shell scripts without requiring any additional infrastructure or setup in the cloud. Image Builder is based on HashiCorp Packer, so you can also import existing Packer scripts. Once customizations have been specified, you choose where to store the image, either in an Azure Shared Image Gallery or as an Azure Managed Image.

In the preview, Microsoft is supporting the following features:

  • Migrating an existing image customization pipeline to Azure. Import your existing shell scripts or Packer shell provisioner scripts.
  • Migrating your Red Hat subscription to Azure using Red Hat Cloud Access. Automatically create Red Hat Enterprise Linux VMs with your eligible, unused Red Hat subscriptions.
  • Integration with Azure Shared Image Gallery for image management and distribution.
  • Integration with existing CI/CD pipeline. Simplify image customization as an integral part of your application build and release process.

If you have an existing tool for building images, you can call the Image Builder API to integrate into your current process. During the preview, Microsoft isn’t supporting updating of existing custom images, but it is on the roadmap. And apart from the need to pay for any storage you use, Image Builder is free for the duration of the preview.
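
To give a flavour of what calling the Image Builder API involves, here is a minimal, hedged sketch in Python that submits an image template through the Azure Resource Manager REST API. The resource type is Microsoft.VirtualMachineImages/imageTemplates, but the api-version and the exact template field names shown here are assumptions based on later documentation and may differ from the preview, so treat every value as a placeholder rather than a definitive implementation.

    import requests
    from azure.identity import DefaultAzureCredential  # assumes the azure-identity package

    # Placeholder identifiers - substitute your own values.
    subscription = "<subscription-id>"
    resource_group = "image-builder-rg"
    template_name = "ubuntu-custom-template"
    api_version = "2019-05-01-preview"  # assumption: check the current Image Builder REST reference

    # A minimal image template: start from an Ubuntu Marketplace image, run a shell
    # customizer, and distribute the result as a managed image.
    template = {
        "location": "westus2",
        "properties": {
            "source": {
                "type": "PlatformImage",
                "publisher": "Canonical",
                "offer": "UbuntuServer",
                "sku": "18.04-LTS",
                "version": "latest",
            },
            "customize": [
                {"type": "Shell", "name": "install-nginx",
                 "inline": ["sudo apt-get update", "sudo apt-get install -y nginx"]}
            ],
            "distribute": [
                {"type": "ManagedImage",
                 "imageId": f"/subscriptions/{subscription}/resourceGroups/{resource_group}"
                            f"/providers/Microsoft.Compute/images/ubuntu-custom",
                 "location": "westus2",
                 "runOutputName": "ubuntuCustomOutput"}
            ],
        },
    }

    # Acquire an ARM token and submit the template.
    token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
    url = (f"https://management.azure.com/subscriptions/{subscription}/resourceGroups/{resource_group}"
           f"/providers/Microsoft.VirtualMachineImages/imageTemplates/{template_name}"
           f"?api-version={api_version}")
    resp = requests.put(url, headers={"Authorization": f"Bearer {token}"}, json=template)
    print(resp.status_code, resp.text)

Creating the template resource only registers the definition; a separate run action (not shown here) actually builds and distributes the image.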

What is Packer?

Microsoft has based Azure VM Image Builder on HashiCorp Packer, which is an open source tool for creating identical images on different cloud platforms, meaning Packer scripts work on Azure as well as they do on Amazon. Packer uses Builders, Provisioners, and Post-Processors to create and provision custom images. Builders deploy images on different cloud platforms, like Azure and OpenStack. Provisioners configure VMs after they have booted, performing tasks like installing packages, patching the kernel, and creating users. A Provisioner might be a built-in technology, like PowerShell in the case of Windows Server, or a third-party tool like Puppet. Post-processors are optional and can be used to upload artifacts, re-package, or perform other tasks.
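
To make the Builder/Provisioner/Post-Processor split concrete, the sketch below writes out a minimal Packer template with one azure-arm Builder and one shell Provisioner (Post-Processors are omitted). The builder field names follow Packer's azure-arm documentation as I recall it, and every credential and naming value is a placeholder, so treat this as an illustration rather than a working pipeline.

    import json

    # Minimal Packer template: one azure-arm builder plus one shell provisioner.
    # All credential and naming values are placeholders.
    packer_template = {
        "builders": [
            {
                "type": "azure-arm",
                "client_id": "<service-principal-app-id>",
                "client_secret": "<service-principal-secret>",
                "tenant_id": "<tenant-id>",
                "subscription_id": "<subscription-id>",
                "managed_image_resource_group_name": "packer-images-rg",
                "managed_image_name": "ubuntu-18-04-custom",
                "os_type": "Linux",
                "image_publisher": "Canonical",
                "image_offer": "UbuntuServer",
                "image_sku": "18.04-LTS",
                "location": "West Europe",
                "vm_size": "Standard_DS2_v2",
            }
        ],
        "provisioners": [
            {
                "type": "shell",
                "inline": [
                    "sudo apt-get update",
                    "sudo apt-get install -y nginx",
                ],
            }
        ],
    }

    # Write the template to disk; build it with: packer build ubuntu.json
    with open("ubuntu.json", "w") as f:
        json.dump(packer_template, f, indent=2)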

Simplifying image customization and integrating an open source solution is a good move on Microsoft’s part. While Image Builder is unlikely to be a free service once it reaches general availability, it looks like it will be easier to use than trying to deploy and configure VMs using ARM templates. As soon as support for Windows Server is added, I will provide a more detailed look at Image Builder on Petri.

The post Azure VM Image Builder Makes Customization of ISO and Marketplace Images Easier appeared first on Petri.

World’s Oldest Periodic Table Found in Scotland

The content below is taken from the original ( World’s Oldest Periodic Table Found in Scotland), to continue reading please visit the site. Remember to respect the Author & Copyright.

This is one major #TBT. A periodic table, found while clearing out a laboratory at the University of St Andrews in Scotland, is believed to be the oldest in the world. Experts have dated […]

The post World’s Oldest Periodic Table Found in Scotland appeared first on Geek.com.

Find Specialty Subreddits With This Tool

The content below is taken from the original ( Find Specialty Subreddits With This Tool), to continue reading please visit the site. Remember to respect the Author & Copyright.

The cool thing about Reddit is that you can subscribe to just the subreddits you like, and ignore everything you don’t. The smaller, more specialized subreddits are the best, but they’re harder to find. The new tool sayit helps you find them.

Read more…

Scratch: Free interactive tool to learn computer programming

The content below is taken from the original ( Scratch: Free interactive tool to learn computer programming), to continue reading please visit the site. Remember to respect the Author & Copyright.

Are you starting to learn to code? The process might seem daunting, and you must have been advised to take small steps. But have you considered an interactive option before you dive into the technical concepts of programming? Scratch from MIT […]

This post Scratch: Free interactive tool to learn computer programming is from TheWindowsClub.com.

Azure IoT automatic device management helps deploying firmware updates at scale

The content below is taken from the original ( Azure IoT automatic device management helps deploying firmware updates at scale), to continue reading please visit the site. Remember to respect the Author & Copyright.

Automatic device management in Azure IoT Hub automates many of the repetitive and complex tasks of managing large device fleets over the entirety of their lifecycles. Since the feature shipped in June 2018, there has been a lot of interest in the firmware update use case. This blog article highlights some of the ways you can kickstart your own implementation.

Update the Azure IoT DevKit firmware over-the-air using automatic device management

The Azure IoT DevKit over-the-air (OTA) firmware update project is a great implementation of automatic device management. With automatic device management, you can target a set of devices based on their properties, define a desired configuration, and let IoT Hub update devices whenever they come into scope. This is performed using an automatic device configuration, which will also allow you to summarize completion and compliance, handle merging and conflicts, and roll out configurations in a phased approach. The Azure IoT DevKit implementation defines an automatic device configuration that specifies a collection of device twin desired properties related to the firmware version and image. It also specifies a set of useful metrics that are important for monitoring a deployment across a device fleet. The target condition can be specified based on device twin tags or device twin reported properties. The latter is particularly useful as it allows devices to self-report any prerequisites for the update.
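
To make the shape of such a configuration concrete, here is a small, hypothetical sketch of an automatic device configuration expressed as a Python dictionary: a target condition over device twin tags and reported properties, desired properties describing the firmware, and metric queries for monitoring the rollout. The top-level field names (targetCondition, content.deviceContent, metrics.queries) follow the IoT Hub configuration schema, but the firmware property names are made up for illustration and are not taken from the DevKit sample.

    import json

    # Hypothetical automatic device configuration for a firmware rollout.
    firmware_config = {
        "id": "firmware-update-1-0-1",
        # Target devices that self-report readiness for this update.
        "targetCondition": "tags.environment='production' AND properties.reported.devicemodel='MXChip'",
        "priority": 10,
        # Desired properties pushed to every device in scope.
        "content": {
            "deviceContent": {
                "properties.desired.firmware": {
                    "fwVersion": "1.0.1",
                    "fwPackageURI": "https://example.blob.core.windows.net/firmware/device-1.0.1.bin",
                    "fwPackageCheckValue": "<sha256-checksum>",
                }
            }
        },
        # Custom metrics summarise progress across the fleet.
        "metrics": {
            "queries": {
                "Downloading": "SELECT deviceId FROM devices WHERE properties.reported.firmware.fwUpdateStatus='downloading'",
                "Current": "SELECT deviceId FROM devices WHERE properties.reported.firmware.fwVersion='1.0.1'",
            }
        },
    }

    # The document could be submitted with the Azure CLI or an IoT Hub SDK;
    # here we simply print it for inspection.
    print(json.dumps(firmware_config, indent=2))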

OTA with Mongoose OS, an open source IoT Firmware Development Framework

In October 2018, our partner Cesanta announced support for automatic device management in Mongoose OS. Mongoose OS is an open source IoT Firmware Development Framework that is cross-platform and supports a variety of microcontrollers from top semiconductor companies. Mongoose OS provides reliable OTA updates, built-in flash encryption, and crypto chip support. It allows developers to have a quick and easy start with ready to go starter kits, solutions, libraries, and the option to code either in C or JavaScript.

“Mongoose OS is designed to simplify IoT firmware development for microcontrollers by helping developers to concentrate only on the specific device logic while taking care of all the heavy lifting: security, networking, device control and remote management, including over-the-air updates. By working with Microsoft Azure IoT, Mongoose OS streamlines connected product development and provides a ready-to-go integration,” says CTO and Co-Founder at Cesanta Sergey Lyubka.

Firmware update deployment for operators using Azure IoT Remote Monitoring

Most recently, we released support for automatic device management in Azure IoT Remote Monitoring. Expanding on the firmware update implementation for the Azure IoT DevKit, this solution accelerator shows how automatic device management can be utilized by an operator role, in particular how a group of devices can be targeted for deployment and how the deployment can be monitored through metrics.

Begone, Demon Internet: Vodafone to shutter old-school pioneer ISP

The content below is taken from the original ( Begone, Demon Internet: Vodafone to shutter old-school pioneer ISP), to continue reading please visit the site. Remember to respect the Author & Copyright.

It was still going?

Exclusive Vodafone has confirmed it will shutter Demon Broadband, the pioneering Iron Age ISP, as part of its network upgrade plans.…

Even bicycles have Alexa now

The content below is taken from the original ( Even bicycles have Alexa now), to continue reading please visit the site. Remember to respect the Author & Copyright.

When I first clapped eyes on the Cybic E-Legend, I thought: "A bicycle with Alexa? What's the point?" It felt like an utterly pointless addition to a pedal-powered two-wheeler, electric or otherwise. But as I waddled around the bike at CES, I started…

New Azure Migrate and Azure Site Recovery enhancements for cloud migration

The content below is taken from the original ( New Azure Migrate and Azure Site Recovery enhancements for cloud migration), to continue reading please visit the site. Remember to respect the Author & Copyright.

We are continuously enhancing our offerings to help you in your digital transformation journey to the cloud. You can read more about these offerings in the blog, “Three reasons why Windows Server and SQL Server customers continue to choose Azure.” In this blog, we will go over some of the new features added to Microsoft Azure Migrate and Azure Site Recovery that will help you in your lift and shift migration journey to Azure.

Azure Migrate

Azure Migrate allows you to discover your on-premises environment and plan your migration to Azure. Based on popular demand, we have now enabled Azure Migrate in two new geographies: Azure Government and Europe. Support for other Azure geographies will be enabled in the future.

Below is the list of regions within the Azure geographies where the discovery and assessment metadata is stored.

  • United States: West Central US, East US
  • Europe: North Europe, West Europe
  • Azure Government: U.S. Gov Virginia

When you create a migration project in the Azure portal, the region for metadata storage is randomly selected. For example, if you create a project in the United States, we will automatically select either West Central US or East US. If you need the metadata stored in a specific region within the geography, you can use our REST APIs to create the migration project and specify the region in the API request, as in the sketch below.
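
A request along the following lines creates the project through Azure Resource Manager. This is a hedged sketch: the Microsoft.Migrate resource path, api-version, and body shape are assumptions on my part, so verify them against the Azure Migrate REST API reference before relying on this.

    import requests
    from azure.identity import DefaultAzureCredential  # assumes the azure-identity package

    subscription = "<subscription-id>"
    resource_group = "migrate-rg"
    project_name = "contoso-migration"
    api_version = "2018-02-02"  # assumption: confirm against the published REST reference

    # Location controls where discovery and assessment metadata is stored,
    # e.g. "NorthEurope" to keep it in the Europe geography.
    body = {"location": "NorthEurope", "properties": {}}

    token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
    url = (f"https://management.azure.com/subscriptions/{subscription}"
           f"/resourceGroups/{resource_group}/providers/Microsoft.Migrate/projects/{project_name}"
           f"?api-version={api_version}")
    resp = requests.put(url, headers={"Authorization": f"Bearer {token}"}, json=body)
    print(resp.status_code, resp.text)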

Note, the geography selection does not restrict you from planning your migration for other Azure target regions. Azure Migrate allows you to specify more than 30 Azure target regions for migration planning. You can learn more by visiting our documentation, “Customize an assessment.”

Azure Site Recovery

Azure Site Recovery (ASR) helps you migrate your on-premises virtual machines (VMs) to IaaS VMs in Azure; this is the lift-and-shift migration. We are listening to your feedback and have recently made enhancements in ASR to make your migration journey even smoother. Below is the list of recent enhancements in ASR:

  • Support for physical servers with UEFI boot type: VMs with UEFI boot type are not supported in Azure. However, ASR allows you to migrate such on-premises Windows servers to Azure by converting the boot type of the on-premises servers to BIOS while migrating them. Previously, ASR supported conversion of boot type for only virtual machines. With the latest update, ASR now also supports migration of physical servers with UEFI boot type. The support is restricted to Windows machines only (Windows Server 2012 R2 and above).
  • Linux disk support: Previously, ASR had certain restrictions regarding directories on Linux machines: it required directories such as /(root), /boot, /usr, and others to be on the same OS disk of the VM in order to migrate it. Additionally, it did not support VMs that had /boot on an LVM volume rather than on a disk partition. With the latest update, ASR now supports directories on different disks and also supports /boot on an LVM volume. This essentially means that ASR allows migration of Linux VMs with LVM-managed OS and data disks, and directories on multiple disks. You can learn more by visiting our documentation, “Support matrix for disaster recovery of VMware VMs and physical servers to Azure.”
  • Migration from anywhere: ASR helps you migrate any kind of server to Azure no matter where it runs, private cloud or public cloud. We are happy to announce that the guest OS coverage for AWS has now expanded, and ASR now supports the following operating systems for migration of AWS VMs to Azure.
Supported source OS versions when migrating AWS VMs to Azure:

  • RHEL 6.5+ (new)
  • RHEL 7.0+ (new)
  • CentOS 6.5+ (new)
  • CentOS 7.0+ (new)
  • Windows Server 2016
  • Windows Server 2012 R2
  • Windows Server 2012
  • 64-bit version of Windows Server 2008 R2 SP1 or later

Learn more about how you can migrate from AWS to Azure in our documentation, “Migrate Amazon Web Services (AWS) VMs to Azure.”

For VMware and physical servers, get more details on the supported OS versions by reading our documentation, “Support matrix for disaster recovery of VMware VMs and Physical servers to Azure.” For Hyper-V, migration is guest OS agnostic.

We are listening and continuously enhancing these services. If you have any feedback or have any ideas, do use our UserVoice forums for Azure Migrate and ASR and let us know.

If you are new to these tools, get started at the Azure Migration Center. Make sure you also start your journey right by taking the free Assessing and Planning for Azure Migration course offered by Microsoft.

Check Out These Essential Apps for Editing Photos on Your Phone

The content below is taken from the original ( Check Out These Essential Apps for Editing Photos on Your Phone), to continue reading please visit the site. Remember to respect the Author & Copyright.

“It’s probably the most fun time to be a photographer…in the history of photography.”

Read more…

Master Blockchain With These Easy Online Courses

The content below is taken from the original ( Master Blockchain With These Easy Online Courses), to continue reading please visit the site. Remember to respect the Author & Copyright.

Confused about cryptocurrencies? Demystify the technology that makes them tick with The 2019 Blockchain Developer Mastery Bundle, on sale now for just $19 — a savings of over 90% off the regular price. […]

The post Master Blockchain With These Easy Online Courses appeared first on Geek.com.

Alibaba acquires German big data startup Data Artisans for $103M

The content below is taken from the original ( Alibaba acquires German big data startup Data Artisans for $103M), to continue reading please visit the site. Remember to respect the Author & Copyright.

Alibaba has paid €90 million ($103 million) to acquire Data Artisans, a Berlin-based startup that provides distributed systems and large-scale data streaming services for enterprises.

The deal was first announced by European media, including EU-Startups, before being confirmed by both Alibaba and Data Artisans through blog posts.

Data Artisans was founded in 2014 by the team leading the development of Apache Flink, an open source large-scale data processing technology. The startup offers its own dA Platform, with open source Apache Flink and Application Manager, to enterprise customers that include

Netflix, ING, Uber and Alibaba itself.The Chinese e-commerce giant Netflix.Alibaba

has been working with Data Artisans since 2016, through support and open source work to help the architecture and performance of the software, both companies said in statements. Data Artisans is on record as raising $6.5 million billion over two rounds, most recently a Series A B in 2016 led by Intel Capital, but there was a seemingly unannounced Series B which closed last year and it looks like Alibaba was involved, according to a blog post from Data Artisans co-founders Kostas Tzoumas and Stephan Ewen.

Now Alibaba’s ownership — and you’d also presume, resources — can help the business reach “new horizons” with its open source technology, including moves to “expand to new areas that we have not explored in the past and make sure that Flink becomes a more valuable data processing framework for the modern data-driven, real-time enterprise,” the duo wrote.

“Moving forward together, data Artisans and Alibaba will not only continue, but accelerate contributions to Apache Flink and open source Big Data,” Tzoumas and Ewen added, explaining that Alibaba is one of Flink’s biggest users and contributors to the community.

To mark the new era, Alibaba has committed to providing its own in-house developments to Flink — which it calls Blink — to the community.

“By leveraging the technology expertise of both teams and shared passion to develop the open-source community, we are confident that this strategic tie-in will further strengthen the growth of the Flink community, accelerate the data-processing technologies and help bolster an open, collaborative and constructive environment for global developers who are passionate about stream processing and enabling real-time applications for modern enterprises,” said Jingren Zhou, vice president of Alibaba Group, in a statement.

This deal is reminiscent of Alibaba’s 2017 investment in MariaDB, an open source startup known for offering the most popular alternative to MySQL, a database management system. While not a full acquisition, the partnership has seen the two companies work together on new products for the community, and that’s also the goal here.

“Especially at times when many open source technologies and companies decide on a less collaborative and more “closed” approach, it is with great pleasure to see Alibaba committed to open source and our mission, eager to take Flink’s technological advancement to the next level,” Tzoumas and Ewen wrote in the announcement blog post.

Moving into open source and infrastructure tech makes sense for Alibaba, which is best known for e-commerce but also operates a cloud business, streaming services and more. With a net profit of $2.66 billion on revenue of $12.4 billion in its last quarter of business, the Chinese company certainly has plenty of money to pursue the strategy.

We’ve contacted Alibaba and Data Artisans with follow-up questions, and we hope to have more information on the deal soon. Please refresh for updates.

CloudReady Home Edition OS – Transform old PCs to a browsing center

The content below is taken from the original ( CloudReady Home Edition OS – Transform old PCs to a browsing center), to continue reading please visit the site. Remember to respect the Author & Copyright.

As computer hardware grows old, it becomes difficult to run the latest version of Windows, as it does not perform well. While there are a lot of alternatives, Google’s Chrome OS, or rather Chromium OS, works better on old hardware. […]

This post CloudReady Home Edition OS – Transform old PCs to a browsing center is from TheWindowsClub.com.

How To Host a DNS Domain in Azure

The content below is taken from the original ( How To Host a DNS Domain in Azure), to continue reading please visit the site. Remember to respect the Author & Copyright.


Azure DNS

One of the important things you will do with any online service is to configure DNS. You obtain a DNS domain from a registrar and either host the domain with the registrar’s own hosting service or on your own public DNS servers.

People often don’t consider the impact of DNS on the performance on their online service. The first thing that a client (or potential customer) will do when browsing your site is to attempt to resolve the name of your service. So, if they browse to http://bit.ly/2GX5BWv the browser/operating system will attempt to convert that name into an IP address to connect to – the address might be hidden by several layers of abstraction (CNAMEs).

How fast that resolution happens impacts the overall performance of the site, and the longer a site takes to load, the less profitable it will be. Many DNS hosting services are located in one or a few data centers in a relatively small area. For example, I might host a DNS name in California. If a customer in the Western US browses the site, the name will resolve quickly and then the site can start to load. But if a customer in India attempts to browse to the site, the name is on the other side of the globe and it will take much longer for the name to resolve and the site to start loading – customer lost!

Azure DNS hosts your domain in Azure’s global network of data centers. That means that your domain is hosted all around the world, with automatic replication, and places the domain names closer to your potential customers. Using anycast networking, the client is redirected to the closest replica – now that client in India is redirected to an Azure DNS replica in India and the name resolves in milliseconds.
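
To get a feel for what resolution latency looks like from your own location, here is a small sketch using nothing but Python's standard library; it times repeated lookups of a hostname. The first attempt normally pays the full resolution cost, and later attempts usually hit the local cache, so use a name you have not looked up recently.

    import socket
    import time

    def time_lookup(hostname: str, attempts: int = 5) -> None:
        """Time repeated name lookups for a hostname."""
        for i in range(attempts):
            start = time.perf_counter()
            socket.getaddrinfo(hostname, 443)
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"{hostname} attempt {i + 1}: {elapsed_ms:.1f} ms")

    # Example: compare a site hosted near you with one hosted far away.
    time_lookup("www.example.com")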

Other benefits include:

  • Being an Azure service, Azure DNS can leverage Azure AD, auditing, governance, role-based access control (RBAC), and resource locking to secure your DNS service.
  • The admin experience is extremely simple – much easier than those “cpanels” that registrars use.
  • There is an internal DNS hosting option, but I find it a bit immature today. The external option, however, is awesome, in my opinion.

Create the Azure DNS Resource

Start off in the Azure Portal and click Create a Resource. Search for and select DNS Zone, and then click Create. Enter the following details in Create DNS Zone:

  • Name: The name of the DNS domain that you want to host.
  • Subscription: The subscription that you want to create the new resource in.
  • Resource group: The name of the resource group to create/use.
  • Resource group location: The Azure region of the resource group.

Creating a DNS hosting resource in Azure [Image Credit: Aidan Finn]

The new DNS zone resource is created as a global resource, not dependent on any one region … in theory. If I was creating a DNS zone in Azure for a service that runs in North Europe and has failover to West Europe, I would host the DNS zone in a different resource group that is hosted in France Central … weird things can happen in huge clouds and I don’t like to take chances.

The resulting resource is pretty simple. You can add records and delete the zone. Speaking of which – you might want to add a Delete lock to the DNS zone resource.

A new DNS zone hosted in Azure [Image Credit: Aidan Finn]

Note the highlighted name servers in the above screenshot. These are the names, resolvable by anycast, that the Internet will use to find the DNS servers for this DNS domain.
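
If you would rather script the zone creation than click through the portal, here is a minimal sketch, assuming the azure-identity and azure-mgmt-dns Python packages and an existing resource group; it creates the zone and prints the name servers you will need in the next step.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.dns import DnsManagementClient

    subscription_id = "<subscription-id>"
    resource_group = "dns-rg"          # assumed to exist already
    zone_name = "example.com"          # the domain you want Azure DNS to host

    client = DnsManagementClient(DefaultAzureCredential(), subscription_id)

    # DNS zones are a global resource, so the location is always "global".
    zone = client.zones.create_or_update(resource_group, zone_name, {"location": "global"})

    # These are the name servers to enter at your registrar.
    for ns in zone.name_servers:
        print(ns)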

Modify Name Servers

At this time, the Internet has no idea about your new DNS hosting resource in Azure. It is time to change that. Browse to the control panel of your DNS registrar and log in. Browse through the maze of links until you find the option to manage your name servers. Change the registrar’s default name servers to the four Azure name servers.

Yes – Azure DNS is global and there are just four name servers. These names will use anycast to resolve to the closest replica of your globally replicated DNS zone.

Modifying the name servers for Azure DNS hosting [Image Credit: Aidan Finn]

Tip: Don’t do this until you have created all the required DNS records in your new DNS zone. It also might take time for the TTL (caching period) of old records to expire and force a re-lookup to your new DNS records in Azure.

And that is it! Now the Internet will start to look to Azure DNS to resolve names in this domain. You can now go through the simple process of creating DNS records in the Azure Portal.
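
Once the name server change has propagated, you can confirm that the Internet is really asking Azure DNS for your zone. A small check like the one below, assuming the dnspython package, asks for the zone's NS records and prints what comes back; you should see the same azure-dns names shown in the portal.

    import dns.resolver  # assumes the dnspython package (pip install dnspython)

    zone_name = "example.com"  # the domain you delegated to Azure DNS

    # Ask for the zone's NS records as the rest of the Internet sees them.
    answers = dns.resolver.resolve(zone_name, "NS")
    for record in answers:
        print(record.target)

    # Expected output: the four azure-dns name servers shown in the Azure Portal.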

The post How To Host a DNS Domain in Azure appeared first on Petri.

cloneapp (2.09.399)

The content below is taken from the original ( cloneapp (2.09.399)), to continue reading please visit the site. Remember to respect the Author & Copyright.

CloneApp enables easy backup of all your app settings from Windows directories and Registry.

App migration checklist: How to decide what to move to cloud first

The content below is taken from the original ( App migration checklist: How to decide what to move to cloud first), to continue reading please visit the site. Remember to respect the Author & Copyright.

[Editor’s note: This post originally appeared on the Velostrata blog. Velostrata has since come into the fold at Google Cloud, and we’re pleased to now bring you their seasoned perspective on managing your cloud migration journey. There’s more here on how Velostrata’s accelerated migration technology works.]

When you’re considering a cloud migration, you’re likely considering moving virtual machines (VMs) that may have been created over many years, by many teams, to support a huge range of applications. Moving these systems without breaking any team’s essential applications may seem daunting. It’ll require some knowledge of the applications in question to classify those apps before setting your migration plan.

In a recent blog post, we talked about the four tiers you can use to help organize how you migrate your applications to the public cloud. We had a number of requests from that post, asking us to go a bit deeper on two important considerations: the application’s status and the application’s integrations and dependencies. In this post, we’ve put together a few more app-related questions that IT should be asking, alongside some of the likely answers. Of course, every enterprise and cloud migration is different, but the answers highlighted in the guidance notes below are likely to yield a stronger candidate for migration than others.

If you find yourself in a situation where you don’t know (and cannot obtain) the answers, that might be a sign that this app isn’t a good candidate for migration. Sometimes knowing what you don’t know is a helpful gauge when deciding on a next step.

What’s the application status?

Here, we’re looking at all the components that factor into an application’s status within your organization’s landscape. These are some of the most important questions to evaluate.

What is the criticality of this application?
For example: How many users depend on it? What is the downtime sensitivity?

  • Tier 1 (highly important, 24×7 mission-critical)
  • Tier 2 (moderately important)
  • Tier 3 (low importance, dev/test)

What is the production level of this application?

  • In production
  • In staging
  • In development
  • In testing

What are the data considerations for this app?

  • Stateful data
  • Stateless data
  • Other systems reliant on this data set

How was this application developed?

  • Third-party purchase from major vendor (still in business?)
  • Third-party purchase from minor vendor (still in business?)
  • Written in-house (author still at company?)
  • Written by a partner (still in business? still a partner?)

What are this application’s operational standards?
For example: what organizational, business, or technological considerations exist?

  • Defined maintenance windows?
  • Defined SLAs?
  • Uptime-sensitive?
  • Latency-sensitive?
  • Accessed globally or regionally?
  • Deployed manually or via automation?

Guidance: Avoiding sensitive apps is often most desirable for a first migration.

What are the specific compliance or regulatory requirements?

  • ISO 27000?
  • PCI/DSS?
  • HIPAA?
  • EU Personal Data Protection?
  • GDPR?

Guidance: The fewer compliance or regulatory requirements, the better for a first migration.

What kind of documentation is readily available, and is it up-to-date?

  • System diagram?
  • Network diagram?
  • Data flow diagram?
  • Build/deploy docs?
  • Ongoing maintenance docs?

Guidance: The more docs that exist, the better!

What are the migration implications?

  • Easy to lift-and-shift as-is into the cloud
  • May require some refactoring
  • Need to modernize before migrating
  • Can wait to modernize after migrating
  • Need to rewrite in the cloud from scratch

Any business considerations?

  • Is this system used year-round or seasonally?
  • Is there a supportive line-of-business owner?
  • Does this app support an edge case or general use case?
  • Is this app managed by a central IT team or another team?
  • Would a downtime window be acceptable for this app?

Guidance: having more supportive owners and stakeholders is always crucial to the success of initial migrations.

What are the app integrations and dependencies?

Here, we’re going one step deeper, looking at how this application ties into all your other applications and workloads. This is hugely important, since you might want to group applications into the same migration sprint if they’re coupled together tightly through integrations or dependencies.

What are the interdependent applications?

  • SAP?
  • Citrix?
  • Custom or in-house apps?

Guidance: Fewer dependencies are ideal.

What are the interdependent workflows?

  • Messaging?
  • Monitoring?
  • Maintenance/management?
  • Analytics?

Guidance: Fewer dependencies are ideal.

Where is the database and storage located?

  • Separate servers?
  • Co-located servers?
  • Is storage block- or file-level?

Any other services to analyze?

  • Web services?
  • RPC used either inbound or outbound?
  • Backup services (and locations) in effect?

Guidance: None of these are more or less ideal, simply something to be aware of.

Other questions to ask:

  • Unique dependencies?
  • Manual processes required?
  • Synchronized downtime/uptime (with other apps)?

Guidance: The goal for first apps to migrate is to minimize complexity and labor.

Taking the time to truly understand your applications is a big part of success when migrating to the cloud. Picking the right applications to migrate first is key to building success and confidence within your organization in your cloud and migration strategy. Analyzing these details should help you and your IT team pick the right order for migrating your applications, which will go a long way toward achieving migration success.

Find out more here about how cloud migration works with Velostrata.

Logitech adds desktop ad.

The content below is taken from the original ( Logitech adds desktop ad.), to continue reading please visit the site. Remember to respect the Author & Copyright.

On 2019/03/01, I came back to my desktop after a week away (W10 Pro x64 v1809). After logging in, I was greeted with an always-on-top ad for Logitech Capture (https://puu.sh/CrlyK/cd7bc406a9.png). This ad appears to be spawned by Logitech Download Assistant, c:\windows\system32\logilda.dll, via a registry entry under Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run, with the name "Logitech Download Assistant", type REG_SZ, and data C:\Windows\system32\rundll32.exe C:\Windows\System32\LogiLDA.dll,LogiFetch. A quick search indicates that "Logitech Capture" is an OBS competitor recently launched by Logitech, and that Logitech Download Assistant is part of the Logitech drivers for their input devices (as opposed to the generic MS ones). Before I left, I had just added a Logitech C920 webcam (preexisting Logitech devices: a G510 keyboard and a G502 mouse), and when I came back, I had a pending W10 update labeled only as "Logitech Image". Rebooting my computer to finish the update resulted in the ad reappearing on my desktop. Disabling the registry entry via Autoruns and rebooting resulted in the advertisement not appearing on my desktop.
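
For anyone who wants to check their own machine, a small read-only sketch using Python's built-in winreg module (Windows only) lists the machine-wide Run entries so you can spot LogiLDA without installing Autoruns. Deleting a value works the same way via winreg.DeleteValue but needs an elevated prompt, so that part is left commented out.

    import winreg

    RUN_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Run"

    # Read-only listing of machine-wide startup entries.
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, RUN_KEY, 0, winreg.KEY_READ) as key:
        index = 0
        while True:
            try:
                name, value, _type = winreg.EnumValue(key, index)
            except OSError:
                break  # no more values
            print(f"{name}: {value}")
            index += 1

    # To remove the entry (run from an elevated prompt, at your own risk):
    # with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, RUN_KEY, 0, winreg.KEY_SET_VALUE) as key:
    #     winreg.DeleteValue(key, "Logitech Download Assistant")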

submitted by /u/RasterTragedy to r/windows

RISC OS interview with Chris Williams

The content below is taken from the original ( RISC OS interview with Chris Williams), to continue reading please visit the site. Remember to respect the Author & Copyright.

For your Christmas treat this year, we have an interview with Chris Williams, of Drobe and The Register fame. Enjoy and a very Merry Christmas from Iconbar.

Would you like to introduce yourself?

I’m Chris Williams, former editor of RISC OS news and trouble-making website drobe.co.uk. The site’s frozen online right now as an archive because while I used to have a lot of free time to work on it, I graduated university in the mid-2000s, got a real job, and sadly ran out of spare time to maintain it, and so put it in stasis to preserve it. Today, I live and work in San Francisco, editing and writing articles for theregister.co.uk, mostly covering software and chips. I also once upon a time wrote some RISC OS applications, such as EasyGCC to help people build C/C++ projects, and a virtual memory manager that extended the machine’s RAM using swap space on disk. If you’re using RISC OS Select or 6, there’s some of my code in there, too, during boot up.

How long have you been using RISC OS?

Since 1992 when my parents bought an Acorn A5000. So I guess that’s about 26 years ago. We upgraded to a RiscPC as soon as we could. I took a StrongARM RPC crammed with add-ons, like an x86 card, IDE accelerator, Viewfinder graphics card, and Ethernet NIC, to uni, and got to know the OS really well. No other operating system I’ve used since has come close to the simplicity and ease-of-use of the RISC OS GUI, in my opinion. Apple’s macOS came really very close, and then the iGiant lost the plot on code quality.

What does RISC OS look like from the USA viewpoint?

It’s kinda like BeOS, in that operating system aficionados will know of it and appreciate it for what it is: an early operating system that had an intuitive user interface but was pushed under the wheels of Intel and Microsoft. Folks who experiment with RaspberryPis may also come across it, as it is one of the operating systems listed on raspberrypi.org. In conversation with Americans, or in writing articles, I normally introduce RISC OS as the OS Acorn made for its Arm desktop computers – y’know, Acorn. Acorn Computers. Britain’s Apple. The English Amiga. The ones who formed Arm, the people who make all your smartphone processor cores. And then the light bulb turns on.

What’s really interesting is what’s going on with Arm, and I think that will help, to some extent, RISC OS appear a little on more people’s radars. Anyone who’s been using RISC OS since the 1990s knows the pain of seeing their friends and colleagues having fun with their Windows PC games and applications, and their Intel and AMD processors, and graphics cards, and so on. Even though RISC OS had a fine user interface, and a decent enough set of software, and fun games, it just was for the most part, incompatible with the rest of the world and couldn’t quite keep up with the pace of competitors. It was hard seeing everything coalesce around the x86-Windows alliance, while Acorn lost its way, and Arm was pushing into embedded engineering markets.

Now, Arm is in every corner of our daily lives. It’s in phones, tablets, routers, smartcards, hard drives, Internet of Things gadgets, servers, and even desktops. Microsoft is pushing hard on Windows 10 Arm-based laptops with multi-day battery life, at a time when Intel has got itself stuck in a quagmire of sorts. It blows my mind to go visit US giants like Qualcomm, and Arm’s offices in Texas, and see them focusing on Arm-based desktop CPUs, a technology initiative the Acorn era could really have done with. It’s just a little mindboggling, to me anyway, to see Microsoft, so bent on dominating the desktop world with Windows on x86, to the detriment of RISC OS on Arm, now embracing Windows on Arm. I probably sound bitter, though I’m really not – I’m just astonished. That’s how life goes around, I guess.

Anyway, it’s perhaps something RISC OS can work with, beyond its ports to various interesting systems, if not targeting new hardware then catching attention as an alternative Arm OS. One sticking point is that Arm is gradually embracing 64-bit more and more. It’ll support 32-bit for a long while yet, but its latest high-end cores are 64-bit-only at the kernel level.

What other systems do you use?

I use Debian Linux on the desktop, and on the various servers I look after. I was an Apple macOS user as well for a while, though I recently ditched it. The software experience was getting weird, and the terrible quality of the latest MacBook Pro hardware was the final straw. Over the years, I’ve used FreeBSD and Debian Linux on various Arm chipsets, AMD and Intel x86 processors, and PowerPC CPUs, and even a MIPS32 system. I just got a quad-core 64-bit RISC-V system. I like checking out all sorts of architectures.

What is your current RISC OS setup?

I have a RaspberryPi 2 for booting RISC OS whenever I need it, though my primary environment is Linux. It’s what I use during work.

What is your favourite feature/killer program in RISC OS?

Back in the day, I couldn’t work without OvationPro, Photodesk, the terminal app Putty, StrongEd, BASIC for prototyping, GCC for software development, Director for organizing my desktop, Netsurf and Oregano, Grapevine… the list goes on.

What would you most like to see in RISC OS in the future?

Many, many more users. People able to access RISC OS more easily, perhaps using a JavaScript-based Arm emulator in a web browser to introduce them to the desktop.

What are your interests beyond RISC OS?

Pretty much making the most of living in California while I’m here, and traveling around the United States to visit tech companies and see what America has to offer. From Hawaii to Utah and Nevada to Texas, Florida and New York, and everything in between. I cycle a lot at the weekends, going over the Golden Gate Bridge and into normal Cali away from the big city, or exploring the East Bay ridge, returning via Berkeley. My apartment is a 15-minute walk from the office, so I tend to cycle a lot to get some exercise. When I was living in the UK, I ran about 48 miles a week, before and after work, which was doable in Essex and London where the streets and paths are flat. That’s kinda impossible in San Francisco, where the hills are legendarily steep. I’m happy if I can make it four or five miles.

I also do some programming for fun, mainly using Rust – which is like C/C++ though with a heavy focus on security, speed and multithreading. We really shouldn’t be writing application and operating system code in C/C++ any more; Rust, Go, and other languages are far more advanced and secure. C is, after all, assembly with some syntactic sugar. I’ve also been experimenting with RISC-V, an open-source CPU instruction set architecture that is similar to 64-bit Arm in that they have common roots – the original RISC efforts in the SF Bay Area in the early 1980s. The idea is: the instruction set and associated architecture is available for all to freely use to implement RISC-V-compatible CPU cores in custom chips and processors. Some of these cores are also open-source, meaning engineers can take them and plug them into their own custom chips, and run Linux and other software on them.

Western Digital, Nvidia, and other big names are using or exploring RISC-V as an alternative to Arm, which charges money to license its CPU blueprints and/or architecture. Bringing it all together, I’ve started writing a small open-source operating system, in my spare time, in Rust for RISC-V called Diosix 2.0 (http://bit.ly/2SqyVpH). Version 1.0 was a microkernel that ran on x86. The goal is to make a secure Rust-RISC-V hypervisor that can run multiple environments at the same time, each environment or virtual machine in its own hardware-enforced sandbox. That means you can do things like internet banking in one VM sandbox, and emails and Twitter browsing in another, preventing any malicious code or naughty stuff in one VM from affecting whatever’s running in another VM.

You can do all this on x86, Arm, and MIPS, of course. But given RISC-V was not bitten by the data-leaking speculative-execution design flaws (aka Meltdown and Spectre) that made life difficult for Intel, AMD, Arm, et al this year, and Rust is a lot safer than C/C++ that today’s hypervisors and operating systems are written in, I felt it was worth exploring. Pretty much every Adobe Flash, Windows, iOS, Android, macOS, Chrome, Safari, Internet Explorer, etc security update these days is due to some poor programmer accidentally blundering with their C/C++ code, and allowing memory to be corrupted and exploited to execute malicious code. Google made the language of Go, and Mozilla made the language of No: Rust refuses to build software that potentially suffers from buffer overflows, data races, and so on.

It also all helps me in my day job of editing and writing a lot – keeping up to date with chip design, software, security, and so on.

If someone hired you for a month to develop RISC OS software, what would you create?

To be honest, I’d try to find a way to transplant the RISC OS GUI onto other environments, so I can use the window furniture, contextual menus, filer, pinboard, iconbar, etc, on top of a base that runs on modern hardware. I think that would take longer than a month.

What would you most like Father Christmas to bring you as a present?

A larger apartment: rent is bonkers in San Francisco, so I could do with some extra space.

Any questions we forgot to ask you?

Why do vodka martinis always seem like a good idea 90 minutes before it’s too late to realize they were a bad idea?

PS: if anyone wants to get in touch, all my contact details are on diodesign.co.uk

You can read lots of other interviews on Iconbar here.
