Datadog Acquires Logmatic.io, Adding Log Management to its Full-Stack Cloud Monitoring Platform

The content below is taken from the original (Datadog Acquires Logmatic.io, Adding Log Management to its Full-Stack Cloud Monitoring Platform), to continue reading please visit the site. Remember to respect the Author & Copyright.

Datadog
announced the acquisition of Logmatic.io, a Paris-based Operations Data
Platform for Log and Machine Events. The acquisition makes Datadog the
first vendor to offer Infrastructure Metrics, Application Performance
Monitoring (APM), and Log Management within a single platform. This
provides DevOps teams with the tools necessary to detect, diagnose, and
troubleshoot almost any problem in modern applications, significantly
reducing the total cost of application monitoring and time to resolution
in critical applications for enterprise customers.

Logmatic.io
was founded in 2014 by Emmanuel Gueidan, Renaud Boutet, and Amirhossein
Malekzadeh with the mission of helping organizations improve their
software and business performance. The platform analyzes billions of
data points daily with powerful log management and visualization tools
and counts marquee brands such as Accenture, BlaBlaCar, Canal+,
DailyMotion, and LVMH amongst its customers.

Earlier
this year, Datadog also announced the general availability of Datadog
APM (application performance monitoring), enabling organizations to
monitor every part of their service-oriented applications from
infrastructure to code.

“Integrating
logs with the APM and Infrastructure monitoring we already provide is
an important step in the evolution of the monitoring and analytics
category,” said Olivier Pomel, Chief Executive Officer of Datadog. “Our
new integrated platform will simplify the monitoring of modern
applications that often span clouds, containers, IoTs and mobile
devices. It will also unlock new A.I. and machine learning based
capabilities to help customers manage and improve their applications and
businesses.”

“Access
to the right machine data coming from the very core of business
operations will help drive success for our customers,” said Amirhossein
Malekzadeh, Co-Founder and CEO at Logmatic.io. “Over the last 3 years at
Logmatic.io, we saw a rapidly growing overlap in users and use-cases
between log management and monitoring platforms. Becoming a part of
Datadog was the best way to deliver a superior monitoring experience to
the rapidly evolving market.”

“We’re
already customers of both Datadog and Logmatic.io,” said Sylvain Barré,
Vice President of Scale at Dailymotion. “Having a single analytics,
monitoring, and alerting platform will dramatically improve our
productivity.”

The
Logmatic.io team has joined the Datadog Paris R&D office and an
integrated log management solution is currently available to select
customers. All current Logmatic.io customers will continue to have
access to their existing services, and will have a direct path for
migration upon the general availability of log management within
Datadog.

Convert Ps1 To Exe files using free software or online tool

PowerShell scripts are a handy way to automate tasks on Windows, but running them is governed by execution policies, and the only native executable files on Windows are .exe and .dll. Someone who is not familiar with scripting may therefore find it difficult to run a .ps1 file. A tool that can convert a .ps1 file into an .exe file makes sharing scripts far more practical, and can also encrypt them so the contents are better protected from prying eyes. Ps1 To Exe comes across as a viable solution.

Ps1 To Exe is a simple yet useful application that offers developers a fair and quick method of converting PowerShell PS1 script files to the EXE format. The tool can generate 32-bit and 64-bit output files quickly. It also supports encryption and allows you to append additional items.

Convert Ps1 To Exe

Download and install the application. Its small size (3.1 MB) is a definite advantage. It is also available as a portable application, which can be removed easily without leaving any traces on your PC.

When you first launch the application, its main screen displays the options for converting a .ps1 file into an .exe file.

After loading a PS1 script, you can customize various parameters of the output file. For instance, you can:

  1. Encrypt the file
  2. Compress the file using UPX
  3. Generate a visible or invisible application that uses the 32-bit or 64-bit architecture

All you need to do is specify the temporary data storage location and whether the file should be deleted automatically on exit. Besides ‘Options’, the following tabs are available:

  • Include – lets you edit the loaded scripts or add more information to the output files
  • Version Information – shows details such as the icon file
  • Editor – an easy-to-use built-in editor for modifying the imported PS1 files
  • Program Settings – allows you to configure the desired language

You can reset all entries at any time.

In all, Ps1 To Exe is a very dependable utility to convert PowerShell files to the EXE format. You can download it here.

Ps1 to Exe converter online

Apart from the standalone and portable application, there’s Ps1 To Exe Online Converter to help you convert PowerShell (.ps1) files to the EXE (.exe) format. Visit f2ko.de to use this online tool free.

The web application just requires you to browse to the location of the .ps1 file, choose an architecture, specify visibility, and optionally enter a password to convert PowerShell (.ps1) files to the EXE (.exe) format instantly.

OpenStack Developer Mailing List Digest September 2 – 8

Successbot Says!

  • fungi: Octavia has migrated from Launchpad to StoryBoard [1]
  • sc’ : OpenStack is now on the Chef Supermarket! http://bit.ly/2xbI2mA [2]

Summaries:

  • Notifications Update Week 36 [3]

Updates:

  • Summit Free Passes [4]
    • People who attended the Atlanta PTG or will attend the Denver PTG will receive 100% discount passes for the Sydney Summit
    • They must be used by October 27th
  • Early Bird Deadline for Summit Passes is September 8th [5]
    • Expires at 6:59 UTC
    • Discount Saves you 50%
  • Libraries Published to PyPI with YYYY.X.Z versions [6]
    • Moving forward with deleting the libraries from PyPI
    • Removing these libraries:
      • python-congressclient 2015.1.0
      • python-congressclient 2015.1.0rc1
      • python-designateclient 2013.1.a8.g3a2a320
      • networking-hyperv 2015.1.0
    • Still waiting on approval from PTLs about the others
      • mistral-extra
      • networking-odl
      • murano-dashboard
      • networking-midonet
      • sahara-image-elements
      • freezer-api
      • murano-agent
      • mistral-dashboard
      • sahara-dashboard
  • Unified Limits work stalled [7]
    • Need for new leadership
    • Keystone merged a spec [8]
  • Should we continue providing FQDNs for instance hostnames? [9]
    • Nova network has deprecated the option that the domain in the FQDN is based on
    • Working on getting the domain info from Neutron instead of Nova, but this may not be the right direction
    • Do we want to use a FQDN as the hostnames inside the guest?
      • The Infra servers are built with the FQDN as the instance name itself
  • Cinder V1 API Removal [10]
    • Patch here [11]
  • Removing Screen from Devstack - RSN
    • It’s been merged
    • A few people are upset that they don’t have screen for debugging anymore
    • Systemd docs are being updated to include the pdb path, so people can debug in a similar way to how they used screen [12] [13] [14]

PTG Planning

  • Video Interviews [15]

 

[1] http://bit.ly/2eLl1wd

[2] http://bit.ly/2xbI2D6

[3] http://bit.ly/2eLCcOj

[4] http://bit.ly/2xbI3XG #Free

[5] http://bit.ly/2eLCBQL

[6] http://bit.ly/2xbI4ec #YYYY

[7] http://bit.ly/2eLl1MJ

[8] http://bit.ly/2xbI4uI

[9] http://bit.ly/2eLWCqi

[10] http://bit.ly/2xbnB9c

[11] http://bit.ly/2eLl23f

[12] http://bit.ly/2eLl2jL/

[13] http://bit.ly/2xbI51K

[14] http://bit.ly/2eLl2Ah

[15] http://bit.ly/2xbnBpI

Fitbit’s Ionic smartwatch will help diabetics track glucose levels

Fitbit is pairing up with Dexcom, a company that creates continuous glucose monitoring (CGM) devices for people with diabetes. In an announcement today, the companies say that their first initiative is to bring Dexcom’s monitoring device data to Fitbit’s new Ionic smartwatch.

For those unfamiliar, Dexcom’s CGM devices work with a sensor that sits just under the skin and measures a person’s glucose levels every few minutes in order to provide them with a bigger picture of where their glucose levels are and where they’re heading. As of now, a transmitter attached to that sensor lets you see readouts of those levels on a smartphone or even an Apple Watch, but soon you’ll also be able to see them on Ionic’s screen.
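
The “where they’re heading” part is essentially a trend computed over the recent readings. As a toy illustration of the idea (not Dexcom’s actual algorithm), the rate of change can be estimated from the last few samples:

```python
def glucose_trend(readings, interval_minutes=5):
    """Estimate the rate of change (mg/dL per minute) from the most
    recent readings, taken every `interval_minutes` minutes.
    Toy example only; not Dexcom's actual algorithm."""
    if len(readings) < 2:
        return 0.0
    recent = readings[-4:]  # roughly the last 15-20 minutes of data
    span = (len(recent) - 1) * interval_minutes
    return (recent[-1] - recent[0]) / span

# A rising series: 110, 116, 122, 128 mg/dL over 15 minutes
# gives 18 mg/dL / 15 min = 1.2 mg/dL per minute.
rate = glucose_trend([110, 116, 122, 128])
```

A positive rate would drive an “up” trend arrow on the watch face, a negative one a “down” arrow.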

Dexcom and Fitbit say they’re hoping to get this feature available to Ionic users in 2018 and are working to develop other diabetes management tools in the future. "We believe that providing Dexcom CGM data on Fitbit Ionic, and making that experience available to users of both Android and iOS devices, will have a positive impact on the way people manage their diabetes," said Dexcom’s CEO, Kevin Sayer, in a statement.

Source: Fitbit and Dexcom

Huawei developing NVMe over IP SSD

Seagate Kinetic drive idea had Huawei genesis and has Huawei follow-on

Kinetic re-invented by Huawei with flash drives

Analysis Huawei is developing an NVMe over IP SSD with an on-drive object storage scheme, meaning radically faster object storage and a re-evaluation of object storage’s very purpose.

At the Huawei Connect 2017 event in Shanghai, Guangbin Meng, Storage Product Line President for Huawei, told El Reg Huawei is developing an NVMe over IP SSD to overcome in-system scaling limits. Such an SSD could enable an all-flash array to scale up to tens of thousands of nodes. Each drive would have its own IP address.

This reminds El Reg of Seagate’s Kinetic disk drive idea, in which each disk drive has its own Ethernet address and implements an object storage scheme with Get and Put data access operators. The Kinetic disk concept seems to have run its course inside Seagate and faces oblivion. However, startup OpenIO is persevering with object storage disks. Igneous has similar ideas with Ethernet-accessed, ARM CPU-driven disks.

Cameron Bahar, VP and CTO for Enterprise Storage at Huawei’s Santa Clara facility, says Huawei actually had the Kinetic drive idea before Seagate. He explains that Garth Gibson, the man who developed RAID, proposed a NAS-attached storage disk in a pre-2000 paper. Gibson then went to Panasas as CTO and tried to develop the idea, but it didn’t get any traction.

In 2012 Huawei built an object store in its Santa Clara facility using key:value disks. Each disk had a daughter card attached to it with an ARM CPU, DRAM and Linux. The device was even patented.

However, a Huawei employee went to Seagate and persuaded it to develop the Kinetic disk. The drives there already had ARM CPUs inside them looking after the internal disk drive functions; Bahar said the extra cost for a Kinetic drive, the Kinetic tax, was US$20. That’s quite high, especially for those who buy disks by the thousand, as you would when building the kind of scale-out rigs at which object storage excels.

Back to Huawei and its NVMe over IP concept for flash drives (SSDs). Huawei had the idea of combining this with the on-drive object store and creating an object storage system composed of individual flash drive nodes. The object storage software is Huawei’s own and involves a key:value store on each drive.

Bahar talked about two kinds of object store: cheap-and-deep S3-style stores, which is the traditional idea, and fast object stores based on key:value flash drive nodes.

What happens if we have very fast object stores? Let’s put them in an array and add a NAS gateway. We then have a flash-based and vastly scalable filer. There could be different types of flash; faster and slower drives, for example. How enterprises or cloud service providers could use such an array is a fascinating question. Video production, security surveillance systems, big data analytics and AI/machine learning applications might all have an interest in it.
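
The drive-level interface described above amounts to Get/Put operations against a per-drive key:value store, with each drive reachable at its own IP address. A minimal illustrative sketch in Python (all names are hypothetical; Huawei’s actual on-drive software is proprietary):

```python
class KeyValueDrive:
    """Illustrative model of a network-addressable flash drive
    exposing an object-style Get/Put interface (hypothetical)."""

    def __init__(self, ip_address):
        self.ip_address = ip_address   # each drive has its own IP
        self._store = {}               # stands in for on-drive flash

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)


class ObjectStore:
    """Scale-out store built from individual drive nodes; objects are
    placed by hashing the key across the drives."""

    def __init__(self, drives):
        self.drives = drives

    def _drive_for(self, key):
        return self.drives[hash(key) % len(self.drives)]

    def put(self, key, value):
        self._drive_for(key).put(key, value)

    def get(self, key):
        return self._drive_for(key).get(key)


# Four drive nodes, each with its own (placeholder) IP address.
store = ObjectStore([KeyValueDrive(f"10.0.0.{i}") for i in range(1, 5)])
store.put("video/frame-001", b"...")
```

A NAS gateway in front of such an array would translate file operations into these Get/Put calls, which is what makes the flash-based, vastly scalable filer plausible.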

It’s only Huawei at present, with a developing and dramatically faster object storage system based on NVMe over IP-accessed flash drive nodes. It’s a promising concept that could find traction. Let’s see where it goes. ®

The Trainline is using big data to predict when ticket prices rise

Just like flights, hotels and Uber rides, train ticket prices rise when rail operators anticipate there will be increased demand for their services. It often leaves passengers scrambling to find a combination of tickets that won’t break the bank, even when they’re booking weeks before they travel. Train companies obviously want to protect profits by keeping their pricing structures a secret, but independent ticket retailer The Trainline believes it can now accurately predict when things will start to get expensive.

Appearing in The Trainline app from today, the Price Prediction tool uses the company’s "billions of data points" to suggest when ticket prices will rise. When a user performs a search for a specific route, the app will list the cheapest price, indicate when the route could sell out and then list incremental price changes depending on the date of booking.

"Our data scientists have used historical pricing trends from billions of customer journey searches to predict when the price of an Advance ticket will expire. We now share this information in our app to allow our customers to get the best price possible for their journey," Jon Moore, Chief Product Officer at Trainline said. "We’re introducing more advanced machine learning every day so naturally our predictions will get increasingly accurate. Our mission is to make train travel as simple as possible and price prediction is the first in a long line of predictive features we have planned to help customers save time and money."

To demonstrate its tool, the company took an Advance single fare between London Euston and Manchester Piccadilly. If booked 80 days before the day of travel, that ticket will cost £32, rising to £38 at 41 days before. Wait until 13 days before and the price rises again to £42, then more than doubles 48 hours beforehand. On the day, that 160-mile journey will cost a whopping £126.
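
Those quoted price points amount to a simple tier table keyed on days before travel. A sketch using only the figures from the article (the unquoted 48-hour price is omitted, so the table jumps straight to the walk-up fare; Trainline’s real model is learned from billions of searches):

```python
# Advance single, London Euston -> Manchester Piccadilly, using the
# price points quoted in the article (illustrative only).
FARE_TIERS = [  # (minimum days before travel, fare in GBP)
    (80, 32),
    (41, 38),
    (13, 42),
    (0, 126),  # walk-up fare on the day
]

def predicted_fare(days_before_travel):
    """Return the fare for a booking made this many days ahead."""
    for min_days, fare in FARE_TIERS:
        if days_before_travel >= min_days:
            return fare
    raise ValueError("days_before_travel must be non-negative")
```

The Price Prediction tool effectively surfaces such a table per route, learned from historical searches rather than hard-coded.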

The feature isn’t particularly groundbreaking; lots of companies provide similar tools for travel and accommodation all over the world. However, The Trainline is the UK’s biggest independent ticket retailer and processes 125,000 customer journeys across Europe every day, meaning it has a better understanding of the nation’s train travel habits than most.

Booking months in advance will always deliver the most value, but if you’re umming and ahhing over whether to book that ticket, the company’s new tool could provide some further clarity on the situation.

Source: Trainline (App Store), (Google Play)

British Library exhibit to highlight the sounds it’s fighting to save

Last year, the British Library began the "Save our Sounds" project, with the aim of accelerating the digitisation of millions upon millions of lost audio recordings held in its vast archive. The collection includes many rare and previously unreleased recordings of everything from speeches and music to wildlife, street sounds and pirate radio broadcasts. In some respects, it’s a race against the clock. Time is taking its toll on ancient formats like the wax cylinder, for example, and the equipment needed to play some formats is extremely hard to come by. There’s much to be done, but next month the British Library is celebrating achievements thus far with a free exhibition that "will explore how sound has shaped and influenced our lives since the phonograph was invented in 1877."

"Listen: 140 Years of Recorded Sound" runs from October 6th to March 11th, 2018, and complements the 90,000-plus recordings the British Library has already preserved and made available online. The exhibition, which’ll be joined by various events, will include curiosities such as the "wireless log" of Alfred Taylor, who at 16 years of age, recorded the 1922 equivalent of a vlog. Key moments in sound will be celebrated, such as the formation of the BBC and pop charts, and artifacts such as rare records, players and recording equipment will also be on display, "exploring how technology has transformed our listening experience."

Source: British Library

Bye Bye Solaris, it seems.

For readers of A Certain Age, this may bring a tear to the eye. Reports have been circulating of the decision by Oracle to lay off a significant portion of the staff behind its Solaris operating system and SPARC processors, and that move spells the inevitable impending demise of those products. They bore the signature of Sun Microsystems, the late lamented workstation and software company swallowed up by the database giant in 2009.

So why might we here at Hackaday be reaching for our hankies over a proprietary UNIX flavour and a high-end microprocessor, neither of which are likely to be found on many of the benches of our readers in 2017? To answer that it’s more appropriate to journey back to the late 1980s or early 1990s, when the most powerful and expensive home computers money could buy were still connected to a domestic TV set as a monitor.

If you received a technical education at a university level during that period the chances are that you would have fairly soon found yourself sitting in a lab full of workstations, desktop computers unbelievably powerful by the standards of the day. With very high resolution graphics, X-windows GUIs over UNIX, and mice that weren’t just used for a novelty paint package, these machines bore some resemblance to what we take for granted today, but at a time when an expensive PC still came with DOS. There were several major players in the workstation market, but Sun were the ones that seemed to have the university market cracked.

You never forget your first love, and therefore there will be a lot of people who will never quite shake that association with a Sun workstation being a very fast desktop computer indeed. Their mantra at the time was “The network is the computer”, and it is the memory of a significant part of a year’s EE students trolling each other by playing sound samples remotely on each other’s SPARCStations on that network that is replaying in the mind of your scribe as this is being written.

A Raspberry Pi with a Raspbian desktop probably outperforms one of those 1980s SPARCStations in every possible way, but that is hardly the point and serves only to demonstrate technological progress. It feels as though something important died today, even if it may be a little difficult to remember what it was when sat in front of a multi-core x86 powerhouse with a fully open-source 64-bit POSIX-compliant operating system running upon it.

Unsurprisingly we’ve featured no hardware hacks with such high-end computing. If you’d like to investigate some Sun Microsystems hardware though, take a look at the Centre for Computing History’s collection.

Uber Movement’s traffic data is now available to the public

Back in January, Uber announced that it’s giving urban planners access to a website with traffic data of their cities. Now that website is out of beta, and anybody can access it anytime. The Uber Movement website can show you how long it takes to get from one part of a city to another based on the day of the week and the time of day. People like you and me can consult it for realistic travel times, since its data came from actual Uber trips. However, its real purpose is to help city officials and planners figure out how to improve their transit systems.

Despite its good intentions and the anonymized data, the project wasn’t met with the warm reception Uber expected. The company has a pretty bad track record when it comes to privacy, after all. If you’ll recall, the New York Attorney General’s office discovered a few years ago that Uber’s corporate employees could track passengers’ rides and logs of their trips through the "God View" app. Uber had to purge riders’ identifiable info from its system and limit the app’s access to settle the probe. More recently, after pressure from privacy groups, the ride-hailing firm had to change an app setting that tracked customers for five minutes after their rides ended.

At this point in time, Uber Movement only has data on Boston, Washington DC, Manila and Sydney. If it doesn’t want to put the project in jeopardy — because it will add more cities in the future — it has to be very, very careful with the data it collects.

Via: TechCrunch

Source: Uber Movement

Amazon Announces Updates to AppStream 2.0, including Domain Join, Simple Network Setup, and More

In a recent post on the AWS blog, Amazon announced several updates to their application streaming service, AppStream 2.0. These updates span several different areas related to application streaming, such as domain management, file storage, and audio streaming.

One of the updates included in the announcement is a feature called “Domain Join”, which allows users to connect their AppStream 2.0 streaming instances to a Microsoft Active Directory domain. With this feature, existing Active Directory policies can be applied to one’s streaming instances, enabling application users to use single sign-on to access a variety of devices and services across an internal network.

Related to the new “Domain Join” feature in AppStream 2.0, users can make use of the new user management features found within the service’s web portal. These include the ability to create and manage user accounts, send “welcome” emails, and control which applications each user can access.

Another new feature that Amazon announced is the ability to store files for use in future AppStream 2.0 sessions, which can be useful to those who want to simply pick back up where they left off within an application. Files that are stored will be made available at the beginning of a user’s next session and any updates to these files are periodically synced back to the Amazon S3 folder that is automatically created when this persistent storage feature is enabled. It’s important to note, however, that users of this service are required to pay for the storage that they use, at the standard Amazon S3 rates.

Amazon also announced a simpler way to set up and configure network and internet access for streaming instances and for AppStream’s image builder. This simplified network setup process can be accessed right from the AppStream console by selecting a public subnet from a VPC that has at least one subnet available. Users can also configure custom security groups for VPCs, which can then be applied to image builders and fleets. No longer do users have to configure NAT gateways or traffic routing rules, which eliminates a great deal of the complexity involved in configuring application streaming, thus saving an organization time and money.

Another update included in the announcement was that AppStream 2.0 now supports both analog and digital microphones, mixing consoles, and other audio input devices for use with streaming applications. This feature can be enabled by going to the AppStream 2.0 toolbar and choosing the “Enable Microphone” option.

All of the above features are available in Amazon AppStream 2.0 as of August 23rd, and can be used in all AWS regions where AppStream 2.0 is available.

While application streaming can make distributing and maintaining software within an organization easier, it’s not without its share of complexities. However, services like Amazon AppStream 2.0 can make this process easier on admins, as they can easily manage all aspects of the application streaming process, providing users with up-to-date versions of the applications they need, wherever they may be. And with Amazon’s recent updates to AppStream 2.0, the application streaming process has gotten even easier for admins to configure and manage, especially for those in large organizations where application streaming makes more sense than a traditional local software installation.

The post Amazon Announces Updates to AppStream 2.0, including Domain Join, Simple Network Setup, and More appeared first on Petri.

Deadline 10 – Launch a Rendering Fleet in AWS

Graphical rendering is a compute-intensive task that is, as they say, embarrassingly parallel. Looked at another way, this means that there’s a more or less linear relationship between the number of processors that are working on the problem and the overall wall-clock time that it takes to complete the task. In a creative endeavor such as movie-making, getting the results faster spurs creativity, improves the feedback loop, gives you time to make more iterations and trials, and leads to a better result. Even if you have a render farm in-house, you may still want to turn to the cloud in order to gain access to more compute power at peak times. Once you do this, the next challenge is to manage the combination of in-house resources, cloud resources, and the digital assets in a unified fashion.

Deadline 10
Earlier this week we launched Deadline 10, a powerful render management system. Building on technology that we brought on board with the acquisition of Thinkbox Software, Deadline 10 is designed to extend existing on-premises rendering into the AWS Cloud, giving you elasticity and flexibility while remaining simple and easy to use. You can set up and manage large-scale distributed jobs that span multiple AWS regions and benefit from elastic, usage-based AWS licensing for popular applications like Deadline for Autodesk 3ds Max, Maya, Arnold, and dozens more, all available from the Thinkbox Marketplace. You can purchase software licenses from the marketplace, use your existing licenses, or use them together.

Deadline 10 obtains cloud-based compute resources by managing bids for EC2 Spot Instances, providing you with access to enough low-cost compute capacity to let your imagination run wild! It uses your existing AWS account, tags EC2 instances for tracking, and synchronizes your local assets to the cloud before rendering begins.
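
Under the hood, “managing bids for EC2 Spot Instances” comes down to a Spot Fleet request. A rough sketch of the kind of request configuration involved, as pure data (field names follow EC2’s SpotFleetRequestConfigData schema; the role ARN, AMI ID, prices and tags are placeholders):

```python
# Illustrative Spot Fleet request for a render farm. Field names follow
# EC2's SpotFleetRequestConfigData schema; all values are placeholders.
def render_fleet_config(max_price_per_hour, target_capacity, instance_types):
    return {
        "IamFleetRole": "arn:aws:iam::123456789012:role/fleet-role",  # placeholder
        "SpotPrice": str(max_price_per_hour),  # ceiling bid per instance-hour
        "TargetCapacity": target_capacity,
        "AllocationStrategy": "lowestPrice",
        "LaunchSpecifications": [
            {
                "ImageId": "ami-00000000",  # placeholder rendering AMI
                "InstanceType": itype,
                # Deadline tags instances so they can be tracked:
                "TagSpecifications": [{
                    "ResourceType": "instance",
                    "Tags": [{"Key": "deadline", "Value": "render-node"}],
                }],
            }
            for itype in instance_types
        ],
    }

config = render_fleet_config(0.50, 100, ["c5.4xlarge", "c5.9xlarge"])
# A dict like this would be passed to boto3's
# ec2.request_spot_fleet(SpotFleetRequestConfig=config).
```

Deadline builds and submits a request along these lines for you from the Portal UI, so you never have to hand-craft the configuration yourself.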

A Quick Tour
Let’s take a quick tour of Deadline 10 and see how it makes use of AWS. The AWS Portal is available from the View menu:

The first step is to log in to my AWS account:

Then I configure the connection server, license server, and the S3 bucket that will be used to store rendering assets:

Next, I set up my Spot fleet, establishing a maximum price per hour for each EC2 instance, setting target capacity, and choosing the desired rendering application:

I can also choose any desired combination of EC2 instance types:

When I am ready to render I click on Start Spot Fleet:

This will initiate the process of bidding for and managing Spot Instances. The running instances are visible from the Portal:

I can monitor the progress of my rendering pipeline:

I can stop my Spot fleet when I no longer need it:

Deadline 10 is now available for usage based license customers; a new license is needed for traditional floating license users. Pricing for yearly Deadline licenses has been reduced to $48 annually. If you are already using an earlier version of Deadline, feel free to contact us to learn more about licensing options.

Jeff;

Use shared USB over network remotely with USB Redirector Client

Flexible USB sharing over a network enables others on the same network to get remote access to a single external drive. This is done via software that emulates the contents of the drive on the client side, creating an exact virtual copy of the shared USB device, which then appears as if it were attached directly to the other computers on the network. This article will help you set up and share your USB storage device through USB Redirector Client, a powerful solution for accessing USB devices remotely.

USB Redirector Client Free

USB Redirector is useful software for working with shared USB devices remotely through a LAN, WLAN or the Internet, just as if they were attached to your computer directly. The lightweight version of the application, USB Redirector Client, can be used to redirect devices between Windows-based computers. It’s completely free to use.

To use this application, install USB Redirector on the main computer. This computer will act as the USB server.

Please note that when a USB device is shared, it cannot be used locally, because it is acquired for exclusive use by remote USB clients. To make the device available locally again, unshare it.

When done, install USB Redirector Client on a PC where you need to use USB devices remotely. This will be your USB client.

Now, establish either a direct connection from the USB client to the USB server, or a callback connection from the USB server to the USB client.

From the list of available USB devices appearing on the screen, select the required one and hit the ‘Connect’ button.

Now on a remote PC, you can work with the USB device.

A notable feature of USB Redirector is that it works as a background service, so you do not need to keep the app open all the time.

Once you have configured all the necessary options, you can safely close it. Moreover, you can add certain USB devices to the ‘Exclusion List’ as an extra precaution against virus infections.

You can download USB Redirector Client Free for Windows from its home page. It is free when connecting from a Windows computer.

Yamaha’s smart pianos work with Alexa and teach you how to play

The content below is taken from the original (Yamaha’s smart pianos work with Alexa and teach you how to play), to continue reading please visit the site. Remember to respect the Author & Copyright.

Cutting-edge instruments weren’t among the many things we expected to see at IFA 2017. But Yamaha is using its time in Berlin to showcase its Clavinova line of all-electric smart pianos, which use an iOS device and LEDs above each key to teach you how to play. With the Smart Pianist application, which will also be available on Android next year, you can learn how to play tracks in real time thanks to blue and red lights that come on every time you’re supposed to hit a key. (Red LEDs are placed above white keys, blue above the black ones.) Not only that, but if you can read music, a chord chart is displayed on the iPad in real time for whatever song you’re playing.
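That red/blue colour scheme is simple to sketch. Assuming the keys are identified by standard MIDI note numbers (an assumption; Yamaha hasn't published the app's internals), the black keys are the five pitch classes C#, D#, F#, G# and A#:

```python
# Black keys repeat every octave at these pitch classes (C# D# F# G# A#).
BLACK_PITCH_CLASSES = {1, 3, 6, 8, 10}

def led_colour(midi_note: int) -> str:
    # Red LEDs sit above white keys, blue above black ones.
    return "blue" if midi_note % 12 in BLACK_PITCH_CLASSES else "red"

# Middle C (MIDI 60) is white; the C# beside it (61) is black.
print(led_colour(60), led_colour(61))  # red blue
```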

In terms of Alexa compatibility, Amazon’s virtual assistant isn’t built into the Clavinova smart pianos. Instead, you’re able to trigger different commands by plugging something like an Echo Dot into them. The only caveat is that you’ll need to route that through a MusicCast-powered hub, which is essentially Yamaha’s answer to Apple AirPlay and Google Cast. It’s not the most intuitive process, but it’s still fun to see in action, especially if it works as smoothly as it did during our demo. For instance, you can tell Alexa to play you a song on your piano, in case you’d rather save a few minutes and not browse your music library.

Here’s the other, and arguably main, caveat: Yamaha’s Clavinova CSP models start at $4,000, depending on your piano configuration. And if you’re feeling adventurous, the company also has a Grand Piano that works with a similar iPad app and plays itself for $60,000. It just depends on how much you want to impress.

Follow all the latest news from IFA 2017 here!

Google’s Hollywood ‘interventions’ made on-screen coders cooler

The content below is taken from the original (Google’s Hollywood ‘interventions’ made on-screen coders cooler), to continue reading please visit the site. Remember to respect the Author & Copyright.

Google operates a “Computer Science in Media Team” that stages “interventions” in Hollywood to steer film-makers towards realistic and accurate depictions of what it’s like to work in IT.

The company announced the team in 2015 and gave it the job of “making CS more appealing to a wider audience, by dispelling stereotypes and showcasing positive portrayals of underrepresented minorities in tech.” Google felt the effort was worthy because typical depictions of techies on screen used geeky stereotypes and mostly featured men, “leading to particularly girls and underrepresented boys not seeing themselves in the field.” It also wants more people to hire: like just about every tech company, it struggles to find good ones. But the company has noticed that “Five years after the premiere of the original CSI television series, forensic science majors in the U.S. increased by 50%, with an over index of women.”

The efforts of that team have now been detailed in a study [PDF], Cracking the Code: The Prevalence and Nature of Computer Science Depictions in Media.

The study says Google has worked “to intersect the decision-making process that ultimately leads to the on screen representation of computer science” and “Through engagements with show creators and corporate representatives … has attempted to integrate computer science portrayals into TV movies and ongoing series that deviate from stereotypes and showcase diversity.”

The study finds those efforts have mostly worked. While computer science rarely makes it into films and telly, “The sample of Google influenced content (5.9%, n=61 of 1,039) had a higher percentage of characters engaging in computer science than a matched sample of programming (.5%, n=4 of 883).” While the study finds that a character involved in computer science is still overwhelmingly likely to be a white man, content that Google influenced featured more women than shows it didn’t engage with.
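Those quoted rates can be reproduced from the raw counts the study reports:

```python
# 61 of 1,039 characters in Google-influenced content engaged in CS,
# versus 4 of 883 in the matched sample.
google_influenced = 61 / 1039
matched_sample = 4 / 883

print(f"{google_influenced:.1%} vs {matched_sample:.1%}")  # 5.9% vs 0.5%
```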

As it happens, El Reg may well already have reported on the Team’s work: back in 2015 we spotted an episode of the made-for-kids cartoon The Amazing World of Gumball, a show the study says Google has “advised.” In the episode we reported on, a character says the following:

I bypassed the storage controller, tapped directly in to the VNX array head, decrypted the nearline SAS disks, injected the flash drivers into the network’s FabricPath before disabling the IDF, routed incoming traffic through a bunch of offshore proxies, accessed the ESXi server cluster in the prime data center, and disabled the inter-VSAN routing on the layer-3.

The authors of the study think Google still has work to do. The content it influenced did result in shows where women were “praised for intelligence rather than attractiveness, and were more often rewarded for their CS activities than males”. But both the content Google influenced and the shows it didn’t touch “still primarily depict White, male characters engaging in CS, who are often stereotypically attired. The nature of these depictions also reflects CS stereotypes, namely that friendships are primarily with other CS individuals and a lack of children or romantic relationships.”

Overall, the study’s authors declare the effort worthwhile and say Google’s efforts have been well-received in Hollywood, even if stereotypes persist and CS remains something seldom depicted by the entertainment industry.

Shows Google influenced include Miles from Tomorrowland, The Fosters, Silicon Valley, Halt and Catch Fire, The Amazing World of Gumball, The Powerpuff Girls and Ready Jet Go!. ®

Sponsored:
The Joy and Pain of Buying IT – Have Your Say

UK.gov unveils six areas to pilot full-fat fibre, and London ain’t on the list

The content below is taken from the original (UK.gov unveils six areas to pilot full-fat fibre, and London ain’t on the list), to continue reading please visit the site. Remember to respect the Author & Copyright.

The government has revealed the first six areas in Blighty to trial speeds of 1Gbps in a £10m pilot, as previously reported by The Register.

The six areas are: Aberdeen and Aberdeenshire; West Sussex; Coventry and Warwickshire; Bristol, Bath and North East Somerset; West Yorkshire; and Greater Manchester.

In August, the Department for Digital, Culture, Media and Sport had told industry it would trial a scheme allowing businesses to bid for vouchers worth up to £3,000 for “gigabit-capable” connectivity, and would pay the ongoing line rental costs.

That will most likely be delivered by fibre, but not exclusively so, the documents said.

The model is similar to that of the £100m broadband connection voucher scheme for speeds of more than 30Mbps in 2013-15, which was re-scoped after initially experiencing poor take-up from small businesses.

The latest scheme will be funded via the £200m “full-fibre” investment pot announced in the Spring budget and intended to leverage private sector investment in full-fibre broadband. The remaining £190m is due to be spent by 2020/21.

Exchequer Secretary to the Treasury Andrew Jones MP said: “Full-fibre connections are the gold standard and we are proud to announce today the next step to get Britain better connected.”

Minister of State for Digital Matt Hancock MP said: “We want to see more commercial investment in the gold-standard connectivity that full fibre provides, and these innovative pilots will help create the right environment for this to happen.

“To keep Britain as the digital world leader that it is, we need to have the right infrastructure in place to allow us to keep up with the rapid advances in technology now and in the future.” ®


Samsung’s ‘AI-powered’ washer is just trying to save you time

The content below is taken from the original (Samsung’s ‘AI-powered’ washer is just trying to save you time), to continue reading please visit the site. Remember to respect the Author & Copyright.

IFA 2017 isn’t all about smartphones, smartwatches and cute droids. The event is also a chance for companies to showcase their latest innovations in home appliances. Samsung did its part this year with the WW8800M washer, which sports technology called QuickDrive that promises to complete a full load of laundry in just 39 minutes (typically it’s about 70). The company says it’s able to do this without compromising washing performance, energy efficiency or fabric care, something that will matter deeply to people who are extra conscious of how they do their laundry. Oh, and it says AI is involved.

Samsung is betting heavily on the "artificial intelligence" powers of its WW8800M to make laundry day less of a chore. The washing machine pairs with an app dubbed Q-rator, which offers modes including Laundry Planner, Laundry Recipe and HomeCare Wizard. The first two features let you do things like pick your desired cycle through the application and adjust the temperature and number of spins. You can also tell the virtual assistant what type of garments you plan to wash, like a shirt or a sweater, and it will suggest the best cycle based on the info you type in. HomeCare Wizard, meanwhile, monitors the WW8800M remotely and alerts you if it’s having any issues.

While Samsung’s main goal is to save you time washing your loads, these options could help you take better care of your clothes, all with just a couple of taps in an app. We don’t know if we’d agree with Samsung that the WW8800M is "AI-powered," as the press release suggests, but that doesn’t mean it isn’t smarter than its previous Wi-Fi models.

Unfortunately, Samsung didn’t reveal any pricing or availability details here in Berlin, so we’ll have to wait to judge it by its price.


OpenStack Developer Mailing List Digest August 26th – September 1st

The content below is taken from the original (OpenStack Developer Mailing List Digest August 26th – September 1st), to continue reading please visit the site. Remember to respect the Author & Copyright.

Successbot Says!

  • ttx: Pike is released! [21]
  • ttx: Release automation made Pike release process the smoothest ever [22]

 

PTG Planning

  • Monasca (virtual)[1]
  • Vitrage (virtual)[2]
  • General Info & Price Hike[3]
    • Price Goes up to $150 USD
    • Happy Hour from 5-6pm on Tuesday
    • Event Feedback during Lunch on Thursday
  • Queens Goal Tempest Plugin Split [4]
  • Denver City Guide [5]
    • Please add to and peruse
  • ETSI NFV workshop[6][7]

 

Summaries

  • TC Update [8]
  • Placement/Resource Providers Update 34 [9]

 

Updates

  • Pike is official! [10]
  • Outreachy Call for Mentors and Sponsors! [11]
    • More info here[12]
    • Next round runs December 5th to March 5th
  • Libraries published to PyPI with YYYY.X.Z versions[13]
    • During Kilo, when the Neutron vendor decomposition happened, the release version was set to 2015.1.0 for basically all of the networking projects
    • The main issue is that networking-hyperv == 2015.1.0 is currently on PyPI, and whenever someone upgrades through pip, it ‘upgrades’ to 2015.1.0 because it is considered the latest version
    • Should that version be unpublished?
    • Three options[14]
      • Unpublish: simplest, but goes against PyPI’s policy of never unpublishing
        • +1 from tonyb, who made a rough list of others to unpublish that need to be confirmed with PTLs before passing to infra to unpublish[15]
      • Rename: a bunch of work for downstreams, but cleaner than unpublishing
      • Reversion: start new versions at 3000 or something, but very hacky and ugly
    • dhellman, ttx, and fungi think that deleting it from PyPI is the simplest route, though not the typically recommended way of handling things
  • Removing Screen from devstack-RSN[16]
    • Work to make devstack have only a single execution mode (the same between automated QA and local use) is almost done!
    • Want to merge before PTG
    • Test your devstack plugins against this patch before it gets merged
    • Patch [17]
  • Release Countdown for week R+1 and R+2[18]
    • Still have release trailing deliverables to take care of
    • Need to post their Pike final release before the cycle-trailing release deadline (September 14th)
    • Join #openstack-release if you have questions
    • ttx passes the RelMgmt mantle to smcginnis
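The networking-hyperv problem above is just version ordering: release segments compare numerically, so a date-based 2015.1.0 outranks any normal major version and pip treats it as the newest release. A minimal sketch of that comparison (real pip follows the fuller PEP 440 rules):

```python
def release_tuple(version: str) -> tuple:
    # Compare versions segment by segment, numerically.
    return tuple(int(part) for part in version.split("."))

# A date-based version "beats" every ordinary semantic version.
print(release_tuple("2015.1.0") > release_tuple("16.0.0"))  # True
```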

 

Pike Retrospectives

  • Nova [19]
  • QA [20]

 

[1] http://bit.ly/2wr8I1X

[2] http://bit.ly/2vShGBr

[3] http://bit.ly/2wrbjcc

[4] http://bit.ly/2vShouy

[5] http://bit.ly/2wqRykF

[6] http://bit.ly/2vSlnqT

[7] http://bit.ly/2wqZYsy

[8] http://bit.ly/2vRODy5

[9] http://bit.ly/2wqQoFO

[10]  http://bit.ly/2vRJi9T

[11] http://bit.ly/2vRZfNv

[12] http://bit.ly/1TZiL2T

[13] http://bit.ly/2vShFxn

[14] http://bit.ly/2wqZYZA

[15] http://bit.ly/2vROo6e

[16] http://bit.ly/2wra7We

[17] http://bit.ly/2vSePsl

[18] http://bit.ly/2wr5dsb

[19] http://bit.ly/2vS2lkn

[20] http://bit.ly/2wqRdPe

[21] http://bit.ly/2vS4qwZ

[22] http://bit.ly/2wqnZ2H

I wrote a Snake game in PowerShell. (Requires Version 5.1)

The content below is taken from the original (I wrote a Snake game in PowerShell. (Requires Version 5.1)), to continue reading please visit the site. Remember to respect the Author & Copyright.

http://bit.ly/2goSJvC

Cummins unveils an electric big rig weeks before Tesla

The content below is taken from the original (Cummins unveils an electric big rig weeks before Tesla), to continue reading please visit the site. Remember to respect the Author & Copyright.

Sorry, Tesla, but someone just stole the thunder from the electric big rig you were planning to unveil this fall. The engine giant Cummins has unveiled a concept semi truck, the AEOS, that runs entirely on the power of an electric motor and a 140kWh battery pack. It’s roughly as powerful as a 12-liter fossil fuel engine and could haul 44,000 pounds of cargo, just without the emissions or rampant fuel costs of a conventional truck. There’s speedy 1-hour charging, and Cummins is even looking at solar panels on the trailer to extend range. It’s a promising offering, although Elon Musk and crew might not lose too much sleep knowing the limitations.

For one thing, range is a sore point. You’re looking at a modest 100-mile range with that 140kWh pack. That’s fine for inter-city deliveries, but it won’t cut the mustard for longer trips. And while there’s talk of extending that distance to 300 miles with extra packs, that would only make it competitive with Tesla’s anticipated 200- to 300-mile range.
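A quick back-of-the-envelope check on those figures, using only the numbers quoted above:

```python
import math

pack_kwh = 140          # battery pack capacity
range_miles = 100       # quoted range on one pack

kwh_per_mile = pack_kwh / range_miles                    # 1.4 kWh per mile
packs_needed = math.ceil(300 * kwh_per_mile / pack_kwh)  # packs for 300 miles

print(kwh_per_mile, packs_needed)  # 1.4 3
```

So the quoted 300-mile figure implies roughly three packs' worth of battery on board.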

And more importantly, this is a concept, not a production vehicle ready to roll off the manufacturing line. There should be a production model in a couple of years, according to CNET, but that gives Tesla plenty of time to get its own EV semi on the road. Not that we’re going to complain about both companies having a fighting chance — more electric big rigs means more competition and fewer polluting trucks.

Via: IndyStar, CNET

Source: Cummins (1), (2)

New – Amazon EC2 Elastic GPUs for Windows

The content below is taken from the original (New – Amazon EC2 Elastic GPUs for Windows), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today we’re excited to announce the general availability of Amazon EC2 Elastic GPUs for Windows. An Elastic GPU is a GPU resource that you can attach to your Amazon Elastic Compute Cloud (EC2) instance to accelerate the graphics performance of your applications. Elastic GPUs come in medium (1GB), large (2GB), xlarge (4GB), and 2xlarge (8GB) sizes and are lower cost alternatives to using GPU instance types like G3 or G2 (for OpenGL 3.3 applications). You can use Elastic GPUs with many instance types allowing you the flexibility to choose the right compute, memory, and storage balance for your application. Today you can provision elastic GPUs in us-east-1 and us-east-2.

Elastic GPUs start at just $0.05 per hour for an eg1.medium. A nickel an hour. If we attach that Elastic GPU to a t2.medium ($0.065/hour) we pay a total of less than 12 cents per hour for an instance with a GPU. Previously, the cheapest graphical workstation (G2/3 class) cost 76 cents per hour. That’s over an 80% reduction in the price for running certain graphical workloads.
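The arithmetic behind that claim checks out:

```python
eg1_medium = 0.05   # Elastic GPU, per hour
t2_medium = 0.065   # instance, per hour
g2_class = 0.76     # cheapest GPU workstation instance previously, per hour

combined = eg1_medium + t2_medium
reduction = 1 - combined / g2_class

print(f"${combined:.3f}/hr, {reduction:.0%} cheaper")  # $0.115/hr, 85% cheaper
```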

When should I use Elastic GPUs?

Elastic GPUs are best suited to applications that require a small or intermittent amount of additional GPU power for graphics acceleration and that support OpenGL. Elastic GPUs support up to and including the OpenGL 3.3 API standard, with expanded API support coming soon.

Elastic GPUs are not part of the hardware of your instance. Instead they’re attached through an elastic GPU network interface in your subnet which is created when you launch an instance with an Elastic GPU. The image below shows how Elastic GPUs are attached.

Since Elastic GPUs are network attached it’s important to provision an instance with adequate network bandwidth to support your application. It’s also important to make sure your instance security group allows traffic on port 2007.

Any application that can use the OpenGL APIs can take advantage of Elastic GPUs, so Blender, Google Earth, Siemens Solid Edge, and more could all run with Elastic GPUs. Even Kerbal Space Program!

Ok, now that we know when to use Elastic GPUs and how they work, let’s launch an instance and use one.

Using Elastic GPUs

First, we’ll navigate to the EC2 console and click Launch Instance. Next, we’ll select a Windows AMI such as “Microsoft Windows Server 2016 Base” and choose an instance type. Then we’ll make sure we open the “Elastic GPU” section and allocate an eg1.medium (1GB) Elastic GPU.

We’ll also include some userdata in the advanced details section. We’ll write a quick PowerShell script to download and install our Elastic GPU software.


<powershell>
# Log everything this script does for later troubleshooting.
Start-Transcript -Path "C:\egpu_install.log" -Append
# Download the Elastic GPU installer.
(new-object net.webclient).DownloadFile('http://bit.ly/2wRAjuj', 'C:\egpu.msi')
# Install it silently, logging the MSI output.
Start-Process "msiexec.exe" -Wait -ArgumentList "/i C:\egpu.msi /qn /L*v C:\egpu_msi_install.log"
# Put the Elastic GPU manager on the system PATH.
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\Amazon\EC2ElasticGPUs\manager\", [EnvironmentVariableTarget]::Machine)
Restart-Computer -Force
</powershell>

This software sends all OpenGL API calls to the attached Elastic GPU.

Next, we’ll double-check that the security group has TCP port 2007 exposed to the VPC so the Elastic GPU can connect to the instance. Finally, we’ll click Launch and wait for the instance and Elastic GPU to provision. The best way to manage this rule is to create a separate security group that you can attach to the instance.

You can see an animation of the launch procedure below.

Alternatively we could have launched on the AWS CLI with a quick call like this:

$ aws ec2 run-instances --elastic-gpu-specification Type=eg1.2xlarge \
--image-id ami-1a2b3c4d \
--subnet subnet-11223344 \
--instance-type r4.large \
--security-groups "default" "elasticgpu-sg"

then we could have followed the Elastic GPU software installation instructions here.

We can now see our Elastic GPU is humming along and attached by checking out the Elastic GPU status in the taskbar.

We welcome any feedback on the service and you can click on the Feedback link in the bottom left corner of the GPU Status Box to let us know about your experience with Elastic GPUs.

Elastic GPU Demonstration

Ok, so we have our instance provisioned and our Elastic GPU attached. My teammates here at AWS wanted me to talk about the amazingly wonderful 3D applications you can run, but when I learned about Elastic GPUs the first thing that came to mind was Kerbal Space Program (KSP), so I’m going to run a quick test with that. After all, if you can’t launch Jebediah Kerman into space then what was the point of all of that software? I’ve downloaded KSP and added the launch parameter of -force-opengl to make sure we’re using OpenGL to do our rendering. Below you can see my poor attempt at building a spaceship – I used to build better ones. It looks pretty smooth considering we’re going over a network with a lossy remote desktop protocol.

I’d show a picture of the rocket launch but I didn’t even make it off the ground before I experienced a rapid unscheduled disassembly of the rocket. Back to the drawing board for me.

In the meantime, I can check my Amazon CloudWatch metrics to see how much GPU memory I used during my brief game.

Partners, Pricing, and Documentation

To continue to build out great experiences for our customers, our 3D software partners like ANSYS and Siemens are looking to take advantage of the OpenGL APIs on Elastic GPUs, and are currently certifying Elastic GPUs for their software. You can learn more about our partnerships here.

You can find information on Elastic GPU pricing here. You can find additional documentation here.

Now, if you’ll excuse me I have some virtual rockets to build.

Randall

Automate.io is a free automation tool and IFTTT alternative

The content below is taken from the original (Automate.io is a free automation tool and IFTTT alternative), to continue reading please visit the site. Remember to respect the Author & Copyright.

Nowadays everyone wants to work smarter, and automation tools are a relatively new way to do it. IFTTT has been available for quite a while, and other tools like Microsoft Flow and Zapier were introduced later. If you like automation tools in your daily life, let me introduce you to Automate.io, which is comparatively new.

Automate.io Free Automation Tool

Since the tool is relatively new, it does not have as many app integrations to offer as Microsoft Flow or IFTTT do. However, the developers have been adding new apps frequently. The tool offers a free version, but it has some limitations.

With the free version, you will be able to:

  • Create only five bots. In other words, you can execute up to 5 tasks with the free account.
  • Those five tasks can be executed up to 250 times every month.
  • You need to wait 5 minutes after executing a task before running another one.
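Those limits are easy to model. A hypothetical sketch (not Automate.io code) of an account object that enforces the bot cap, the monthly quota and the cooldown:

```python
import datetime

MAX_BOTS = 5
RUNS_PER_MONTH = 250
COOLDOWN = datetime.timedelta(minutes=5)

class FreeTier:
    def __init__(self):
        self.bots = []
        self.runs_this_month = 0
        self.last_run = None

    def add_bot(self, name: str) -> bool:
        # At most five bots on a free account.
        if len(self.bots) >= MAX_BOTS:
            return False
        self.bots.append(name)
        return True

    def run(self, now: datetime.datetime) -> bool:
        # Enforce the 250-runs-per-month cap and the 5-minute cooldown.
        if self.runs_this_month >= RUNS_PER_MONTH:
            return False
        if self.last_run is not None and now - self.last_run < COOLDOWN:
            return False
        self.runs_this_month += 1
        self.last_run = now
        return True

tier = FreeTier()
print(tier.add_bot("tweets-to-sheet"))  # True
```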

Moreover, the free account holders would get access to the following apps only:

  • Asana
  • Basecamp
  • Capsule CRM
  • ClearBit
  • Constant Contact
  • Drip
  • Dropbox
  • Eventbrite
  • Facebook
  • Facebook pages
  • Gmail
  • Google Calendar
  • Google Contacts
  • Google Drive
  • Google Sheets
  • Hubspot
  • Intercom
  • MailChimp
  • Slack
  • And a few more.

If you can cope with all these limitations, you can go ahead and sign up for an account. The important thing is, you must have a @company.com email ID. That means @gmail.com, @hotmail.com, @outlook.com, @yahoo.com and similar addresses won’t work, and that is a major disadvantage in our opinion.

After signing up, you need to select some apps to get to the next screen, where you can create a new bot. After completing the requirements, head over to the “Bots” tab and click on “Create a Bot.”

Automate.io is a free automation tool and IFTTT alternative

Now, you need to select a Trigger app and an Action app. Click on “Select Trigger app” button > Choose an app > Authorize Automate.io to access your account.

Based on the app, the trigger will be different. Whichever app you choose, you must select a trigger.

Automate.io Free Automation Tool

After this, you can head over to the Action app section and choose an action that you need to execute. Again, you have to select an action from the given list. After selecting everything, make sure you have saved your changes.

Next, you need to turn the bot on, since it is off by default. To do so, find the toggle button.

Automate.io is a free automation tool and IFTTT alternative

After activation, you will get an option to test the bot you just created. In case you want to delete any bot, head over to the “Bots” tab, expand the corresponding drop-down menu, and select “Delete.”

Automate.io is a free automation tool and IFTTT alternative

You can make changes to the Bot as well, by selecting the “Edit” option.

The advantage of this tool is that you can add multiple actions to a single trigger. For example, if you want to save all tweets in a Google Spreadsheet and also send them to Slack, you can combine both into one Bot. To do the same in IFTTT or Microsoft Flow, you would need to create separate bots.
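The fan-out idea is straightforward: one trigger dispatches to every action attached to the bot. A hypothetical sketch (illustrative names only, not Automate.io's API):

```python
class Bot:
    """One trigger, many actions: firing the trigger runs every action."""

    def __init__(self, trigger_name: str):
        self.trigger_name = trigger_name
        self.actions = []

    def add_action(self, action):
        self.actions.append(action)
        return self  # allow chaining

    def fire(self, payload):
        # Every attached action receives the same trigger payload.
        for action in self.actions:
            action(payload)

log = []
bot = Bot("new_tweet")
bot.add_action(lambda tweet: log.append(("sheet", tweet)))
bot.add_action(lambda tweet: log.append(("slack", tweet)))
bot.fire("hello world")
print(log)  # [('sheet', 'hello world'), ('slack', 'hello world')]
```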

Head over to the automate.io website if you would like to check it out.

Lost Alan Turing letters found in university filing cabinet

The content below is taken from the original (Lost Alan Turing letters found in university filing cabinet), to continue reading please visit the site. Remember to respect the Author & Copyright.

A huge batch of letters penned by British cryptographer Alan Turing has been found at the University of Manchester. Professor Jim Miles was tidying a storeroom when he discovered the correspondence in an old filing cabinet. At first he assumed the orange folder, which had Turing’s name on the front, had been emptied and re-used by another member of staff. But a closer look revealed 148 documents, including a letter sent by GCHQ, a draft version of a BBC radio programme about artificial intelligence, and invitations to lecture at some top universities in America.

Turing worked at the University of Manchester from 1948, first as a Reader in the mathematics department and later as the Deputy Director of the Computing Laboratory. These jobs followed his pivotal work with the Government Code and Cypher School during the Second World War. At Bletchley Park, he spearheaded a team of cryptographers that helped the Allies unravel various Nazi messages, including those protected by the Enigma code. The newly discovered documents date from early 1949 until his death in June 1954. At this time, Turing’s work on Enigma was still a secret, which is why it’s rarely mentioned in the correspondence.

None of the letters contain previously unknown information about Turing. They do provide new detail, however, about his life at Manchester and how he worked at the University. They also shed light on his personality — responding to a conference invitation in the US, he said boldly: "I would not like the journey, and I detest America." The documents also reference his work on morphogenesis, the study of biological life and why it takes a particular form, AI, computing and mathematics. "I was astonished such a thing had remained hidden out of sight for so long," Miles said.

All of the letters have now been sorted, catalogued and stored by James Peters at the University’s library. "This is a truly unique find," Miles said. "Archive material relating to Turing is extremely scarce, so having some of his academic correspondence is a welcome and important addition to our collection." You can now search for and view all 148 documents online.

None of the correspondence references his personal life. Turing was arrested in 1952 for homosexual acts and chose chemical castration over time in prison. In 1954, he died through cyanide poisoning, which an inquest later determined as suicide. The British government officially apologised for his treatment in 2009, before a posthumous pardon was granted in 2013. Last October, the UK government introduced the "Alan Turing Law," awarding posthumous pardons to thousands of gay and bisexual men previously convicted for consensual same-sex relationships.

Via: International Business Times

Source: The University of Manchester

Watch 1,069 Dancing Robots Claim New World Record

The content below is taken from the original (Watch 1,069 Dancing Robots Claim New World Record), to continue reading please visit the site. Remember to respect the Author & Copyright.

A Chinese toy maker has broken the Guinness World Record for “most robots dancing simultaneously.” Yes: while you toil away at your desk for eight hours a day, five days a week, there are […]

The post Watch 1,069 Dancing Robots Claim New World Record appeared first on Geek.com.

Raspberry Pi HAT spins up RFID and NFC

The content below is taken from the original (Raspberry Pi HAT spins up RFID and NFC), to continue reading please visit the site. Remember to respect the Author & Copyright.

Eccel’s rugged “Raspberry Pi-B1” add-on provides an RFID B1 module for enabling short-range RFID or NFC communications at 13.56MHz. Eccel Technology, also known as IB Technology, has launched the “Raspberry Pi Hat RFID/NFC Board,” also known as the “Raspberry Pi-B1.” The HAT-compatible add-on board has gone on […]

R-Comp release !DualHead

The content below is taken from the original (R-Comp release !DualHead), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you own a Titanium-based machine, you may have noticed that it has two video output ports. If you plug a monitor into the right-hand port (as you look at the machine from the back), you will get the chemical details of the element titanium on your second screen. Interesting, but not very practical…

Now R-Comp have released !DualHead, which allows their Titanium-based TiMachine to display RISC OS across two screens (heads). In this article, we will get it up and running, with a later look at how well it works. Let us see if two heads are better than one…

The application is a free download from the R-Comp website (you will need your username and password to access it). It consists of some updates for !Boot, a very helpful !ReadMe, and the !DualHead application itself.

I read the !ReadMe, updated !Boot and rebooted my machine. Nothing changes until you run the !DualHead software and press Space. If anything goes wrong, the software is well designed to revert to the default single display.

You now have one RISC OS display split across two screens (with a really long iconbar across the bottom). Windows can also be split across screens, as you can see from the alert message. This can take some getting used to, along with alerts and dialog boxes popping up on the screen you were not expecting.

As you can see, the software is very easy to set up. Next time we will delve into how well it works…

No comments in forum