A city covered in trees will fight air pollution in China

The content below is taken from the original (A city covered in trees will fight air pollution in China), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s easy to find buildings laced with greenery in order to reduce their CO2 footprints. But what about an entire city? That’s on its way. Construction has started on Liuzhou Forest City, a 30,000-person urban development where every building will be covered in pollution-reducing plants (over 1 million of them, in fact). They’ll also rely on geothermal energy for air conditioning and pack solar panels to collect their own energy. Logically, the transportation network will be green as well. It’ll revolve around electric cars and a central rail line that links the experimental space to the city of Liuzhou.

If all goes well, the project will absorb nearly 10,000 tons of CO2 (and 57 tons of other pollutants) on a yearly basis, and pump out 900 tons of oxygen in the process. This isn’t some far-off dream, either, as Stefano Boeri’s architecture firm expects to complete the Forest City by 2020.

Just don’t count on these eco-friendly cities becoming ubiquitous. Even if municipalities are fine with retrofitting existing buildings, they’ll still need ideal climates to support all that flora. There’s a good reason why Boeri’s team is setting up in southern China — it’s easy to maintain plant life in an area which rarely deals with freezing temperatures. Nonetheless, this hints at a future where entire population centers fight air pollution and leave a relatively tiny mark on the environment.

Via: Designboom

Source: Stefano Boeri Architetti

Scratch 2.0: all-new features for your Raspberry Pi

The content below is taken from the original (Scratch 2.0: all-new features for your Raspberry Pi), to continue reading please visit the site. Remember to respect the Author & Copyright.

We’re very excited to announce that Scratch 2.0 is now available as an offline app for the Raspberry Pi! This new version of Scratch allows you to control the Pi’s GPIO (General Purpose Input and Output) pins, and offers a host of other exciting new features.

Offline accessibility

The most recent update to Raspbian includes the app, which makes Scratch 2.0 available offline on the Raspberry Pi. This is great news for clubs and classrooms, where children can now use Raspberry Pis instead of connected laptops or desktops to explore block-based programming and physical computing.

Controlling GPIO with Scratch 2.0

As with Scratch 1.4, Scratch 2.0 on the Raspberry Pi allows you to create code to control and respond to components connected to the Pi’s GPIO pins. This means that your Scratch projects can light LEDs, sound buzzers and use input from buttons and a range of sensors to control the behaviour of sprites. Interacting with GPIO pins in Scratch 2.0 is easier than ever before, as text-based broadcast instructions have been replaced with custom blocks for setting pin output and getting current pin state.

Scratch 2.0 GPIO blocks

To add GPIO functionality, first click ‘More Blocks’ and then ‘Add an Extension’. You should then select the ‘Pi GPIO’ extension option and click OK.

Scratch 2.0 GPIO extension

In the ‘More Blocks’ section you should now see the additional blocks for controlling and responding to your Pi GPIO pins. To give an example, the entire code for repeatedly flashing an LED connected to GPIO pin 2 is now:

Flashing an LED with Scratch 2.0

To react to a button connected to GPIO pin 2, simply set the pin as input, and use the ‘gpio (x) is high?’ block to check the button’s state. In the example below, the Scratch cat will say “Pressed” only when the button is being held down.

Responding to a button press on Scratch 2.0
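
For readers who also use text-based languages on the Pi, a rough Python equivalent of these two examples, using the GPIO Zero library included in Raspbian, might look like the sketch below. The pin numbers, timing, and variable names are illustrative only and are not taken from the Scratch examples above.

from gpiozero import LED, Button
from time import sleep

led = LED(2)        # LED wired to GPIO pin 2 (illustrative pin choice)
button = Button(3)  # push button wired to GPIO pin 3 (illustrative pin choice)

while True:
    led.toggle()              # flash the LED on and off, as in the Scratch loop
    if button.is_pressed:     # roughly mirrors the 'gpio (x) is high?' check
        print("Pressed")
    sleep(0.5)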

Cloning sprites

Scratch 2.0 also offers some additional features and improvements over Scratch 1.4. One of the main new features of Scratch 2.0 is the ability to create clones of sprites. Clones are instances of a particular sprite that inherit all of the scripts of the main sprite.

The scripts below show how cloned sprites are used — in this case to allow the Scratch cat to throw a clone of an apple sprite whenever the space key is pressed. Each apple sprite clone then follows its ‘when I start as a clone’ script.

Cloning sprites with Scratch 2.0

The cloning functionality avoids the need to create multiple copies of a sprite, for example multiple enemies in a game or multiple snowflakes in an animation.

Custom blocks

Scratch 2.0 also allows the creation of custom blocks, allowing code to be encapsulated and used (possibly multiple times) in a project. The code below shows a simple custom block called ‘jump’, which is used to make a sprite jump whenever it is clicked.

Custom 'jump' block on Scratch 2.0

These custom blocks can also optionally include parameters, allowing further generalisation and reuse of code blocks. Here’s another example of a custom block that draws a shape. This time, however, the custom block includes parameters for specifying the number of sides of the shape, as well as the length of each side.

Custom shape-drawing block with Scratch 2.0

The custom block can now be used with different numbers provided, allowing lots of different shapes to be drawn.

Drawing shapes with Scratch 2.0
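
For comparison, a parameterised custom block like this maps naturally onto a function in a text-based language. Here is a minimal Python sketch using the standard turtle module; the function name and the example calls are illustrative, not taken from the original post.

import turtle

def draw_shape(sides, length):
    # Draw a regular polygon, mirroring a custom block with
    # 'number of sides' and 'length of each side' parameters.
    for _ in range(sides):
        turtle.forward(length)
        turtle.right(360 / sides)

draw_shape(3, 100)   # triangle
draw_shape(6, 60)    # hexagon
turtle.done()        # keep the drawing window open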

Peripheral interaction

Another feature of Scratch 2.0 is the addition of code blocks to allow easy interaction with a webcam or a microphone. This opens up a whole new world of possibilities, and for some examples of projects that make use of this new functionality see Clap-O-Meter which uses the microphone to control a noise level meter, and a Keepie Uppies game that uses video motion to control a football. You can use the Raspberry Pi or USB cameras to detect motion in your Scratch 2.0 projects.

Other new features include a vector image editor and a sound editor, as well as lots of new sprites, costumes and backdrops.

Update your Raspberry Pi for Scratch 2.0

Scratch 2.0 is available in the latest Raspbian release, under the ‘Programming’ menu. We’ve put together a guide for getting started with Scratch 2.0 on the Raspberry Pi online (note that GPIO functionality is only available via the desktop version). You can also try out Scratch 2.0 on the Pi by having a go at a project from the Code Club projects site.

As always, we love to see the projects you create using the Raspberry Pi. Once you’ve upgraded to Scratch 2.0, tell us about your projects via Twitter, Instagram and Facebook, or by leaving us a comment below.

The post Scratch 2.0: all-new features for your Raspberry Pi appeared first on Raspberry Pi.

The No-Frills Way to Watermark Memos and Track Leaks

The content below is taken from the original (The No-Frills Way to Watermark Memos and Track Leaks), to continue reading please visit the site. Remember to respect the Author & Copyright.

Let’s say you need to send a private message to a group of people, but you’re afraid one of them will leak the message elsewhere, and you won’t know who. Fast Forward Labs has a rough-and-ready solution that will expose anyone who publicly copies and pastes your message, without letting them know they’ve been caught.

Wanna write a Cloudflare app? No? Would $100m change your mind?

The content below is taken from the original (Wanna write a Cloudflare app? No? Would $100m change your mind?), to continue reading please visit the site. Remember to respect the Author & Copyright.

Internet traffic wrangler Cloudflare is opening up its massive global network to third-party developers.

The network capacity and services provider says that its Cloudflare Apps Platform will let developers write code for web apps that Cloudflare customers can then purchase and embed on their sites.

Cloudflare said that it operates 115 data centers around the world and provides services for 6 million sites. The company hopes that it can use that customer base to support an early crop of developers, who will in turn create apps that will be used as selling points to bring even more customers to Cloudflare.

The program will include the app platform APIs and marketplace, as well as payment services for developers and a finance site that will connect developers with VCs to back their start-ups.

To sweeten the pot, Cloudflare says it has brought in three of its own early investors to sponsor a $100m developer fund that will go toward companies that write code for the Cloudflare network.

The aim, said CEO Matthew Prince, is to open up Cloudflare’s network for third-party developers in the way that companies like Salesforce (with Force.com) and Apple (with the App Store) have done with their platforms.

“I think this has always been in the direction of what we have been doing,” Prince told The Register.

“The core asset that Cloudflare has is a global network and enough scale to run it profitably, but we don’t have a monopoly on ideas to run across that network.”

Prince said that while the early apps will likely be lightweight code to do things like embed share buttons or analyze traffic, eventually more sophisticated apps could make use of Cloudflare to perform compute tasks on the network edge rather than locally or in a data center.

“Over time what you will see is applications being developed to have a much broader scope,” Prince said.

“The future is things that make sense to run close to the client that you don’t want to run in a data center, but don’t make sense to run on the device itself either.”

In the meantime, Cloudflare has enlisted the likes of Pinterest, Oracle, and Zendesk, which have developed widgets for their respective services that customers will be able to drop into their own sites. The widgets are among the 50 apps Cloudflare is using to launch the service. ®

New Turbonomic Integration with Cisco Tetration Analytics to Enable Network-Aware Automated Placement of Workloads

The content below is taken from the original (New Turbonomic Integration with Cisco Tetration Analytics to Enable Network-Aware Automated Placement of Workloads), to continue reading please visit the site. Remember to respect the Author & Copyright.

Turbonomic Logo 

Today at Cisco Live, Cisco’s annual IT and communications conference, Turbonomic announced a new integration roadmap with Cisco Tetration Analytics to help customers activate their hybrid cloud by leveraging deep analytics intelligence to automatically place workloads, whether on-premises or in the cloud.

As customers turn to hybrid cloud to meet the demands of a digital business, their applications are becoming more distributed as they adopt new architectures. This is causing the network to play an even larger role in application quality of service. The complexity of these environments requires automated self-managing software that works in real-time. Turbonomic helps organizations make the right resource allocation decisions in real-time, whether on-premises or in the cloud. Cisco Tetration provides organizations with complete visibility across everything in the data center in real-time – every packet, every flow, and every speed.

By integrating with Tetration’s powerful telemetry and analytics, the Turbonomic platform will be able to leverage flow patterns, which automatically reduces network contention challenges. This enables placement decisions for workloads that are network aware, which improves performance. Additional use cases around application dependency mapping and cloud migration are also on the roadmap, which will simplify a customer’s ability to activate their hybrid cloud and assure performance, lower cost and maintain continuous compliance.

“We are very excited to be working with Cisco Tetration to deliver customers with an even more powerful platform to activate their hybrid cloud journey,” said Shmuel Kliger, President and Founder of Turbonomic. “The combination of making the right resource allocation decisions in real-time, while leveraging Tetration’s real-time understanding of the communication flows between applications, will give customers the ability to automatically determine the best placement for workloads whether on-premises or in the cloud – which is one of the more challenging tasks that organizations face as they journey to the cloud.”

To learn more about Turbonomic visit booth #4633 at Cisco Live.

Nvidia, ZF and Hella will team to ensure self-driving cars meet safety standards

The content below is taken from the original (Nvidia, ZF and Hella will team to ensure self-driving cars meet safety standards), to continue reading please visit the site. Remember to respect the Author & Copyright.

Auto industry suppliers ZF and Hella are welcoming a third strategic partner to their effort to bring self-driving systems to market for OEM clients. Nvidia is joining the two to help incorporate its own in-car AI technology in a self-driving system that meets the New Car Assessment Program (NCAP) certification for passenger vehicles, and to also address safety requirements for commercial and off-road vehicles.

Hella builds camera systems, radar systems and other related software systems for ADAS-related tech, and ZF is one of the leading tier 1 suppliers in the car industry. Nvidia’s tech will be used to help bring systems to market that, starting with Level 3 autonomy (drivers can give over control but must be ready and able to resume manual driving at any moment) are properly certified according to the long-standing industry NCAP standards, which have governed consumer vehicles since their introduction in the 1970s.

“We’re now delivering across the whole spectrum of autonomous driving features,” explained Nvidia Senior Director of Automotive Danny Shapiro on a call. “What we’re doing is working with both ZF and Hella to bring NCAP safety certification to vehicles along with Level 3 type of [autonomous] driving systems, where in some cases you can do hands-free or feet-free driving.”

Shapiro said that ultimately, the crucial benefit to come out of this arrangement will be “a single platform from Nvidia can be connected to cameras to deliver NCAP certification and hands-free driving in certain circumstances.”

He added that Nvidia is “really excited that AI is having a transformative effect on the automotive industry in general,” including in NCAP-certified ADAS systems, since it means Nvidia’s AI computers will be in more cars, where they can then perform a range of functions over and above autonomous driving.

Nvidia’s tech could be used to potentially alert a driver to bring the car in for preventative or necessary maintenance, for instance. And there’s "no possible way for humans to sift through all this," he added, but a deep learning system based on Drive PX can be trained to sift massive amounts of data in real time.

“We’ll be playing a very active role in cybersecurity, we believe,” Shapiro said, adding that Nvidia can also “detect traffic patterns, detect congestion, optimize traffic flow and reduce congestion, work with infrastructure” once deployed at scale in more consumer vehicles.

It’s now easier to get Purism’s security-focused laptops

The content below is taken from the original (It’s now easier to get Purism’s security-focused laptops), to continue reading please visit the site. Remember to respect the Author & Copyright.

Purism is nowhere near as well-known as other PC makers, but you may want to keep it on your radar if you’re becoming increasingly concerned about security and privacy. The company, which only used to sell made-to-order machines, has just announced the general availability of its security-focused Librem 13 and Librem 15 laptops. That means you don’t have to wait months in a waiting list just to be able to buy one — you’ll now get your computer within "a few weeks after purchase."

The company says it works with hardware manufacturers to make sure its components can’t be used to infiltrate your system. For instance, its laptops have a kill switch that turns off their mic and camera, so you can make sure nobody’s spying on you through your webcam, which unfortunately can happen to anyone. Another kill switch disables their WiFi and Bluetooth in an instant to prevent unauthorized connection to your computer in public. Librem 13, 15 and the brand’s other computers also run the company’s own PureOS that’s a derivative of Debian GNU/Linux.

Purism might have decided it’s high time to make its computers more accessible now that people are becoming more conscious about the security of their devices. It specifically mentioned the WannaCry ransomware attacks in its announcement post as one of the more recent large-scale security scares. By eliminating the need to wait for months, the buying process becomes much less intimidating for ordinary people who aren’t security researchers. Take note that the Librem laptops aren’t cheap, though: based on what we’ve seen from the manufacturer’s website, the 13-inch laptop will set you back at least $1,699, while the cheapest 15-inch configuration costs $1,999.

Source: Purism

Managing updates for your Azure VM

The content below is taken from the original (Managing updates for your Azure VM), to continue reading please visit the site. Remember to respect the Author & Copyright.

In this blog post, I will talk about how to use the Update Management solution to manage updates for your Azure VMs. Right from within your Azure VM, you can quickly assess the status of available updates, initiate the process of installing required updates, and review deployment results to verify that updates were applied successfully to the VM.

This feature is currently in private preview. If you’re interested in giving it a try, please sign up!

Enabling Update Management

From your VM, you can select “Manage Updates” on the virtual machines blade, under Automation + Control. After selecting it, validation is performed to determine if the Update Management solution is enabled for this VM. If it is not enabled, you will have the option to enable the solution.

The solution enablement process can take up to 15 minutes, and during this time you should not close the browser window. Once the solution is enabled and log data starts to flow to the workspace, it can take more than 30 minutes for data to be available for analysis in the dashboard described in the next section. We expect this timing to significantly improve in the future.

Review update assessment

From the Manage Updates dashboard, you can review the update compliance state of the VM from the Missing updates by severity tile, which displays a count and graphical representation of the number of updates missing on the VM. The table below shows how the tile categorizes the updates missing by update classification.

To create an update deployment and bring the VM into compliance, you configure a deployment that follows your release schedule and service window. This entails specifying which update types to include in the deployment, such as only critical or security updates, and whether you want to exclude certain updates.

Create a new Update Deployment for the VM by clicking the “Schedule deployment for this VM” button at the top of the blade and specify the required values. 

After you have completed configuring the schedule, click the “OK” button and you return to the status dashboard. You will notice that the Scheduled table shows the deployment schedule you just created.

View update deployment state

When the scheduled deployment executes, you see the status appear for that deployment under the Completed and in-progress table. Double-clicking the completed update deployment takes you to the detailed deployment status page.


To review all detailed activities performed as part of the update deployment, select the “All Logs and Output” tiles. This will show the job stream of the runbook responsible for managing the update deployment on the target VM.

OS support

  • Windows: Windows 2012 and above
  • Linux: Red Hat Linux 6 & 7, Ubuntu Server 12.04 LTS, 14.04 LTS, 15.10, and 16.04

New to OMS Update Management

If you are new to OMS Update Management, you can view the current capabilities which include Update Insights across Windows and Linux, and the ability to deploy updates, as well as documentation.

In future posts, I’ll talk about how to manage updates for multiple VMs in your subscription and how to orchestrate the update deployments including running pre/post steps, sequencing, and much more!

Researchers can now desalinate seawater with the power of the Sun

The content below is taken from the original (Researchers can now desalinate seawater with the power of the Sun), to continue reading please visit the site. Remember to respect the Author & Copyright.

One of the oldest means of extracting potable liquid from seawater involves distillation, basically boiling the water into steam and then cooling the purified vapor in condensation tubes. Problem is, this method is incredibly power-intensive, with nearly half of the input energy going toward just boiling the water. But a team of researchers from Rice University has developed a new technique that not only drastically reduces the amount of energy needed but can decouple the process from the power grid altogether.

The research was carried out at the federally funded Center for Nanotechnology Enabled Water Treatment (NEWT) at Rice University. Since its formation in 2015, NEWT has worked to develop a technology called "nanophotonics-enabled solar membrane distillation", or NESMD. In this method, flows of hot and cold water are separated by a thin membrane. Water vapor is drawn across that membrane from the hot side to the cold, straining out the salt. This uses much less energy than distillation since the water only needs to be hot, not boiled.

To further improve the system’s efficiency, researchers at NEWT combined commercially available membranes with nanoparticles that convert light into heat. Doing so means that the membrane itself heats up, so you don’t need a steady supply of hot water, just sunlight.

And since you don’t need a bunch of energy to heat water, the power requirements drop to little more than running a pump to help push the fluid through the process. As such, the entire modular system can run on a couple of solar panels.

During their tests, the research team found that, like molten salt power arrays, their device’s efficiency multiplied if the sunlight was concentrated. "The intensity got up to 17.5 kilowatts per meter squared when a lens was used to concentrate sunlight by 25 times," Rice University researcher Qilin Li said in a statement, "and the water production increased to about 6 liters per meter squared per hour."
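
As a rough sanity check on those figures, the quoted output can be compared with the quoted input, assuming a latent heat of vaporization for water of about 2,260 kJ/kg (an assumption of ours, not a number from the article):

# Back-of-the-envelope check of the quoted figures.
latent_heat_kj_per_kg = 2260        # assumed latent heat of vaporization of water
output_liters_per_m2_hour = 6       # quoted water production
input_kw_per_m2 = 17.5              # quoted concentrated solar intensity

# Thermal power actually needed to evaporate the produced water, per square meter.
thermal_kw_per_m2 = output_liters_per_m2_hour * latent_heat_kj_per_kg / 3600
print(round(thermal_kw_per_m2, 1))                      # ~3.8 kW per square meter
print(round(thermal_kw_per_m2 / input_kw_per_m2, 2))    # ~0.22, i.e. roughly a fifth of the input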

And since the system is modular, the thinking is that places like remote communities, offshore oil rigs and disaster relief sites would be able to figure out their hourly water consumption rates and install exactly the desalination capacity necessary. This same technology could just as easily replace the current membrane distillation technology at more than 18,000 water purification plants worldwide.

"Direct solar desalination could be a game changer for some of the estimated 1 billion people who lack access to clean drinking water," the researchers said. "This off-grid technology is capable of providing sufficient clean water for family use in a compact footprint, and it can be scaled up to provide water for larger communities."

Source: Rice University

2017 hacker board survey: Raspberry Pi still rules, but x86 SBCs make gains

The content below is taken from the original (2017 hacker board survey: Raspberry Pi still rules, but x86 SBCs make gains), to continue reading please visit the site. Remember to respect the Author & Copyright.

[Updated: June 21] — The results are in: The Raspberry Pi 3 is the most desired maker SBC by a 4-to-1 margin. In other trends: x86 SBCs and Linux/Arduino hybrids get a boost. More than ever, it’s a Raspberry Pi world, and other Linux hacker boards are just living in it. Our 2017 hacker board […]

Migrating Modern Public Folders to Exchange Online (or Elsewhere)

The content below is taken from the original (Migrating Modern Public Folders to Exchange Online (or Elsewhere)), to continue reading please visit the site. Remember to respect the Author & Copyright.

Public Folders Exchange Online

Modern Public Folders

Microsoft introduced “modern” public folders in Exchange 2013. The only modern aspect of the implementation is storing public folders in mailboxes, where they can take advantage of Exchange’s Native Data Protection functionality instead of depending on the creaky replication mechanism used since the introduction of public folders in Exchange 4.0 in 1996.

Initially, Microsoft supported the migration of old-style public folders from on-premises servers to Exchange Online. For whatever reason, customers who went through the torturous process of migrating old-style public folders to modern public folders in Exchange 2013 hit a brick wall. They could not move those modern public folders to Exchange Online, even though the folders were the same type on both sides of the cloud divide.

Onto the Cloud

Microsoft eventually solved the problem and introduced the ability to move modern public folders from Exchange 2013 and 2016 servers to Exchange Online in March 2017. All you need to do is make sure that you run the latest cumulative updates on your on-premises servers and follow Microsoft’s directions to flow data to the cloud. The process is manual and tiresome and involves manipulation of CSV files, but it does work.

It is reasonable to ask why it took Microsoft so long to deliver this functionality. I think it comes down to priorities and available development resources. When Microsoft delivered the public folder migration tools for Exchange 2013, the goal was to move customers off old-style public folders. Later, when Exchange Online introduced support for modern public folders, the tools could accommodate migration from old-style public folders, which suited Microsoft’s strategic direction at the time.

On-Premises or Cloud

Customers face a fundamental decision to move to the cloud or stay on-premises. Relatively few customers would incur the cost of upgrading to Exchange 2016 and then decide to move to Exchange Online. That doesn’t make sense. Once customers take a decision about the platform to use, it is reasonable to assume that they want to use modern public folders on their chosen platform.

Microsoft knows that those who move to Office 365 migrate mailboxes first. Public folders are typically handled last in a migration. This is not a big problem because cloud mailboxes can access on-premises public folders. Overall, it is unsurprising that Microsoft decided to delay introducing migration tools to move modern public folders from on-premises to the cloud.

Conspiracy theorists will say that Microsoft’s real goal is to convince customers to move public folders to SharePoint or another repository and therefore they did not want to support migration of modern public folders to the cloud. Although a nice conspiracy always drives debate on social media, there’s really nothing in it. The idea gained currency in the Exchange 2007/SharePoint 2007 era but was never more than idle chatter. No migration tools were produced and no serious effort was put into figuring out all the complexities of moving all the various kinds of data found in public folders to SharePoint entities.

Bringing Old Stuff to Modern Collaboration

Public folders have served as a collaboration platform for Exchange for over twenty years. However, much better collaboration technology exists inside Office 365. Although no one has yet proposed migrating public folders to Microsoft Teams (no doubt this idea will surface in time), ISVs offer tools that can move public folders to shared mailboxes or Office 365 Groups – or even to modern public folders running on Exchange Online.

Four examples that can handle modern public folders are BitTitan MigrationWiz, Binary Tree E2E Complete, CodeTwo Office 365 Migration, and QUADROtech PublicFolderShuttle. These products can move the data found in public folders to various destinations in the cloud, including shared mailboxes (which are free in terms of licensing). PublicFolderShuttle also includes Office 365 Groups as a destination, which is a good choice if you use public folders for collaboration rather than as a convenient dumping ground for shared email. If you do use them as a dumping ground, a shared mailbox is a better option.

Interestingly, before doing any migration, PublicFolderShuttle analyzes the public folder hierarchy and data to determine the best destination for different folders. Experience gained from analyzing many public folder hierarchies reveals that relatively few public folders (probably 10% or less) are good candidates to be transformed into Office 365 Groups. Suitable folders are highly active, mail-enabled, and hold both email items and documents – or just documents.

Other public folders are better moved to shared mailboxes, especially those that only hold email items. The remainder of public folders that contain valuable data could be moved to modern public folders within Exchange Online, even if those folders then become archives.

Another interesting fact gleaned by analyzing public folder hierarchies is that it is common to discover that a large percentage of public folders have not been used recently. These folders are candidates to be pruned and discarded rather than being moved to the cloud.

The cost of commercial solutions is outweighed by the automation and flexibility of the toolsets. Of course, you can do it yourself by exporting public folder content to PSTs and importing the data into whatever target you think is reasonable. Such an approach is only practical when you have a small number of folders to move.

The Cockroaches Persist

I have often referred to public folders as the “cockroaches of Exchange” because of their ability to survive for so long despite so little tender loving care from Microsoft. The nice thing is that we now have some real opportunities to move public folders to modern collaboration platforms. Not Teams (yet, if ever), but definitely Office 365 Groups.

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros”, the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle.

The post Migrating Modern Public Folders to Exchange Online (or Elsewhere) appeared first on Petri.

Google can turn an ordinary PC into a deep learning machine

The content below is taken from the original (Google can turn an ordinary PC into a deep learning machine), to continue reading please visit the site. Remember to respect the Author & Copyright.

Time is one of the biggest obstacles to the adoption of deep learning. It can take days to train one of these systems even if you have massive computing power at your disposal — on more modest hardware, it can take weeks. Google might just fix that. It’s releasing an open source tool, Tensor2Tensor, that can quickly train deep learning systems using TensorFlow. In the case of its best training model, you can achieve previously cutting-edge results in one day using a single GPU. In other words, a relatively ordinary PC can achieve results that previously required supercomputer-level machinery.

It’s also very flexible: there’s a standard, modular interface that lets you use virtually any training model, data set or parameters. You don’t need to replace everything just to change one component. And since it’s open source, you could easily see the community share its own models to help you get started.

It’s doubtful you’ll use Tensor2Tensor at home, of course, since you still have to be steeped in deep learning know-how to make it work. However, this could open the door to researchers that don’t have the luxury of a many-GPU setup to train their deep learning systems in a reasonable amount of time. This should help them finish projects faster, or give them time to produce higher-quality results.

Source: Google Research Blog

Microsoft improves Office’s hands-free typing with Dictate

The content below is taken from the original (Microsoft improves Office’s hands-free typing with Dictate), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft has released a new app called Dictate. It’s an add-in for Word, Outlook and PowerPoint, and uses Cortana’s speech-recognition technology to let you speak what you want to type.

The company is obviously not the first to work on dictation technology. Nuance’s Dragon software has been around for a while and is available for both desktops and mobile devices. And, last year, Google added more features to its voice typing option in Docs.

Office already supports voice-to-text typing, but Dictate brings along some new features. It supports more than 20 languages and has a number of commands that let you edit as you go. Simple statements like "new line," "delete" and "stop dictation" let you manipulate the cursor and correct the text with your voice. Punctuation is also easily managed with voice control.

Another feature offered is real-time translation. Just adjust some of the settings and Dictate will type a translation of what you speak. You could speak in Spanish and type in French, for example, and the 20 languages supported for dictation can be translated into over 60.

Right now, Dictate is available for 32- and 64-bit Office, and Windows 8.1 is the minimum requirement. The download is free, but because it’s a Microsoft Garage project, it’s not clear what the future holds for the app.

Source: Microsoft

Introducing the new Cloud Academy mobile app

The content below is taken from the original (Introducing the new Cloud Academy mobile app), to continue reading please visit the site. Remember to respect the Author & Copyright.

Mobile app iOS and Android

Following our recent updates to the Cloud Academy web interface, we’re happy to announce today that we’ve implemented many of these same changes in our mobile app for both iOS and Android.

New features of the Cloud Academy mobile app

UI

You may have already noticed that our mobile interface is now aligned with the website. This allows you to navigate to your areas of interest faster and more intuitively. The app opens to our content library, and you can easily navigate between any of the four main areas at the bottom of the screen: Library, Explore by Topic, Dashboard, and Smart Sessions.
 

Content Library

The full Cloud Academy content library is now available on our mobile app, making it easier for you to discover the video courses, learning paths, quizzes, and hands-on labs available for AWS, Microsoft Azure, Google Cloud Platform, DevOps, Containers, and more.
Our curated content stripes help you explore content by collection, such as Cloud Fundamentals, Cloud Ecosystem, and by platform.
 
    

Search

Or, you can simply search for your topic of interest.
 
    

Explore by Topic

Many Cloud Academy users choose our learning paths because they take the guesswork out of figuring out which course to take next. Explore the variety of learning paths organized by cloud computing platform (AWS, Azure, Google Cloud Platform), for general cloud computing topics (Getting Started with Cloud Migration for Business, Docker & Container Technologies, Careers in Cloud Computing, Ansible, Serverless, and more), and for DevOps.
 
  

Smart Sessions

Learn on the go with our Smart Sessions: Choose the time you want to study or practice – 10 minutes, 20 minutes, 30 minutes, or up to an hour – and we’ll serve up the course material based on your availability.
 
  
 

User Dashboard

Our intuitive dashboard makes it easy to track your progress, pick up where you left off, discover new and trending content, and explore the content recommended just for you.
 
  
 

  • The “Today” tab is all about where to go next. Pick up with activities already in progress, explore new content, or explore the content recommended for you based on your preferences and current activities.
  • The “Progress” tab shows an overview of where you are in Cloud Academy by identifying your pending and completed pieces of content, your strengths and weaknesses based on quiz and course performance, and more.
  • Easily reference any of your downloaded content in the “Downloaded” tab to study or practice offline.

 

Explore for free

Starting this week, we’ve unlocked the Cloud Academy library (for both web and mobile) so that users can browse content at a more detailed level without a Cloud Academy account.
 

 

Offline feature

With this new feature, you can download courses and quizzes to learn and practice anywhere, anytime.
 
  
 
To ensure the highest level of functionality and compatibility, please make sure that your devices are updated to the most current versions for iOS and Android.
 
Enjoy and download our new Cloud Academy app now!

Is your product “Powered by Raspberry Pi”?

The content below is taken from the original (Is your product “Powered by Raspberry Pi”?), to continue reading please visit the site. Remember to respect the Author & Copyright.

One of the most exciting things for us about the growth of the Raspberry Pi community has been the number of companies that have grown up around the platform, and who have chosen to embed our products into their own. While many of these design-ins have been “silent”, a number of people have asked us for a standardised way to indicate that a product contains a Raspberry Pi or a Raspberry Pi Compute Module.

Powered by Raspberry Pi Logo

At the end of last year, we introduced a “Powered by Raspberry Pi” logo to meet this need. It is now included in our trademark rules and brand guidelines, which you can find on our website. Below we’re showing an early example of a “Powered by Raspberry Pi”-branded device, the KUNBUS Revolution Pi industrial PC. It has already made it onto the market, and we think it will inspire you to include our logo on the packaging of your own product.

KUNBUS RevPi
Powered by Raspberry Pi logo on RevPi

Using the “Powered by Raspberry Pi” brand

Adding the “Powered by Raspberry Pi” logo to your packaging design is a great way to remind your customers that a portion of the sale price of your product goes to the Raspberry Pi Foundation and supports our educational work.

As with all things Raspberry Pi, our rules for using this brand are fairly straightforward: the only thing you need to do is to fill out this simple application form. Once you have submitted it, we will review your details and get back to you as soon as possible.

When we approve your application, we will require that you use one of the official “Powered by Raspberry Pi” logos and that you ensure it is at least 30 mm wide. We are more than happy to help you if you have any design queries related to this – just contact us at [email protected]

If you’re looking to adorn your home projects, school books, or kit with Raspberry Pi branding, check out our swag store for stickers, pins, and more.

The post Is your product “Powered by Raspberry Pi”? appeared first on Raspberry Pi.

Open19: The Vendor-Friendly Open Source Data Center Project

The content below is taken from the original (Open19: The Vendor-Friendly Open Source Data Center Project), to continue reading please visit the site. Remember to respect the Author & Copyright.

In case you missed it, LinkedIn last month teamed up with GE, Hewlett Packard Enterprise, and a host of other companies serving the data center market to launch a foundation to govern its open source data center technology effort. The Open19 Foundation now administers the Open19 Project, which in many ways is similar to the Open Compute Project, started by Facebook, but also stands distinctly apart thanks to several key differences.

The most prominent point of contrast is Open19’s target audience: data center operators who are smaller than the hyper-scale cloud platforms operated by Facebook, Microsoft, Apple, and Google, some of OCP’s biggest data center-operator members. Another big difference is Open19’s big focus on edge compute in addition to core data center hardware.

There are other differences, but one that is especially telling about the nature of Open19 is the way its founders have chosen to treat intellectual property of the participating companies. Unlike OCP, which requires any company that wants to have a server or another piece of gear recognized as OCP-compliant to open source the entire thing, Open19 structured its licensing framework in a way that lets companies protect their IP and still participate. If HPE or one of the other participating hardware vendors wants to adopt Open19 standards for a server, for example, it doesn’t have to part with its rights to the technology inside that server for the foundation to recognize it as Open19-compliant.

“A lot of people are reluctant to be in an environment where they’re always required to put their IP out,” Yuval Bachar, a top infrastructure engineer at LinkedIn who is spearheading Open19, said in an interview with Data Center Knowledge. “We’re creating an environment where you’re not required [to open source IP] unless you participate actively and contribute to the project.”

See also: LinkedIn’s Data Center Standard Aims to Do What OCP Hasn’t

LinkedIn owns all the current Open19 IP, which includes a “cage” that goes inside a standard data center rack, four standard server form factors that slide into the cage, a power shelf, which is essentially a single power supply for all the servers in the cage, and a network switch. The company is planning to contribute all of the above to the foundation, Bachar said, but it doesn’t expect other members to do the same. It might also contribute the technology inside the servers, although server innards aren’t part of Open19’s current focus.

“In Open19 you don’t have to contribute what’s inside your server,” he said. “Potentially, we as LinkedIn will do that, because we don’t see a competitive commercial advantage in actually doing our own servers.” But the likes of HPE, Hyve, Flex, QCT, or Inspur have IP to protect, and the foundation doesn’t want that to hamper their participation.

Licenses Selected to Lower Risk

For cases where vendors do want to contribute technology to the project, Open19 has selected various types of licenses for different scenarios, all meant to further reduce friction associated with participation.

The default license for everything other than software, such as specification documents or schematics, is Creative Commons, Brad Biddle, legal counsel for the Open19 Foundation, said in an interview with Data Center Knowledge. Different flavors of CC that provide different levels of control apply depending on document type.

If a collaborative project results in a specification, or another document that others will implement in their own hardware, the parties that created it are required to grant a patent license for the parts they contributed on “RAND-Z terms.” RAND-Z, in which RAND stands for reasonable and non-discriminatory and Z for zero royalty, is a common scheme standards organizations use when somebody’s IP is essential to a standard.

See also: GE Bets on LinkedIn’s Data Center Standard for Predix at the Edge

Open19’s default license for software contributions is the MIT license, one of the most popular open source licenses. “It’s a very simple, permissive-style license, as opposed to copyleft license,” Biddle said. The license is used by popular open source projects such as Ruby on Rails, Node.js, and jQuery, among others. Copyleft licenses, such as the GPL, essentially require that the code, including any modifications, remains open source and compliant with the same license. Permissive-style licenses impose no such restrictions. In other words, if a company takes a piece of open source code from Open19 and modifies it, it doesn’t have to open source the modified version. “We were sensitive to not wanting to force implementers to license away technology as a price of implementation,” Biddle said. “Our default licenses don’t require any licenses back from the technology recipients.”

The same set of default licenses would apply to single-source contributions, such as LinkedIn’s Open19 designs. Biddle said he doesn’t expect the foundation to run ongoing development projects for single-source contributions and has designed that framework for one-off releases.

The foundation’s open source licensing choices and the freedom to participate without having to give away trade secrets are meant to make participation less risky for vendors and whoever else designs hardware or writes code. After all, the initiative’s success will depend on its ability to grow the ecosystem of companies that participate. Growing an open source data center hardware community is a chicken-and-egg puzzle. Unlike the world of open source software, where vendor participation is not a prerequisite for a thriving project, attracting more end users to an open source data center hardware project requires a variety of vendors willing to spend the resources necessary to design and produce the hardware, so end users can be sure they can actually source the technology and source it from multiple suppliers. Conversely, vendors are attracted by end users. Making it easier for vendors to play helps solve half of the puzzle.

Christine Hall contributed to this article.

SEGA’s new SEGA Forever collection brings classic games to mobile for free

The content below is taken from the original (SEGA’s new SEGA Forever collection brings classic games to mobile for free), to continue reading please visit the site. Remember to respect the Author & Copyright.

SEGA is bringing some of your favorite games to mobile in new, free-to-play formats that include ads as a way to drive revenue, support offline play and other more modern features like cloud saves. The games can also be rendered ad-free with a one-time $1.99 purchase, which is a really good deal given the pedigree of some of these titles, and what you might pay elsewhere to get re-released versions of classic console games.

The SEGA Forever collection already has five titles you can get at launch, including Sonic The Hedgehog, Phantasy Star II, Comix Zone, Kid Chameleon and Altered Beast. Each of these will be available on both the Google Play Store and the App Store for iOS devices (with iMessage sticker packs for each included in the bundle).




SEGA’s not stopping with those five, however – the plan is to launch new additions to the collection every two weeks, which should mean you’ll eventually see all your boxes ticked in terms of SEGA console nostalgia. This will expand to cover multiple console generations over time, SEGA says, and includes both “official emulations and ported games.”

Classic games likely have a finite shelf life, so it makes sense that we’d see companies do whatever they can to extract all of their value before that time runs out. But for gamers, this new model is a welcome change, since it means you can casually enjoy classics without putting down any money at all, and getting the ad-free upgrade isn’t going to break the bank.

HPE teases HPC punters with scalable gear

The content below is taken from the original (HPE teases HPC punters with scalable gear), to continue reading please visit the site. Remember to respect the Author & Copyright.

The first fruit of Hewlett Packard Enterprise’s buy of SGI is set to hit the streets in July with the release of a high performance system – the HPE SGI 8600.

The system is a liquid-cooled petascale box assembled on legacy SGI ICE XA architecture, and is aimed squarely at punters involved with beefy scientific and engineering projects.

The 8600 scales to more than 10,000 nodes without additional switches, and uses integrated switches and hypercube tech that supports arrays of liquid-cooled Nvidia Tesla GPU accelerators, hooked up by NVLink interconnects.

This is one of a flurry of HPC machine teasers the firm went public with this week at ISC in Frankfurt; the others are the HPE Apollo 6000 Gen10 System and the HPE Apollo 10 Series.

The 6000 Gen10 is an air-cooled HPC platform giving up to 300 teraflops per rack, and uses ‘silicon of trust’ tech to boost application licensing efficiency, reduce latency, up IOPs muscle and lower power consumption and cooling. At least that is what it said on the tin.

Chemical biz BASF has piloted the system to digitise its chemical research, cutting down on computer simulation and modelling times from months to days, HPE told us.

The Apollo 10 Series is for more price-conscious HPC customers – it is all relative – built for entry-level Deep Learning and AI apps that are supposed to be easier to manage and deploy.

The sx40 System is a 1U dual socket Intel Xeon Gen10 server with support for up to 4 Nvidia Tesla SXM2 GPUs with NVLink. The pc40 System is a 1U dual socket Intel Xeon Gen10 server that supports up to 4 PCIe GPU cards.

The systems come with an updated Performance Software Suite that aids management, optimisation and monitoring of the HPC gear.

More details on the spec and prices will be fleshed out in public next month. ®

Code ‘recipes’ from IFTTT help you stay on top of government news

The content below is taken from the original (Code ‘recipes’ from IFTTT help you stay on top of government news), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s not exactly hard to find publicly available government info and new announcements online. A bunch of new IFTTT recipes (now officially called applets) can make sure you never miss them when they become available, though. The platform has revealed its first group of applets under a new initiative called Data Access Project, and they cover health and travel alerts, the latest news in cybersecurity, economy and other areas. IFTTT recipes follow the "if this happens, then do that" formula — for instance, you can whip up a recipe to send yourself a text whenever Engadget posts on Twitter. That’s also how the Data Access Project applets work.

As you can see in the images above, you can choose to get an email every time a certain government agency announces a new scientific discovery. You can tell IFTTT to make new Evernote updates, push Slack notifications or create Trello cards whenever certain departments make an announcement, and so on. Even if you have no idea how to make IFTTT recipes, you can subscribe to all the Data Access Project Applets by making an account on the platform’s website. If you do know how to make applets, though, you can go wild conjuring up formulas that put your connected devices, such as Philips’ Hue lights, to good use.

IFTTT chief Linden Tibbets said in a statement:

"It’s not that the information isn’t out there — companies, governments, and institutions are releasing information all the time. But for the average person, it’s overwhelming.

We’ve built out services whose data impacts people in very real ways: governments, agencies, non-profits, transits, and other institutions. Now people can easily find, and use, that information in brand new ways. We’re excited to see the response, and plan to expand the Data Access Project with more services in the near future."

Source: IFTTT

Packet, Qualcomm to Host World’s First 10nm Server Processor in Public Cloud for Developers

The content below is taken from the original (Packet, Qualcomm to Host World’s First 10nm Server Processor in Public Cloud for Developers), to continue reading please visit the site. Remember to respect the Author & Copyright.

Packet, a bare metal cloud for developers, announced that it will collaborate with Qualcomm Datacenter Technologies, Inc. to introduce the latest in server architecture innovation on the 48-core Qualcomm Centriq 2400 processor.

The New York City-based company is currently showcasing its consumable cloud platform at Red Hat’s AnsibleFest conference in London, demonstrating open source tools such as Ansible, Terraform, Docker and Kubernetes — all running on Qualcomm Datacenter Technologies’ ARM architecture-based servers.

The series of joint efforts will continue at Hashiconf (Austin), Open Source Summit North America (Los Angeles), and AnsibleFest (San Francisco).

“We believe that innovative hardware will be a major contributor to improving application performance over the next few years. Qualcomm Datacenter Technologies is at the bleeding edge of this innovation with the world’s first 10nm server processor,” said Nathan Goulding, Packet’s SVP of Engineering. “With blazing-fast innovation occurring at all levels of software, the simple act of giving developers direct access to hardware is a massive, and very timely, opportunity.”

Packet’s proprietary technology automates physical servers and networks to provide on-demand compute and connectivity, without the use of virtualization or multi-tenancy. The company, which supports both x86 and ARMv8 architectures, provides a global bare metal public cloud from locations in New York, Silicon Valley, Amsterdam, and Tokyo.

“Our collaboration with Packet is the first step of a shared vision to provide an automated, unified experience that will enable users to access and develop directly on the Qualcomm Centriq 2400 chipset,” noted Elsie Wahlig, director of product management at Qualcomm Datacenter Technologies, Inc. “We’re thrilled to work with Packet to engage with more aspects of the open source community.”

While an investment by SoftBank accelerated the company’s access to developments in the ARM server ecosystem, Packet has been active in the developer community since its founding in 2014.

How to boot Windows 10 directly to Advanced Startup Options screen

The content below is taken from the original (How to boot Windows 10 directly to Advanced Startup Options screen), to continue reading please visit the site. Remember to respect the Author & Copyright.

We know how you can boot into the Advanced Startup Options in Windows 10 when you need to troubleshoot some Windows problems: you can hold down the Shift key and then click Restart from the Power Menu in Start. But what if you would like to display the Advanced Startup Options screen every time you boot Windows 10? If so, this post will show you how you can do it.

Boot Windows 10 directly to Advanced Startup Options

To do this, open Command Prompt (Admin) and run the following command:

bcdedit /set {globalsettings} advancedoptions true

Boot Windows 10 directly to Advanced Startup Options

This will turn on the Advanced Startup Options screen on boot.

In case you wish to turn it off anytime, you may run the following command:

bcdedit /set {globalsettings} advancedoptions false

Restart your computer and you will see the familiar blue Advanced Startup Settings screen load up.

Remember that there is no timer available; to continue to your sign-in screen, you will have to press Enter.

If you’d like the legacy Advanced Boot Options screen to load, run the following command and then reboot:

bcdedit /set {default} bootmenupolicy legacy

You will see the black Boot Options screen, like the one you had in Windows 7 and earlier, load up.

To restore the boot menu to the default, run the following command:

bcdedit /set {default} bootmenupolicy standard
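
If you want to confirm which values are currently set before or after making these changes, you can list the stored boot settings. These are standard bcdedit queries, shown here as an additional check rather than a step from the original post:

bcdedit /enum {globalsettings}

bcdedit /enum {default}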

Hope this works for you.

Anand Khanse is the Admin of TheWindowsClub.com, a 10-year Microsoft MVP Awardee in Windows (2006-16) & a Windows Insider MVP. Please read the entire post & the comments first, create a System Restore Point before making any changes to your system & be careful about any 3rd-party offers while installing freeware.

Scaleway doubles down on ARM-based cloud servers

The content below is taken from the original (Scaleway doubles down on ARM-based cloud servers), to continue reading please visit the site. Remember to respect the Author & Copyright.

Iliad’s cloud hosting division Scaleway has been betting on ARM chipsets for years because it believes the future of hosting is going to be based on ARM’s processor architecture. The company just launched more powerful ARMv8 options and added more cores to its cheapest options.

If you’re not familiar with processor architecture, your computer and your smartphone use two different chipsets. Your laptop uses an x86 CPU manufactured by Intel or AMD, while your smartphone uses an ARM-based system-on-a-chip.

Back in April, Scaleway launched 64-bit ARM-based cloud servers built on Cavium ThunderX systems-on-a-chip. The most affordable option is remarkably cheap: for €2.99 per month ($3.30), you get 2 ARMv8 cores, 2GB of RAM and 50GB of SSD storage, with unlimited bandwidth at 200Mbit/s.

With today’s update, Scaleway is doubling the number of cores on this entry-level option: you now get 4 cores instead of 2, making it quite competitive with entry-level virtual private servers from DigitalOcean or Linode. The company told me it could be the best compute-to-price ratio on the market. For €5.99 per month, you now get 6 cores and 4GB of RAM.
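For a rough sense of what that compute-to-price ratio looks like, here is the per-core math on the Scaleway prices quoted above (no external providers' prices assumed):

€2.99 / 4 cores ≈ €0.75 per ARMv8 core per month
€5.99 / 6 cores ≈ €1.00 per ARMv8 core per month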

Scaleway also thinks you should be using ARM-based servers for your more demanding tasks. You can now get up to 64 cores and up to 128GB of RAM. This beefy option is quite expensive at €279.99 per month, but Scaleway has also added a number of intermediate options with 16, 32 or 48 cores.

My main complaint remains the same. Scaleway currently has two data centers in Paris and Amsterdam. The company needs to think about opening up new offerings in Asia and the U.S. if it wants to become a serious contender in the highly competitive cloud hosting market.

BabelOn is trying to create Photoshop for your voice

The content below is taken from the original (BabelOn is trying to create Photoshop for your voice), to continue reading please visit the site. Remember to respect the Author & Copyright.

Speech synthesis — the process of artificially creating the human voice — isn’t anything new. But a startup from San Francisco called BabelOn is working on a particularly unusual offshoot of this technology. In a nutshell, BabelOn wants to make it trivial to translate your own voice into another language, even if you don’t speak that language yourself. The company says its combination of software and custom-built hardware can analyze what makes up your voice and then use that to recreate speech that sounds just like you, in a language of your choosing.

Initially, the company wants to use its technology for things like improving dubbed films or localizing video games, but eventually it wants to be able to translate your speech in real time, say while you’re on a Skype call. Microsoft has done this for a while, translating Skype voice calls on the fly, but BabelOn promises that its translations will sound like you, not an anonymous Siri- or Cortana-like digital voice.

It’s an intriguing idea, but let’s be clear: It’s very early days for BabelOn. We haven’t seen the software in action, and the company hasn’t booked a client yet. The company is in negotiations with a video game developer to use BabelOn for translating a forthcoming title, but the deal’s not done yet. There’s promise here but also plenty of potential pitfalls, not the least of which is the idea of someone’s voice being "stolen" and used in a way she didn’t consent to.

Though BabelOn isn’t ready just yet, the idea behind it has existed since 2004. Co-founder Daisy Hamilton’s parents had noticed a demand for better language dubbing in the film industry. They received a patent for the core technology behind BabelOn, but the rest of the technology they needed to make this vision a reality wasn’t around yet.

Now, though, the surrounding technologies and hardware are sophisticated enough that BabelOn can begin to put its idea into practice. The core part of the process is creating a BabelOn Language Information Profile, or BLIP. Over the course of about two hours in the company’s San Francisco studio, an individual’s BLIP is created by having them read specific texts in a variety of emotional states.

But BabelOn doesn’t just capture the sound of a voice. Hamilton described it as looking at your body as an instrument. BabelOn’s custom hardware can capture and analyze your breath, how your voice comes out of your chest and throat, how your mouth moves, and a variety of other key factors. "It’s both visual and vocal feedback that’s captured into a single continuous stream," Hamilton said.

Once recorded, BabelOn will be able to take your voice and translate it into other languages and replicate the corresponding emotion that a script calls for, without you needing to go out and record entirely new dialogue. Imagine a game company wanting to localize an English voice-acting performance for other countries; BabelOn could let companies use the same voice actor and digitally create her dialogue rather than having to find a native speaker to rerecord the entire script.

To start, the company is focusing on English, French, Spanish, German, Portuguese, Mandarin, Japanese and Hindi, with additional languages coming down the line based on demand. But it’s important to note that you can’t just type words in English into a computer and have BabelOn do both the voice creation and translation: It needs to be provided with a specific script or input in the language you’re looking to translate to. However, you can specify the desired emotional output of the translated performance; Hamilton called it an "emotional markup language."

As for the hardware itself, it was developed in partnership with the Lawrence Livermore National Laboratory, a federal institution focused on developing science and technology. It’s actually a variation on hardware that’s been in use by the US Department of Defense for unrelated applications. Hamilton didn’t offer up many other details, but eventually the company hopes to set up multiple studios in locations beyond San Francisco.

Hamilton said it takes a few hours to fully process a script and output it in another language. But with further work and processing improvements, she envisions the system working in near-real time. That’s something that would greatly expand BabelOn’s capabilities beyond films and games. A video call that gets translated almost instantly, in your own voice, could make multi-language conversation a lot more personal and expressive.

But the idea of taking BabelOn to consumers brings up a major security challenge. If the technology to create a BLIP becomes more commonplace and the translation software is used in more applications, it’s easy to imagine voice data being an appealing target for hackers who want to literally put words in someone’s mouth. Hamilton noted that the company has an ethics board to head off potential misuse, but that doesn’t solve the security challenge of keeping your voice safe.

Hamilton addressed those concerns, noting that BabelOn will "use a highly encrypted offline voice vault to store all of the BLIP, which would be curated upon request of the [original] speaker." Offline storage would certainly make the data harder to break into, and Hamilton also noted that BLIPs would have a reference visual cue that indicates when voices and languages have been altered. It’s still not clear how this will scale if the service becomes popular, but it’s something BabelOn is aware of. "Security of BLIPs is massively important to us, as we’d never want to threaten someone’s vocal authenticity," she said.

Security is the kind of challenge that could keep BabelOn from ever becoming something consumers can use. For people recording dialogue for a movie or game, their BLIP could simply be destroyed when the work is done. But a tool that can capture someone’s voice and then generate speech with it in real time is essentially unprecedented, and it would be a huge target for hackers.

BabelOn’s introduction to the public is via an Indiegogo campaign — a strange choice given that the technology isn’t directed at consumers. Hamilton said its purpose is to get funds to extend a software license the company needs to finish its own work. But she also stressed that they have backup plans in place if the campaign doesn’t meet its goal. "It’s just as much about using Indiegogo as a launch pad to put BabelOn out in the world," Hamilton said.

Hamilton hopes and expects that BabelOn will have its first client soon. If it can get a video game made with BabelOn, it’ll give the company a concrete example of its technology to court other clients and push development forward — but until then, we’re still in the theoretical realm. It’s way too early to know whether this technology will take off with the movie and game companies BabelOn is targeting, let alone whether we’ll see it in consumer-focused products some years down the line.

Source: BabelOn (Indiegogo)

Self Driving Potato Hits the Road

The content below is taken from the original (Self Driving Potato Hits the Road), to continue reading please visit the site. Remember to respect the Author & Copyright.


Potatoes deserve to roam the earth, so [Marek Baczynski] created the first self-driving potato, ushering in a new era of potato rights. Potato batteries have been around forever. Anyone who’s played Portal 2 knows that with a copper and a zinc electrode, you can get a bit of current out of a potato. Tubers have been powering clocks for decades in science classrooms around the world. It’s time for something — revolutionary.

[Marek] knew that powering a timepiece wasn’t enough for his potato, so he picked up a Texas Instruments BQ25504 boost converter energy harvesting chip. A potato can output around 0.4 V at 0.6 mA. The BQ25504 uses this power to slowly charge a capacitor. Every fifteen minutes or so, enough energy is stored to power a motor for a short time. [Marek] built a car for his potato — or more fittingly, he built his potato into a car.

The starch-powered capacitor moves the potato car about 8 cm per cycle. Over the course of a day, the potato can travel around 7.5 meters. Not very far, but hey, that’s further than the average potato travels on its own power. Of course, any traveling potato needs a name, so [Marek] dubbed his new pet “Pontus”. Check out the video after the break to see the ultimate fate of poor Pontus.
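A quick back-of-the-envelope check on those figures, assuming ideal energy harvesting with no converter losses (real numbers will be somewhat lower):

Potato output: 0.4 V × 0.6 mA ≈ 0.24 mW
Energy stored per 15-minute cycle: 0.24 mW × 900 s ≈ 0.22 J
Daily travel: 96 cycles per day × 8 cm per cycle ≈ 7.7 m

That lines up with the roughly 7.5 meters per day quoted above.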

Now that potatoes are mobile, we’re going to need a potato detection system. Humanity’s only hope is to fight fire with fire – break out the potato cannons!

VIDEO


IBM-powered DNA sequencing could find bacteria in raw milk

The content below is taken from the original (IBM-powered DNA sequencing could find bacteria in raw milk), to continue reading please visit the site. Remember to respect the Author & Copyright.

Babies love milk. Adults love milk-based products. You know what else loves milk? Good and bad bacteria. It’s the ideal medium for bacteria growth and could cause various food-borne illnesses, especially if consumed in raw, unpasteurized form. Researchers typically just test the milk supply in the US for specific pathogens or harmful bacteria and viruses, but IBM and Cornell University want to take things a step further. They plan to create new analytical tools that can monitor raw milk — that’s milk straight out of the udder — and instantly detect any anomaly that could turn out to be a food safety hazard.

To be able to build those tools, they first need to be intimately familiar with the substance and the microorganisms that tend to contaminate it. They’ll sequence and analyze the DNA and RNA of dairy samples from Cornell’s farm, as well as of all the microorganisms in environments milk tends to make contact with, including the cows themselves, from the moment it’s pumped. Their tests will characterize what’s "normal" for raw milk, so the tools they make can easily tell if something’s wrong even if it’s an unknown contaminant we’ve never seen before.

This project, however, is just the beginning. They plan to apply what they learn to other types of produce and ingredients in the future, in order to ensure that they’re safe for consumption, especially if they were imported from abroad. Martin Wiedmann, Gellert Family Professor in Food Safety at Cornell University, said in a statement:

"As nature’s most perfect food, milk is an excellent model for studying the genetics of food. As a leader in genomics research, the Department of Food Science expects this research collaboration with IBM will lead to exciting opportunities to apply findings to multiple food products in locations worldwide."