Pi 3 booting part II: Ethernet

Yesterday, we introduced the first of two new boot modes which have now been added to the Raspberry Pi 3. Today, we introduce an even more exciting addition: network booting a Raspberry Pi with no SD card.

Again, rather than go through a description of the boot mode here, we’ve written a fairly comprehensive guide on the Raspberry Pi documentation pages, and you can find a tutorial to get you started here. Below are answers to what we think will be common questions, and a look at some limitations of the boot mode.

Note: this is still in beta testing and uses the “next” branch of the firmware. If you’re unsure about using the new boot modes, it’s probably best to wait until we release it fully.

What is network booting?

Network booting is a computer’s ability to load all its software over a network. This is useful in a number of cases, such as remotely operated systems or those in data centres; network booting means they can be updated, upgraded, and completely re-imaged, without anyone having to touch the device!

The main advantages when it comes to the Raspberry Pi are:

  1. SD cards are difficult to make reliable unless they are treated well; they must be powered down correctly, for example. A Network File System (NFS) is much better in this respect, and is easy to fix remotely.
  2. NFS file systems can be shared between multiple Raspberry Pis, meaning that you only have to update and upgrade a single Pi, and are then able to share users in a single file system.
  3. Network booting allows for completely headless Pis with no external access required. The only desirable addition would be an externally controlled power supply.

I’ve tried doing things like this before and it’s really hard editing DHCP configurations!

It can be quite difficult to edit DHCP configurations to allow your Raspberry Pi to boot, while not breaking the whole network in the process. Because of this, and thanks to input from Andrew Mulholland, I added support for proxy DHCP, as used with PXE booting computers.

What’s proxy DHCP and why does it make it easier?

Standard DHCP is the protocol that gives a system an IP address when it powers up. It’s one of the most important protocols, because it allows all the different systems to coexist. The problem is that if you edit the DHCP configuration, you can easily break your network.

So proxy DHCP is a special protocol: instead of handing out IP addresses, it only hands out the TFTP server address. This means it will only reply to devices trying to do netboot. This is much easier to enable and manage, because we’ve given you a tutorial!
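
For a flavour of what this looks like in practice, here is a minimal sketch of a proxy DHCP configuration using dnsmasq; the subnet, TFTP root and service name below are assumptions for illustration, and the tutorial linked above has the exact settings to use.

    # /etc/dnsmasq.conf (sketch) - answer netboot requests only, leave the real DHCP server alone
    port=0                         # disable dnsmasq's DNS service
    dhcp-range=192.168.1.0,proxy   # proxy mode on the 192.168.1.x subnet (assumed)
    log-dhcp
    enable-tftp
    tftp-root=/tftpboot            # directory holding bootcode.bin, start.elf, the kernel, etc.
    pxe-service=0,"Raspberry Pi Boot"

Because dnsmasq runs in proxy mode, the existing DHCP server keeps handing out IP addresses exactly as before; dnsmasq only adds the TFTP details for clients that ask for them.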

Are there any bugs?

At the moment we know of three problems which need to be worked around:

  • When the boot ROM enables the Ethernet link, it first waits for the link to come up, then sends its first DHCP request packet. This is sometimes too quick for the switch to which the Raspberry Pi is connected: we believe that the switch may throw away packets it receives very soon after the link first comes up.
  • The second bug is in the retransmission of the DHCP packet: the retransmission loop is not timing out correctly, so the DHCP packet will not be retransmitted.

The solution to both these problems is to find a suitable switch which works with the Raspberry Pi boot system. We have been using a Netgear GS108 without a problem.

  • Finally, the failing timeout has a knock-on effect: the boot process can require the occasional random packet to wake it up again, so having the Raspberry Pi wired into a general network with lots of other computers actually helps!

Can I use network boot with Raspberry Pi / Pi 2?

Unfortunately, because the code is actually in the boot ROM, this won’t work with Pi 1, Pi B+, Pi 2, and Pi Zero. But as with the MSD instructions, there’s a special mode in which you can copy the ‘next’ firmware bootcode.bin to an SD card on its own, and then it will try and boot from the network.

This is also useful if you’re having trouble with the bugs above, since I’ve fixed them in the bootcode.bin implementation.

Here’s a video of the setup working from Mythic Beasts, our web hosts, who are hoping to use this mode to offer hosted Pis in the data centre for users soon.

Booting a Raspberry Pi 3 over Ethernet, powered over Ethernet. No SD cards were harmed.

Finally, I would like to thank my Slack beta testing team who provided a great testing resource for this work. It’s been a fun few weeks! Thanks in particular to Rolf Bakker for this current handy status reference…

Current state of network boot on all Pis

Autodesk Acquires Eagle for PCB Design

Autodesk expands its cadre of useful digital tools by acquiring EAGLE, a popular free-to-use printed circuit board (PCB) design tool.

Microsoft’s New Excel API Is A Leap Forward For The Spreadsheet Application

Microsoft is announcing today the general availability of its Excel API. On the surface this seems like another simple feature for Office 365, but under the hood it is a powerful update: starting today, developers can use the Excel REST API to easily incorporate complex calculations into their applications.

In the corporate world, Excel is a fundamental app that is used in everything from financial reporting and forecasting sales for the upcoming quarter to keeping track of inventory. Quite frankly, many smaller companies are using Excel to operate their entire business and baked inside of these spreadsheets are complex computations that often need to be utilized for other applications to help make business decisions.

The Excel API lets developers access the data inside spreadsheets, which means that complex models no longer have to be rebuilt inside applications. This is a significant improvement to the flow of working with data in the Microsoft Graph: it reduces the likelihood of an error being coded into a model, and it cuts the time needed to build an application, because you no longer need to rebuild the model in the first place.
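
To give a rough idea of the shape of the API, the sketch below reads a range and calls a built-in Excel function through the Microsoft Graph workbook endpoints. The access token, drive item ID, sheet name and figures are placeholders, and the exact routes should be checked against the documentation linked at the end of this article; treat this as an illustration rather than production code.

    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    TOKEN = "<OAuth access token with Files.ReadWrite scope>"  # placeholder
    ITEM = "<drive item ID of the .xlsx workbook>"             # placeholder
    HEADERS = {"Authorization": "Bearer " + TOKEN}

    # Read the values of a worksheet range so they can feed another application
    rng = requests.get(
        GRAPH + "/me/drive/items/" + ITEM +
        "/workbook/worksheets('Sheet1')/range(address='A1:C10')",
        headers=HEADERS)
    print(rng.json().get("values"))

    # Reuse a calculation Excel already knows, e.g. a loan payment (PMT)
    pmt = requests.post(
        GRAPH + "/me/drive/items/" + ITEM + "/workbook/functions/pmt",
        headers=HEADERS,
        json={"rate": 0.05 / 12, "nper": 120, "pv": -20000})
    print(pmt.json().get("value"))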

The extensibility that this API brings to Excel will further solidify its position in the corporate world as an indispensable tool for a variety of scenarios. Considering that the API is just now making its way into a production-ready state, it will be interesting to see how quickly developers make use of this new tool to extend the reach of the spreadsheet application and how this feature will change the rate at which new applications are built.

If you are looking to build an application using the new API, you can find code samples and documentation here.

Microsoft boosts PKI, ISO certs to harden Azure cloud

Microsoft has bumped up security for its Azure cloud platform by adding support for X.509 certificates for device-level authentication, and bagging an ISO integrity ticket.

Adding X.509 means Microsoft thinks its cloud will be better at handling internet-of-things traffic to the Azure IoT Hub, according to Azure partner director Sam George.

“With Azure IoT support for X.509 certificates, an IoT device can now store a private key locally, and an associated device X.509 certificate generated to identify the device to Azure IoT Hub before the information is transmitted,” George says.

“The benefit to customers in industries such as manufacturing, healthcare and smart cities is that device identity can be transmitted safely and securely from the edge to the cloud while maintaining integrity.”
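
As a rough illustration of that model (the device name and validity period here are made up, and registration details vary), a device could hold a locally generated private key and present a matching self-signed X.509 certificate, whose thumbprint is then registered against the device identity in IoT Hub:

    # Generate a private key and a self-signed X.509 certificate for one device
    openssl req -x509 -nodes -newkey rsa:2048 -sha256 -days 365 \
        -keyout device.key -out device.crt -subj "/CN=my-device-001"

    # The certificate fingerprint (thumbprint) is what gets registered with the hub
    openssl x509 -in device.crt -noout -fingerprint

The private key never leaves the device; only the certificate and its thumbprint are shared with Azure.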

Redmond has also released reading material including an IoT security guide.

Azure senior director Alice Rison is singing praises over Microsoft’s scoring of the ISO 27017:2015 cloud security certificate.

She says it is awarded thanks to Redmond’s compliance with 44 cloud risk and threat model controls.

“Both cloud service providers and cloud service customers can leverage this guidance to effectively design and implement cloud computing information security controls,” Rison says.

“Customers can download the ISO/IEC 27017 certificate which demonstrates Microsoft’s continuous commitment to providing a secure and compliant cloud environment for our customers.” ®

Internet of things early adopters share 4 key takeaways

ARI Fleet Management manages 1.2 million things with wheels across North America and Europe, from telephone company trucks to corporate vehicles to railroad maintenance trucks.


The telematics sensors on its rolling stock of vehicles capture data every three to 30 seconds. “Every two weeks, we get the equivalent of all the data we’ve accumulated in the last twenty years,” says Bill Powell, director of enterprise architecture for the Mt. Laurel, N.J.-based firm.


It’s a carousel of information: ARI can tell from its gyroscopic sensors if drivers are jackrabbiting from stops or slamming on their brakes; it can tell from engine sensors that they’re letting the engines idle too long.


One of the most intriguing and granular pieces in all these terabytes of data is the one that compares where a gas credit card was used, based on the geocode of the vendor, and where the vehicle was at the time. If the differential is more than 20 feet, an ARI audit can show that someone was fueling an unauthorized vehicle.
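
As a toy illustration of that audit check (the coordinates and threshold handling here are invented, and ARI's actual implementation is not described), comparing the two geocodes is a small great-circle calculation:

    from math import radians, sin, cos, asin, sqrt

    def distance_feet(lat1, lon1, lat2, lon2):
        """Great-circle (haversine) distance between two geocodes, in feet."""
        earth_radius_ft = 20902231  # mean Earth radius (~6,371 km) in feet
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * earth_radius_ft * asin(sqrt(a))

    vendor = (39.9526, -75.1652)   # geocode of the fuel vendor (made up)
    vehicle = (39.9610, -75.1700)  # telematics fix at the time of the transaction (made up)
    if distance_feet(*vendor, *vehicle) > 20:
        print("flag for audit: card used away from the vehicle")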



As that example shows, the internet of things (IoT) isn’t just about sensors and data, it’s about using data in context. That makes it an interdisciplinary challenge for IT executives, one that encompasses information technology, operations and business processes.




GE Aviation captures hundreds of megabytes of data anytime a plane lands. The company has created a data lake to analyze that information, alongside scheduling data, weather data, airport curfews and other resources. These advanced analytics lead to increased performance for its customers, which include United Airlines and Southwest Airlines.



How can organizations develop and launch IoT initiatives that truly transform the business from the ground up? Computerworld spoke to a number of early IoT adopters, entities that have gotten their hands dirty in everything from manufacturing and logistics to smart cities and agriculture. Almost all report bumps along the way, but also say they have either achieved or anticipate significant payoffs from their investment. With IoT at last becoming a force in the enterprise, here are four lessons to heed.

Pi 3 booting part I: USB mass storage boot beta

When we originally announced the Raspberry Pi 3, we announced that we’d implemented several new boot modes. The first of these is the USB mass storage boot mode, and we’ll explain a little bit about it in this post; stay tuned for the next part on booting over Ethernet tomorrow. We’ve also supplied a boot modes tutorial over on the Raspberry Pi documentation pages.

Note: the new boot modes are still in beta testing and use the “next” branch of the firmware. If you’re unsure about using the new boot modes, it’s probably best to wait until we release it fully.

How did we do this?

Inside the 2835/6/7 devices there’s a small boot ROM, which is an unchanging bit of code used to boot the device. It’s the boot ROM that can read files from SD cards and execute them. Previously, there were two boot modes: SD boot and USB device boot (used for booting the Compute Module). When the Pi is powered up or rebooted, it tries to talk to an attached SD card and looks for a file called bootcode.bin; if it finds it, then it loads it into memory and jumps to it. This piece of code then continues to load up the rest of the Pi system, such as the firmware and ARM kernel.

While squeezing in the Quad A53 processors, I spent a fair amount of time writing some new boot modes. If you’d like to get into a little more detail, there’s more information in the documentation. Needless to say, it’s not easy squeezing SD boot, eMMC boot, SPI boot, NAND flash, FAT filesystem, GUID and MBR partitions, USB device, USB host, Ethernet device, and mass storage device support into a mere 32kB.

What is a mass storage device?

The USB specification allows for a mass storage class which many devices implement, from the humble flash drive to USB attached hard drives. This includes micro SD readers, but generally it refers to anything you can plug into a computer’s USB port and use for file storage.

I’ve tried plugging in a flash drive before and it didn’t do anything. What’s wrong? 

We haven’t enabled this boot mode by default, because we first wanted to check that it worked as expected. The boot modes are enabled in One-Time Programmable (OTP) memory, so you have to enable the boot mode on your Pi 3 first. This is done using a config.txt parameter.

Instructions for implementing the mass storage boot mode, and changing a suitable Raspbian image to boot from a flash drive, can be found here.
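
In outline (the full procedure is in the guide linked above, and the lines below follow the documented steps as we understand them), enabling the boot mode means booting once from a normal SD card with one extra line in config.txt, then checking that the OTP bit has been set:

    # Add this line to /boot/config.txt, then reboot once to program the OTP bit
    program_usb_boot_mode=1

    # After the reboot, this should show 0x3020000a if the USB boot bit is set
    vcgencmd otp_dump | grep 17:

Note that the OTP change is permanent; the line can be removed from config.txt once the bit is programmed.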

Are there any bugs / problems?

There are a couple of known issues:

  1. Some flash drives power up too slowly. There are many spinning disk drives that don’t respond within the allotted two seconds. It’s possible to extend this timeout to five seconds, but there are devices that fail to respond within this period as well, such as the Verbatim PinStripe 64GB.
  2. Some flash drives have a very specific protocol requirement that we don’t handle; as a result of this, we can’t talk to these drives correctly. An example of such a drive would be the Kingston Data Traveller 100 G3 32G.

These bugs exist due to the method used to develop the boot code and squeeze it into 32kB. It simply wasn’t possible to run comprehensive tests.

However, thanks to a thorough search of eBay and some rigorous testing by our awesome work experience student Henry Budden, we’ve found the following devices work perfectly well:

  • Sandisk Cruzer Fit 16GB
  • Sandisk Cruzer Blade 16GB
  • Samsung 32GB USB 3.0 drive
  • MeCo 16GB USB 3.0

If you find some devices we haven’t been able to test, we’d be grateful if you’d let us know your results in the comments.

Will it be possible to boot a Pi 1 or Pi 2 using MSD?

Unfortunately not. The boot code is stored in the BCM2837 device only, so the Pi 1, Pi 2, and Pi Zero will all require SD cards.

However, I have been able to boot a Pi 1 and Pi 2 using a very special SD card that only contains the single file bootcode.bin. This is useful if you want to boot a Pi from USB, but don’t want the possible unreliability of an SD card. Don’t mount the SD card from Linux, and it will never get corrupted!

My MSD doesn’t work. Is there something else I can do to get it working?

If you can’t boot from the MSD, then there are some steps that you can take to diagnose the problem. Please note, though, this is very much still a work in progress:

  • Format an SD card as FAT32
  • Copy the current next branch bootcode.bin from GitHub onto the SD card
  • Plug it into the Pi and try again

If this still doesn’t work, please open an issue in the firmware repository.
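
For reference, on a Linux machine those steps look roughly like the following; the device name /dev/sdX and the download URL are assumptions, so check both against your own card reader and the current “next” branch of the firmware repository before copying anything:

    # WARNING: make absolutely sure /dev/sdX is the SD card before formatting it
    sudo mkfs.vfat -F 32 /dev/sdX1
    sudo mount /dev/sdX1 /mnt

    # bootcode.bin from the 'next' branch of the raspberrypi/firmware repository (assumed path)
    wget https://github.com/raspberrypi/firmware/raw/next/boot/bootcode.bin
    sudo cp bootcode.bin /mnt/
    sudo umount /mnt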

Microsoft: a Gartner cloud computing leader across IaaS, PaaS, and SaaS

CIOs no longer ask whether they should use cloud, but rather how. According to IDC, seventy percent of CIOs will embrace a cloud-first strategy in 2016. By partnering closely with customers around the world, we see the natural path to enterprise cloud adoption — starting with software services like email and collaboration, then moving to infrastructure for storage, compute and networking and finally embracing platform services to transform business agility and customer engagements. In this journey to adopt the cloud, customers are looking for a vendor who understands and leads in meeting the broad spectrum of their cloud needs.

Today, Gartner has named Microsoft Azure as a leader in its Magic Quadrant for Cloud Infrastructure as a Service for the third year in a row based on completeness of our vision and ability to execute. We are honored by this continued recognition as we are relentless about our commitment and rapid pace of innovation for infrastructure services. With the G series, Azure led with the largest VMs in the cloud and we continue to deliver market leading performance with our recent announcement supporting SAP HANA workloads up to 32 TB. And while Azure is a world class cloud platform for Windows, it’s also recognized for industry-leading support for Linux and other open source technologies. Today, nearly one in three VMs deployed on Azure are Linux. Strong momentum for Linux and open source is driven by customers using Azure for business applications and modern application architectures, including containers and big data solutions. With over sixty percent of the 3,800 solutions in Azure Marketplace built on Linux, including popular open source images by Ubuntu, CoreOS, Bitnami, Oracle, DataStax, Red Hat and others, it’s exciting that many open source vendors considered Microsoft one of the best cloud partners.

While we are proud of our continued leadership in cloud infrastructure, we are committed to delivering the breadth and depth of cloud solutions to support our customers’ natural path to cloud adoption. Microsoft is the only vendor recognized as a leader across Gartner’s Magic Quadrants for IaaS, PaaS and SaaS solutions for enterprise cloud workloads. We are in a unique position with our extensive portfolio of cloud offerings designed for the needs of enterprises, including Software as a Service (SaaS) offerings like Office 365, CRM Online and Power BI and Azure Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). And Microsoft’s cloud vision is a unified story that we’re executing on with the same datacenter regions, compliance commitments, operational model, billing, support and more. The ability to deploy and use applications close to data with consistent identity and a shared ecosystem, means greater efficiency, less complexity, and cost savings.

Many of our customers embrace Identity as a first step in moving to the cloud. Office 365 and Azure share the same identity system with Azure Active Directory therefore providing a simple, friction free experience for our customers. And with Office 365 commercial customers surpassing 70 million monthly active users, Azure adoption is quickly following suit. Once in Azure, customers tend to start with IaaS and then quickly extend to using both IaaS and PaaS models to optimize productivity and embrace new opportunities for business differentiation. Today fifty-five percent of Azure IaaS customers are also deploying PaaS.

The following table summarizes vendors in the leader quadrant across Gartner MQs for IaaS, PaaS and SaaS solutions for key enterprise cloud workloads.

Leader Quadrant Vendors

The true power of Azure is enabling our customers and partners on their cloud journey to realize their unique business goals. Customers and partners like Fruit of the Loom and Boomerang demonstrate this common need and cloud adoption path from Software as a Service (SaaS) to Infrastructure as a Service (IaaS) to Platform as a Service (PaaS).

  • Fruit of the Loom: Office 365 was their “runway” to Azure. Success with Office 365 deployment has led to use of Azure infrastructure and its platform services as they moved their consumer-facing website fruit.com to Azure. To gain insight into how they should market and package their products, Fruit of the Loom is also leveraging platform services such as Azure Machine Learning.
  • Boomerang: An Office 365 ISV takes advantage of Azure to create productivity solutions within Outlook. A key feature for Boomerang is its ability to generate real-time calendar images that are shareable with people outside of the user’s organization. Boomerang relies on Azure’s enterprise-proven infrastructure to support this computationally demanding workload. Their experience with Office 365 led them to look more closely at Azure, and they have started to migrate services from AWS to Azure to leverage Azure’s platform services and Machine Learning capabilities.

We look forward to delivering more on this vision across our portfolio of cloud offerings to our customers and partners. If you’d like to read the full report, “Gartner: Magic Quadrant for Infrastructure as a Service,” you can request it here.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

OpenWrt router card plugs into a mini-PCIe socket

AsiaRF’s “AP620-MPE-1” is an OpenWrt WiFi-ac router based on Mediatek’s MT7620A. It plugs into a mini-PCIe slot, but is somewhat non-standard mechanically. Unlike many other OpenWrt Linux router boards running on MIPS-based WiFi chipsets, AsiaRF’s AP7620-MPE-1 board is not a standalone SBC, but an add-on card that gives its host WiFi-ac routing capabilities.

How fog computing pushes IoT intelligence to the edge

As the Internet of Things evolves into the Internet of Everything and expands its reach into virtually every domain, high-speed data processing, analytics and shorter response times are becoming more necessary than ever. Meeting these requirements is somewhat problematic through the current centralized, cloud-based model powering IoT systems, but can be made possible through fog computing, a decentralized architectural pattern that brings computing resources and application services closer to the edge, the most logical and efficient spot in the continuum between the data source and the cloud.

The term fog computing, coined by Cisco, refers to the need for bringing the advantages and power of cloud computing closer to where the data is being generated and acted upon. Fog computing reduces the amount of data that is transferred to the cloud for processing and analysis, while also improving security, a major concern in the IoT industry.

Here is how transitioning from the cloud to the fog can help deal with the current and future challenges of the IoT industry.

The problem with the cloud

The IoT owes its explosive growth to the connection of physical things and operation technologies (OT) to analytics and machine learning applications, which can help glean insights from device-generated data and enable devices to make “smart” decisions without human intervention. Currently, such resources are mostly being provided by cloud service providers, where the computation and storage capacity exists.

However, despite its power, the cloud model is not applicable to environments where operations are time-critical or internet connectivity is poor. This is especially true in scenarios such as telemedicine and patient care, where milliseconds can have fatal consequences. The same can be said about vehicle to vehicle communications, where the prevention of collisions and accidents can’t afford the latency caused by the roundtrip to the cloud server. The cloud paradigm is like having your brain command your limbs from miles away — it won’t help you where you need quick reflexes.

Moreover, having every device connected to the cloud and sending raw data over the internet can have privacy, security and legal implications, especially when dealing with sensitive data that is subject to separate regulations in different countries.

The fog placed at the perfect position

IoT nodes are closer to the action, but for the moment, they do not have the computing and storage resources to perform analytics and machine learning tasks. Cloud servers, on the other hand, have the horsepower, but are too far away to process data and respond in time.

The fog layer is the perfect junction where there are enough compute, storage and networking resources to mimic cloud capabilities at the edge and support the local ingestion of data and the quick turnaround of results.

A study by IDC estimates that by 2020, 10 percent of the world’s data will be produced by edge devices. This will further drive the need for more efficient fog computing solutions that provide low latency and holistic intelligence simultaneously.

Fog computing has its own supporting body, the OpenFog Consortium, founded in November 2015, whose mission is to drive industry and academic leadership in fog computing architecture. The consortium offers reference architectures, guides, samples and SDKs that help developers and IT teams understand the true value of fog computing.

Already, mainstream hardware manufacturers such as Cisco, Dell and Intel are teaming up with IoT analytics and machine learning vendors to deliver IoT gateways and routers that can support fog computing. An example is Cisco’s recent acquisition of IoT analytics company ParStream and IoT platform provider Jasper, which will enable the network giant to embed better computing capabilities into its networking gear and grab a bigger share of the enterprise IoT market, where fog computing is most crucial.

Analytics software companies are also scaling products and developing new tools for edge computing. Apache Spark is an example of a data processing framework based on the Hadoop ecosystem that is suitable for real-time processing of edge-generated data.

Other major players in the IoT industry are also placing their bets on the growth of fog computing. Microsoft, whose Azure IoT is one of the leading enterprise IoT cloud platforms, is aiming to secure its dominance over fog computing by pushing its Windows 10 IoT to become the OS of choice for IoT gateways and other high-end edge devices that will be the central focus of fog computing.

Does the fog eliminate the cloud?

Fog computing improves efficiency and reduces the amount of data that needs to be sent to the cloud for processing. But it’s here to complement the cloud, not replace it.

The cloud will continue to have a pertinent role in the IoT cycle. In fact, with fog computing shouldering the burden of short-term analytics at the edge, cloud resources will be freed to take on the heavier tasks, especially where the analysis of historical data and large datasets is concerned. Insights obtained by the cloud can help update and tweak policies and functionality at the fog layer.

And there are still many cases where the centralized, highly efficient computing infrastructure of the cloud will outperform decentralized systems in performance, scalability and costs. This includes environments where data needs to be analyzed from largely dispersed sources.

It is the combination of fog and cloud computing that will accelerate the adoption of IoT, especially for the enterprise.

What are the use cases of fog computing?

The applications of fog computing are many, and it is powering crucial parts of IoT ecosystems, especially in industrial environments.

Thanks to the power of fog computing, New York-based renewable energy company Envision has been able to obtain a 15 percent productivity improvement from the vast network of wind turbines it operates.

The company is processing as much as 20 terabytes of data at a time, generated by 3 million sensors installed on the 20,000 turbines it manages. Moving computation to the edge has enabled Envision to cut down data analysis time from 10 minutes to mere seconds, providing them with actionable insights and significant business benefits.

IoT company Plat One is another firm using fog computing to improve data processing for the more than 1 million sensors it manages. The company uses the ParStream platform to publish real-time sensor measurements for hundreds of thousands of devices, including smart lighting and parking, port and transportation management and a network of 50,000 coffee machines.

Fog computing also has several use cases in smart cities. In Palo Alto, California, a $3 million project will enable traffic lights to integrate with connected vehicles, hopefully creating a future in which people won’t be waiting in their cars at empty intersections for no reason.

In transportation, it’s helping semi-autonomous cars assist drivers in avoiding distraction and veering off the road by providing real-time analytics and decisions on driving patterns.

It also can help reduce the transfer of gigantic volumes of audio and video recordings generated by police dashboard and video cameras. Cameras equipped with edge computing capabilities could analyze video feeds in real time and only send relevant data to the cloud when necessary.

What is the future of fog computing?

The current trend shows that fog computing will continue to grow in usage and importance as the Internet of Things expands and conquers new grounds. With inexpensive, low-power processing and storage becoming more available, we can expect computation to move even closer to the edge and become ingrained in the same devices that are generating the data, creating even greater possibilities for inter-device intelligence and interactions. Sensors that only log data might one day become a thing of the past.

How to solve Windows 10 crashes in less than a minute

When I began to work with Windows 10, I was able to shut the laptop down without Googling to find the power button icon; a great improvement over Windows 8. My next interest was determining what to do when the OS falls over, generating a Blue Screen of Death. This article will describe how to set your system up so that, when it does, you’ll be able to find the cause of most crashes in less than a minute for no cost.

In Windows 10, the Blue Screen looks the same as in Windows 8/8.1. It’s that screen with the frown emoticon and the message “Your PC ran into a problem . . .” This screen appears more friendly than the original Blue Screens, but a truly friendly screen would tell you what caused the problem and how to fix it; something that would not be difficult since most BSODs are caused by misbehaved third party drivers that are often easily identified by the MS Windows debugger.

For earlier versions of the OS, refer to the following:

  • Windows 8: How to solve Windows 8 crashes in less than a minute (article); How to solve Windows 8 crashes (slide show)
  • Windows 7: Solve Windows 7 crashes in minutes
  • Windows XP/2000: How to solve Windows crashes in minutes

Just to be clear, this article deals with system crashes, not application crashes or system hangs. In a full system crash, the operating system has concluded that something has gone so wrong (such as memory corruption) that continued operation could cause serious or catastrophic results. Therefore, the OS attempts to shut down as cleanly as possible – saving system state information in the process – then restarts (if set to do so) as a refreshed environment and with debug information ready to be analyzed.

Why Windows 10 crashes

To be sure, Windows has grown in features and size since its introduction in 1985 and has become more stable along the way. Nevertheless, and in spite of the protection mechanisms built in to the OS, crashes still happen.

Under what was once known as the Ring Protection Scheme, Windows 10 operates in both User Mode (Ring 3) and Kernel Mode (Ring 0). The idea is simple: run core operating system code and device drivers in Kernel Mode, and run software applications and user mode drivers in User Mode. For applications to access the services of the OS and the hardware, they must call upon Windows services that act as proxies. Thus, by blocking User Mode code from having direct access to Kernel Mode, OS operations are generally well protected.

The problem is when Kernel Mode code goes awry. In most cases, it is third-party drivers living in Kernel Mode that make erroneous calls, such as to non-existent memory or to overwrite OS code, resulting in system failures. And, yes, it is true that Windows itself is seldom at fault.

Where to get help with Windows 10 crashes

There are plenty of places to turn to for help with BSODs, a few of which are listed below. For example, ConfigSafe tells you what drivers have changed and AutorunCheck tells you what Windows Autorun settings have changed. Both help nail the culprit in a system failure. And everyone should have the book Windows Internals; it is the bible that every network admin and CIO should turn to, especially Chapter 14 “Crash Dump Analysis,” which is in Part 2 of the book.

When I asked Mark Russinovich, one of the authors, why a network admin or CIO – as opposed to a programmer – should read it, he said, “If you’re managing Windows systems and don’t know the difference between a process and a thread, how Windows manages virtual and physical memory, or how kernel-mode drivers can crash a system, you’re handicapping yourself. Understanding these concepts is critical to fully understanding crash dumps and being able to decipher their clues.”

So, while WinDbg provides the data about the state of a system when it fell over, Windows Internals turns that cryptic data into actionable information that helps you resolve the cause.

WHERE TO FIND BSOD HELP

Name                Type       Location
About.com           Guide      http://abt.cm/2apRVO9
AutorunCheck        Tool       http://bit.ly/2apSuI9
CNET                Forum      http://cnet.co/2aqVdUA
ConfigSafe          Tool       http://bit.ly/2aqUenB
Experts-Exchange    Help site  http://bit.ly/2apSaJ4
FiretowerGuard      Tool       http://bit.ly/2aqUoeK
Windows 10 Forums   Forum      http://bit.ly/2apRROE
Microsoft Autoruns  Tool       http://bit.ly/2aqUByx
Microsoft DaRT      Tool       http://bit.ly/2apS7gq
TechNet             Forum      http://bit.ly/2aqU3sv
TenForums           Forum      http://bit.ly/2apRXFP
WhoCrashed          Tool       http://bit.ly/2aqUUtf
WinDbg              Tool       http://bit.ly/2apSdEX
Windows Internals   Book       http://bit.ly/2aqUvah
WindowsSecrets      Forum      http://bit.ly/2aqUEtY

What is a memory dump?

A memory dump is a copy or a snapshot of the contents of a system’s memory at the point of a system crash. Dump files are important because they can show who was doing what at the point the system fell over. Dump files are, by the nature of their contents, difficult to decipher unless you know what to look for.

Windows 10 can produce five types of memory dump files, each of which are described below.

1. Automatic Memory Dump

Location: %SystemRoot%\Memory.dmp
Size: Size of OS kernel

The Automatic memory dump is the default option selected when you install Windows 10. It was created to support the “System Managed” page file configuration, which has been updated to reduce the page file size on disk, primarily for small SSDs, but which will also benefit servers with large amounts of RAM. The Automatic memory dump option produces a kernel memory dump; the difference is that selecting Automatic allows the SMSS process to reduce the page file to smaller than the size of RAM.

To check or edit the system paging file size, go to the following:

Windows 10 button | Control Panel | System and Security | System | Advanced system settings | Performance | Settings | Advanced | Change

2. Active Memory Dump

Location: %SystemRoot%\Memory.dmp
Size: Triple the size of a kernel or automatic dump file

The Active memory dump is a recent feature from Microsoft. While much smaller than a complete memory dump, it is probably three times the size of a kernel dump. This is because it includes both the kernel and the user space. On my test system with 4GB RAM running Windows 10 on an Intel Core i7 64-bit processor the Active dump was about 1.5GB. Since, on occasion, dump files have to be transported I compressed it, which brought it down to about 500MB.

3. Complete Memory Dump

Location: %SystemRoot%\Memory.dmp
Size: Installed RAM plus 1MB

A complete (or full) memory dump is the largest dump file because it includes all of the physical memory that is used by the Windows OS. You can assume that the file will be about equal to the installed RAM. With many systems having multiple GBs, this can quickly become a storage issue, especially if you are having more than the occasional crash. Generally speaking, stick to the automatic dump file.

4. Kernel Memory Dump

Location: %SystemRoot%\Memory.dmp
Size: ≈size of physical memory “owned” by kernel-mode components

Kernel dumps are roughly equal in size to the RAM occupied by the Windows 10 kernel, about 700MB on my test system. Compression brought it down nearly 80% to 150MB. One advantage of a kernel dump is that it contains the binaries which are needed for analysis. The Automatic dump setting creates a kernel dump file by default, saving only the most recent, as well as a minidump for each event.

5. Small Memory Dump (a.k.a. a mini dump)

Location: %SystemRoot%\Minidump
Size: At least 64K on x86 and 128K on x64 (279K on my W10 test PC)

Minidumps include the memory pages pointed to by the registers (given their values at the point of the fault), as well as the stack of the faulting thread. What makes them small is that they do not contain any of the binary or executable files that were in memory at the time of the failure. However, those files are critically important for subsequent analysis by the debugger.

As long as you are debugging on the machine that created the dump file, WinDbg can find them in the System Root folders (unless the binaries were changed by a system update after the dump file was created). Alternatively, the debugger should be able to locate them automatically through SymServ, Microsoft’s online store of symbol files. Unless changed by a user, Windows 10 is normally set to create the automatic dump file for the most recent event and a minidump for every crash event, providing an historic record of all system crash events for the life of the system.
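
To give a feel for the “less than a minute” analysis this article promises, the debugger session usually amounts to little more than the following (the dump file name and symbol cache path are examples; point WinDbg at whichever dump you want to inspect):

    windbg -z C:\Windows\Minidump\080116-12345-01.dmp

    .sympath srv*C:\symbols*https://msdl.microsoft.com/download/symbols
    .reload
    !analyze -v
    lmvm <module named by the analysis>

!analyze -v names the module that was on the stack at the point of the fault (very often a third-party driver), and lmvm then shows that module’s path, version and timestamp so you know what to update or remove.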

10 Tricks to Make Yourself a Google Drive Master

Think you know Google’s online productivity suite back to front? Whether you’ve been using Google Drive for five minutes or five years, there’s always more to learn, and in that spirit we present 10 valuable tips and tricks for mastering the service.


1. Enable Drive’s offline features

Google Drive can work offline, but you have to activate the feature first: click the cog icon on the front page of Drive, then choose Settings. On the General tab tick the box marked Sync… offline and Drive begins caching recent Docs, Sheets, Slides and Drawings to your computer. You can’t watch videos or open photos while you’re offline, but you can view, edit and create files in the native Google Drive formats when you don’t have connectivity.


2. Search inside PDFs and images

Did you know Google Drive will scan through the text in PDFs and images and make it fully searchable? Just upload a photo of a clearly typed PDF and try it. You can even open up and edit these files too: right-click on a PDF or image, then choose Open with and Google Docs. Depending on the quality of the file and how legible the text is, you might not get perfect results every time, but it’s a useful option to have ready for scanned documents.


3. Find your files more easily

Google’s pretty good at search and so you would expect there to be plenty of advanced search features available in Drive—click the drop-down arrow next to the search box to see some of them. Use “owner:[email protected]” to find documents shared by a certain someone, or “before:yyyy-mm-dd” or “after:yyyy-mm-dd” to restrict your search by date. Add “title:searchterms” to search document titles rather than the whole text of each one.


4. Scan images in a snap

If you’ve installed the Google Drive app for Android then you can use your phone as a portable scanner (the feature hasn’t yet arrived on iOS alas). From the front screen of the app, tap the large plus icon, then choose Scan from the pop-up menu. You can rotate and crop images manually (though the automatic detection works pretty well), plus create multipage documents, and your scans are instantly uploaded to Google Drive as PDFs.


5. Take your files back in time

Drive keeps older versions of your files just in case you want to go back to them (very handy if you’re working on documents with other people). For a native Drive file, open it and choose File then See revision history; for any other type of file, right-click on it in the document list and pick Manage versions. The pop-up menu to the side of each version lets you download, delete, and permanently keep files past the standard 30-day window.


6. Dictate documents with your voice

Typing has been around for a long, long time but it’s not the only option for creating documents—you can also dictate them using your voice, and you might find it’s faster for you. Inside a document select Tools, then Voice typing, then click the microphone and you’re away: right-click underlined words to see alternatives if you need to. Various voice commands like “italics”, “go to the end of the line” or “question mark” work as well.


7. Find files with Google Now

Here’s another tip for finding files in your Google Drive account: enlist the help of the Google Now digital assistant to do it. Launch the Google app voice search (note this only works on Android for now) then say “search Drive for” followed by your request—you can only look for specific search terms rather than anything advanced, but it’s a useful option nevertheless. Tap the back arrow (top left) to go to the main Google Drive interface.


8. See the biggest files in your Google Drive

Looking to free up some space in Google Drive to avoid going over your limit? It’s easy to do—from the front screen of the web app, click the link on the left that tells you how much space you’re using, then click the small Drive entry (or just go straight to this link). The biggest files are at the top and you can click the Quota used heading to see the smallest ones instead (remember native Drive files don’t count towards your storage quota).


9. Add links between documents

You’re probably already familiar with adding links to external sites from inside your documents but you can also link between various Google Drive files as well—very handy for research articles and the like. Select Insert then Link as normal, then type out a search term or two to find matching documents from your Google Drive account. If you prefer you can copy the URL at the top of any of your Drive files and paste it into the link field.


10. Sync to and from the desktop

Install the Google Drive desktop client for Mac or Windows and you get access to all of your files on your local computer too (you can pick and choose which folders get synced). Not only does it make it super simple to upload folders and files (simply copy them into the Drive folder), it also gives you offline access to any of your files you might need on the go—and changes are automatically synced back to the cloud when you get back online.

Azure Security Center Generally Available

Microsoft has announced the general availability of Azure Security Center, a centralized solution for monitoring the security of your Azure deployment.

What is Azure Security Center?

Microsoft announced Azure Security Center at its online event, AzureCon 2015, and launched a public preview on December 2nd, 2015. Security Center is part of Microsoft’s vision for enterprise security, which recognizes that old methods based on independent solutions, such as a firewall and antivirus, are no longer enough to protect a business against today’s attacks.

Azure Security Center collects data from your deployment in Azure, including the fabric, the Azure resources that you have deployed, and even third-party solutions such as application gateways or next generation firewalls. The goal is to provide a unified view of the security status of your network. Imagine this scenario:

  • A database server is experiencing an unusually large amount of activity from a remote login.
  • The firewall is showing a large amount of data being sent from the database server to an IP address in Asia.

The firewall is configured to allow outbound data, so there’s nothing wrong there. The database server has been configured to allow remote logins, and bursts of activity aren’t unusual. So malware scanning, the database, and the firewall see nothing wrong. But you have put the pieces together, and realized that there’s probably an attack in progress via a compromised identity, and the attacker is downloading the database to an IP address in Asia. This is the sort of attack that Azure Security Center will recognize because it sees the whole picture in your deployment, and more.

Azure Security Center overview [Image Credit: Aidan Finn]

Powered by Azure Machine Learning, Azure Security Center understands what is going on not just in your subscription, but also in all other monitored subscriptions, the Azure fabric, and reportedly, in all of Microsoft. This gives Azure Security Center a great understanding of attacks. If a seemingly harmless pattern has been seen before and determined to be an attack, Azure Security Center can warn you. Azure Security Center does more than just monitoring.

For those of you that are inclined, you can explore your Azure Security Center data using two different Power BI dashboards, which will require additional per-user licensing for Power BI.

Virtual Machine Support

At this time, Azure Security Center is focused on virtual machines, but support for cloud services (Classic or ASM deployments) and SQL databases will be added in the future. The following guest operating systems are supported at this time (from the FAQ):

  • Windows Server 2008 R2
  • Windows Server 2012
  • Windows Server 2012 R2
  • Ubuntu versions 12.04, 14.04, 15.10, 16.04
  • Debian versions 7, 8
  • CentOS versions 6.*, 7.*
  • Red Hat Enterprise Linux (RHEL) versions 6.*, 7.*
  • SUSE Linux Enterprise Server (SLES) versions 11.*, 12.*

Recommendations

Microsoft has a lot of security best practices for Azure, operating systems, and applications. Azure Security Center gathers information and analyzes it against these best practices. A set of recommendations is made, based on what Azure Security Center finds. For example, by default, it will recommend that you deploy a firewall in a virtual appliance (in a DMZ). In the example below, a recommendation has found an issue with a network security group configuration.

Azure Security Center recommendation about a network security group [Image Credit: Aidan Finn]

Security Policy

There is a high-level mechanism for controlling which recommendations will be offered from Azure Security Center. For example, I might have decided that I was not going to deploy a security virtual appliance, and rely just on NAT rules and network security groups. I can disable recommendations for web application firewalls and next generation firewalls in Security Policy.

Configuring recommendation policies in Azure Security Center [Image Credit: Aidan Finn]

This mechanism allows you to control noisy alerts. Not all of your deployments in a subscription will require the same policies; you can set a global policy, affecting all resource groups by inheritance, or you can create a policy for individual resource groups.

Email Alerts

As with all monitoring, I would not expect someone to sit there all day looking at and refreshing the Azure Security Center blades in the Azure Portal – although that’s how some stuck-in-the-1990s IT managers think. We should always manage by exception; that means we need a way to receive alerts when Azure Security Center detects an anomaly.

You can use the below screen to configure an email address (use a distribution group or a ticketing system) and a phone number (a help desk or similar) so that Microsoft can contact you if they find an attack. By default, emails about high severity alerts are disabled, but you can enable them.

Email alerts from Azure Security Center [Image Credit: Aidan Finn]

Examples of alerts that you might receive are:

  • A known malicious IP address communicating with your virtual machines.
  • Brute force attacks.
  • Security alerts from a partner security solution that can integrate with Azure Security Center.

Pricing

As with the Operations Management Suite (OMS), Microsoft has gone with a freemium model with Azure Security Center. There are two types of charge. The first element is storage consumed, which is charged for even during a free trial of Azure Security Center. I have not yet found what kind of storage Azure Security Center consumes, but I suspect that it is blob storage.

There are two Azure Security Center plans:

  • Free: This gives you a basic solution including security policy and recommendations, integration with partner solutions, and basic alerting.
  • Standard: This offering adds advanced threat detection (the really cool stuff) to the free plan.

There is also a 90-day free trial of the Standard plan, which will automatically transition to the paid-for Standard plan at the end of the trial.

Azure Security Center pricing [Image Credit: Microsoft]

The Standard plan charges monthly for each monitored/managed node; so the question is, what is a node? That depends. Right now, only virtual machines are monitored, and each machine counts as one node. So if I monitor 10 virtual machines for 1 month, then I will be charged €126.50 for monitoring those machines – the price is pro-rated on a daily basis.

Modular Moto Z Android phone supports DIY and RPi HAT add-ons

Motorola and Element14 have launched a development kit for creating add-on modules for the new modular Moto Z smartphone, including an adapter for RPi HATs. We don’t usually cover smartphones here at HackerBoards because most don’t offer much opportunity for hardware hacking. Yet, Lenovo’s Motorola Mobility subsidiary has spiced up the smartphone space this week […]

HP awarded £1.95m in reseller grey market fraud case

Hewlett Packard Enterprise was this week awarded £1.95m after a UK High Court judge ruled against reseller minnow International Computer Purchasing over allegations it abused special bid pricing.

The US giant told us the successful claim against ICP and director Matthew Archer was for “fraud, conspiracy and inducement of breach of contract”.

The case was heard in April and judgement was passed on 26 July.

Specifically, the court case related to “abuse” of HPE’s “partner programme”, with ICP alleged to have bagged more than £1.5m in discounts illegally.

“HPE is satisfied with the verdict,” said Marc Waters, acting UK and Ireland boss. “Grey marketing is a serious problem for the industry in terms of lost sales, margin erosion, poor customer experiences and reputational damage.”

He said HPE has a “grey market avoidance programme” in operation and the “outcome of this case clearly demonstrates that we will not hesitate to take court action to enforce our rights”.

A spokesman for ICP and Archer sent us a statement. “We are extremely disappointed and surprised by the outcome. We strongly deny any wrongdoing and must now consider all options available to us.”

Special bid pricing is a mechanism to provide extra-sharp discounts to resellers in strategic customer accounts, but it is open to misuse. HPE is right to crack down on this but is seemingly doing so after falling profits.

Products that are heavily discounted under special bids are supposed to go to designated customers and HPE is understood to be auditing its books in the UK to identify those flouting the Ts&Cs.

One source told us “HPE is digging hard” to uncover technical abuse. “This is not just brokers but resellers are also finding ways to manipulate the system”. We have heard that many of the top resellers employ someone to maximise rebates and get the most out of special bids. ®

UberCENTRAL lets businesses request and pay for customer rides

You can lead a horse to water and hope they’ll buy something when they’re there – or at least, that’s how I think the expression goes after learning that Uber is launching a new program called UberCENTRAL to let businesses of any size request, pay for and manage rides for customers from a centralized dashboard.

UberCENTRAL is a project that manages to target both low- and high-income markets at once. As Uber notes in a blog post, it could just as easily work for a business whose core customers are unlikely to have smartphones, like senior citizens in Atlanta, as it could while providing a door-to-door white glove valet experience for upscale Bloomingdale shoppers.

It’s also ideal for more traditional uses, like arranging airport service for hotel guests or providing rides for clients of doctor or dental clinics undergoing outpatient procedures. And as I alluded to at the outset, businesses with particularly high margin goods and services could use this to bring people in the door, rather than just for sending them on their way post-purchase.

Long-term, UberCENTRAL plants a seed that could grow into something much bigger. A centralized dispatch app that’s accessible to business customers is likely to be a key ingredient in the successful deployment of a network of on-demand driverless vehicles, something that Uber is publicly working towards. With the help of technologies like changeable interactive in-vehicle displays, you could also foresee a time when businesses could effectively brand an on-demand fleet, adding to the overall customer effect.

UberCENTRAL is available to businesses in the U.S. and Canada starting today, with expansion elsewhere likely to follow later (Uber invites businesses in other markets to sign up for expansion updates). It seems like a smart way for businesses to quickly add value to their customers’ overall experience, which could be an important distinction in a retail environment when the temptation to stay home and shop online is strong.

The most influential game developers of all time

The content below is taken from the original (The most influential game developers of all time), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Most Influential Game Developers
You’ve played their games, now learn their names.

AWS Application Discovery Service Update – Agentless Discovery for VMware

The content below is taken from the original (AWS Application Discovery Service Update – Agentless Discovery for VMware), to continue reading please visit the site. Remember to respect the Author & Copyright.

As I wrote earlier this year, AWS Application Discovery Service is designed to help you to dig in to your existing environment, identify what’s going on, and provide you with the information and visibility that you need to have in order to successfully migrate your systems and applications to the cloud (see my post, New – AWS Application Discovery Service – Plan Your Cloud Migration, for more information).

The discovery process described in my blog post makes use of a small, lightweight agent that runs on each existing host. The agent quietly and unobtrusively collects relevant system information, stores it locally for review, and then uploads it to Application Discovery Service across a secure connection on port 443. The information is processed, correlated, and stored in an encrypted repository that is protected by AWS Key Management Service (KMS).

In virtualized environments, installing the agent on each guest operating system may be impractical for logistical or other reasons. Although the agent runs on a fairly broad spectrum of Windows releases and Linux distributions, there’s always a chance that you still have older releases of Windows or exotic distributions of Linux in the mix.

New Agentless Discovery
In order to bring the benefits of AWS Application Discovery Service to even more AWS customers, we are introducing a new, agentless discovery option today.

If you have virtual machines (VMs) that are running in the VMware vCenter environment, you can use this new option to collect relevant system information without installing an agent on each guest. Instead, you load an on-premises appliance into vCenter and allow it to discover the guest VMs therein.

The vCenter appliance captures system performance information and resource utilization for each VM, regardless of what operating system is in use. However, it cannot “look inside” of the VM and as such cannot figure out what software is installed or what network dependencies exist. If you need to take a closer look at some of your existing VMs in order to plan your migration, you can install the Application Discovery agent on an as-needed basis.

Like the agent-based model, agentless discovery gathers information and stores it locally so that you can review it before it is sent to Application Discovery Service.

After the information has been uploaded, you can explore it using the AWS Command Line Interface (CLI). For example, you can use the describe-configurations command to learn more about the configuration of a particular guest:

You can also export the discovered data in CSV form and then use it to plan your migration. To learn more about this feature, read about the export-configurations command.

Getting Started with Agentless Discovery
To get started, sign up here and we’ll provide you with a link to an installer for the vCenter appliance.

Jeff;

 

Server vendor has special help desk for lying, incompetent sysadmins

The content below is taken from the original (Server vendor has special help desk for lying, incompetent sysadmins), to continue reading please visit the site. Remember to respect the Author & Copyright.

On-Call Welcome again to On-Call, our festive Friday frolic through readers’ recollections of jobs gone bad.

This week, something a little different from reader “DB” who says “I do server hardware warranty support for a known enterprise server vendor.”

DB’s been at it for 20 years and says he’s spent his career working on PCs, handhelds, networks and servers.

“I fix things,” says DB. “That’s what I do.”

Over 20 years DB has seen plenty. A particular low light came in the 1990s, when he had to deal with newly-minted Microsoft Certified Support Engineers “who hadn’t ever touched a piece of hardware, solved a real problem, or provided a real solution to any problem.”

DB labels those folks “Memorization geeks who’d passed tests, and gave the appearance of being experts when in fact, they didn’t know how to do anything.”

These days he thinks hell desk enemy number one is “useless System Administrators who create config problems when they make config mistakes and try to cover it by claiming hardware failure.”

“They resist sending logs but the diagnostic logs always reveal that no hardware has failed and that it is in fact, config.”

DB says the company he works for now has a whole department that handles calls like that. “A lot of useless sysadmins get transferred to the ‘How To’ team that I don’t work on,” he says, insisting he would need “a significant bump up in pay to add that to my current duties.”

He’s also reserving a place in an ice moon prison for callers who appear to work for third-party support providers and just won’t do anything other than insist on a visit from a tech. He suspects that’s because those support providers don’t actually have feet on the street everywhere they claim to, and aren’t capable of doing any meaningful support remotely.

DB’s tale makes for a rather different On-Call that we’re keen to explore. So if you work on a hell desk, or at a vendor in another capacity, or just have other tales of jobs gone pear-shaped, write to me and you could find yourself in a future edition of this crazy continuing column. ®

Uber is making it easier for companies to offer free rides

The content below is taken from the original (Uber is making it easier for companies to offer free rides), to continue reading please visit the site. Remember to respect the Author & Copyright.

Uber is making it simpler for businesses to offer transportation for their customers by offering UberCENTRAL, a new dashboard that allows businesses to request, manage and pay for Uber rides for their patrons.

UberCENTRAL will work across any tablet or browser and is available today. Businesses can make multiple requests for rides from one account, and trip details are sent via SMS rather than an app, so those without smartphones can still participate in the program. Using the app, business owners will be able to track rides and locations as well as billing from one centralized hub.

The app is free to use, and it sounds as though it’s going to make things a whole lot simpler for customers who don’t actually want to sign up for Uber and use the service (or share credit card details). Ordering a ride to an attraction from a hotel or setting up cars to cruise over to the store is a lot less complicated when the business is taking care of all the specifics, especially the price.

Source: Uber

Virgin America’s app has Spotify playlists based on your trip

The content below is taken from the original (Virgin America’s app has Spotify playlists based on your trip), to continue reading please visit the site. Remember to respect the Author & Copyright.

Virgin America revealed a major overhaul to its website back in 2014, and now it finally has an app for Android and iOS. As you might expect, the retooled mobile software has a similar look and feel to the web portal, but you can use it to book flights, manage upgrades and access boarding passes on the go. There’s a lot more playful illustration than you’ve seen in other airline apps, consistent with the approach Virgin takes to air travel. What’s more, there’s Spotify integration as well, offering an easy way to play music during your trip.

In fact, Virgin America is calling the partnership a "first-of-its kind trip soundtrack mobile feature on an airline app." How does it work? Well, once you check in, you can stream one of Spotify’s "Mood Lists" that are inspired by cities around the world. Users will be privy to a playlist that’s based on their destination, so in theory you’ll get a new mix of songs for each leg of your journey. If that sounds familiar, the streaming service recently revealed an Out of Office playlist tool that also compiles a collection of tracks inspired by where you’re traveling that can be used in those automatic email responses. The collaboration isn’t too surprising though, since flyers can already stream music from Spotify during Virgin flights.

While the new Virgin America app isn’t ready for the masses, select Elevate members and other frequent flyers will be privy to a beta test "in the coming weeks." If you didn’t get an invite to the test phase, you can sign up here to try and get in. Don’t mind waiting a little longer? The airline says both the Android and iOS versions of the app are slated to launch "later this summer."

Source: Virgin America

Embedded Linux Conference Europe schedule on tap

The content below is taken from the original (Embedded Linux Conference Europe schedule on tap), to continue reading please visit the site. Remember to respect the Author & Copyright.

The schedule for the Oct. 11-13 ELC Europe and OpenIoT Summit in Berlin has been posted, with co-located events on Yocto, RTL, tracing, and OpenWrt. Last year, the Embedded Linux Conference Europe (ELCE) in Dublin was co-located with the European versions of LinuxCon and CloudOpen, but this year it comes a week later. LinuxCon Europe […]

White boxes are now ready for prime time

The content below is taken from the original (White boxes are now ready for prime time), to continue reading please visit the site. Remember to respect the Author & Copyright.

White box switches have been around for years, but adoption has been limited to niche companies that have large engineering departments. The rise of software-defined networking (SDN) has brought them into the public eye, though, as a lower-cost alternative to traditional network hardware. In fact, some of the early messaging around SDN revolved around using white boxes as a complete replacement for all network hardware.

Despite the promise that SDN brought, the use of white boxes has been limited for a couple of reasons. The first is that historically, any organization that wanted to leverage a white box switch needed to have a number of technical specialists that many enterprises do not have. This would include network programmers and engineers fluent in Linux. These skills are commonly found in companies such as Facebook, Google and Amazon, but not so much in your average enterprise.

The other reason is that the operational costs of running a white box could be high. While the price point of the individual boxes is low, the cost of hiring programmers, support people and other staff drives the operational costs up. It’s hard to justify lower hardware costs at the expense of an increase in operational costs.

However, much has changed in the past few years, and white boxes have come a long way from where they were. I believe they are now ready for a broader range of companies to use. I’m sure many of you reading this will be skeptical, as you may have taken a look at white boxes in the past and decided they weren’t for you. But if you do a little due diligence, you can see that white boxes have improved significantly in the following areas:

  • Cost and reliability. It may seem odd to put cost and reliability in the same bucket, but they do go together because we generally believe things that cost more are also more reliable.

    The reality is that the silicon and other hardware are often sourced from the same companies that mainstream hardware vendors use. What the customer is often paying for is the software that rides on top of the hardware and the logo. From a reliability standpoint, white boxes are on par with brand-name systems because they are actually the same hardware. 

  • Features and capabilities. The question for buyers with respect to white box features is, “Are you compromising the features you need to run a network?” To answer that, one must understand the features required for the role white boxes play. There’s no question that white boxes are not at feature parity with layer 2/3 switches for uses such as campus switching or aggregation.

    However, that doesn’t really matter because no one would buy a white box for that. White boxes typically are used as a top-of-rack switch and/or as part of an SDN deployment, and white boxes are at feature parity for those use cases. They support industry standards such as OpenFlow, are highly programmable and work with orchestration tools such as Ansible, Chef and Puppet. 

    Also, white boxes tend to have strong telemetry capabilities and are more open so network administrators can get whatever information they need, when they require it for whatever purpose. In fact, in this area, it’s fair to say white boxes are often superior to traditional layer 2/3 switches. 

    Think of a traditional switch as a Swiss Army knife, with a number of features that you may or may not use. A white box is more like a hunting knife: it has no spoon or fork, but it is optimized for one specific task. Its capabilities are geared towards new workflows and where the SDN industry is headed.

  • Network operations. Here is where white boxes have taken a hit, mostly because there’s a fear of the unknown. Most engineers, even novice ones, understand the process of pulling a Cisco switch out of the box and getting it up and running. With white boxes, there’s much more uncertainty, and questions pop into network managers’ heads that make them skittish: things like “Do I have to write my own operating system?” “How do I install a network operating system?” “What do I buy?” “What are all the steps involved?” “Who will provide me support?”

Those are all valid concerns, and I understand why they would give someone the heebie-jeebies. The lower cost and flexibility are great, but if the box is harder to get up and running and manage, then all the benefit you hoped to gain would be wiped out by the complexity.

But times have changed, and network engineers should no longer despair, because white boxes can now be purchased from mainstream network vendors such as Dell and HP. Also, these will come shipped with tried-and-true network operating systems from vendors such as Pica8 and Cumulus.

Also, when one purchases a white box from a vendor like HP or Dell, those suppliers will offer the kind of technical support most engineers need. So, if the organization is used to procuring a solution from a traditional supplier that includes hardware, software and support, the experience with a white box will be almost identical. Lastly, technical expertise around white boxes has grown: the number of operators familiar with ONIE and ODM white box vendors is much greater. This has increased the learning and best practices, making the white box universe much larger, and it’s growing daily.

From the interviews I have done with mainstream companies, I know there is tremendous interest in white boxes. Despite the maturity of the products, there is still a fair amount of trepidation around them. It’s my opinion that white boxes are certainly ready for mainstream adoption. Obviously they aren’t for every use case, but in the right situation, like an SDN deployment, they can be as good as or better than traditional switches, with a much lower price point and equivalent operational costs.

My recommendation is that instead of fearing the unknown, take one out for a test drive and see for yourself whether it meets your needs.

Microsoft gives Office 365 a major upgrade

The content below is taken from the original (Microsoft gives Office 365 a major upgrade), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft has announced a number of features coming to Office 365 as part of the July 2016 update: a group of “cloud-powered intelligent services” designed to save time and improve productivity for users of Word, PowerPoint and Outlook. The news came in an Office blog post by Kirk Koenigsbauer, corporate vice president for the Office team.

New to Word

Word is getting two significant new features, called Researcher and Editor. As its name implies, Researcher is designed to help the user find reliable sources of information by using the Bing Knowledge Graph to search for sources, and it will properly cite them in the Word document.

Microsoft will expand Researcher’s body of reference materials to also include sources such as national science and health centers, well-known encyclopedias, history databases and more. It will also bring Researcher to mobile devices so you are not limited to just your PC. 

If Researcher helps you start your paper, Editor helps you to finish it. This new feature builds on the already-existing spellchecker and thesaurus to offer suggestions on how to improve your overall writing. In addition to the wavy red line under a misspelled word and the wavy blue line under bad grammar, there will be a gold line for writing style. The initial writing style covered will be for clarity, but Microsoft promises that Editor will get better over time.

PowerPoint Features

As for PowerPoint, Microsoft already launched two new features, Designer and Morph, last November, allowing users to add some flair to their presentations. Now the company is offering Zoom, a feature that lets you easily create “interactive, non-linear presentations.”

Instead of the 1-2-3-4 linear method of presenting slides, forcing you to place them all in the order you wish to display, presenters will be able to show their slides in any order they want at any time. This way you can change your presentation order as needed without having to stop PowerPoint or interrupt the display.

Outlook updates

Finally, there are some updates to Outlook. Office 365 is finally getting Focused Inbox, which has been available on Outlook for iOS and Android for some time. Focused Inbox separates your inbox into two tabs: one for the emails that matter most to you and one for everything else. High-priority emails land in the “Focused” tab, while the rest go to the “Other” tab. As you move email in or out of the Focused tab, Outlook learns from your behavior and adjusts to your priorities.

Outlook 365 and Outlook for PC and Mac are all getting @mentions, making it easy to identify emails that need your attention, as well as flag actions for others. To flag someone, just type the @ symbol in the body of the email and pick the desired person. Their name will automatically be highlighted in the email and their email address automatically added to the To: line. If you are mentioned, the @ symbol will show up in Outlook, and you can filter to quickly find all emails where you are mentioned.

Microsoft didn’t give a release schedule for these features, but count on them coming shortly.

AR in Mercedes-Benz’s Rescue Assist app gives first responders an inside look

The content below is taken from the original (AR in Mercedes-Benz’s Rescue Assist app gives first responders an inside look), to continue reading please visit the site. Remember to respect the Author & Copyright.

Mercedes-Benz has been putting QR codes on the B-pillars and inside the fuel door of new cars since November 2013, and those have provided a way for first responders and emergency personnel to quickly get detailed model info about any Mercedes-Benz vehicle involved in an accident using the Rescue Assist mobile app. Now, an update brings 3D imagery, as well as augmented reality, to the existing app, letting people involved in rescue operations get an even better overall picture of the situation when an accident happens.

Through the new AR features, emergency personnel can see color-coded representations of internal components, including key areas to be wary of when doing things like cutting through vehicles to free trapped passengers. The app will provide insight into where things like fuel lines, batteries and other electrical components are located, in order to help reduce the risk of further damage or injury that arises when a car needs to be unconventionally dismantled in order to save lives.

The Rescue Assist app will also still provide resources including rescue cards, which provide an overview of relevant safety info specific to each particular model (which includes not only Mercedes-Benz consumer cars and vans, but also some Fuso-branded commercial vehicles).

This is the kind of AR use case that led a lot of people to think Google Glass had potential as a tool in specific industry verticals, emergency response among the most often cited. Baking it into an existing app for use with the smartphones that rescue personnel are likely to have on them anyway is probably a much better application of the tech, even if it isn’t hands-free.

Incremental backups on Microsoft Azure Backup: Save on long term storage

The content below is taken from the original (Incremental backups on Microsoft Azure Backup: Save on long term storage), to continue reading please visit the site. Remember to respect the Author & Copyright.

The backup industry is migrating from tapes towards disks and the cloud, making it more feasible to leverage technology for efficient utilization of network, storage and human resources. This results in benefits such as lower capital expense, faster recovery times and higher data security. Here we will discuss how Incremental Backups, a technology leveraged by Microsoft Azure Backup, deliver greater network and storage savings.

Kinds of backup

The different kinds of backup vary in terms of storage consumption, time to recover (RTO) and network consumption. Hence, it is imperative that you choose the right backup solution to keep the overall backup TCO low. This section details the various kinds of backup and their impact on overall cost. As an example, let us take a data source, A, made up of blocks A1, A2, … A10, which needs to be backed up monthly. Blocks A2, A3, A4 and A9 change in the first month, and A5 changes the next month.

Assume A is made up of blocks A1, A2, … A10. In the first month, A2, A3, A4 and A9 change; the following month, A5 changes. Incremental Backups end up using the least storage space.

In Full Backups, a copy of the whole data source is stored for every backup. This leads to high network and storage consumption due to the transfer of full copies every time.

Differential backups store only the blocks that have changed since the initial full backup, resulting in lower network and storage consumption: after the initial copy, only the blocks that have changed since that full backup are stored at every subsequent backup. This avoids redundant copies of unchanged data, but it is still inefficient, because blocks that do not change between successive differential backups are transferred and stored again each time, as long as they differ from the initial full copy.

Hence, after the first month, the changed blocks A2, A3, A4 and A9 will be backed up, and the following month they will be backed up again, along with the newly changed block A5. These blocks will continue to be backed up until the next full backup is taken; taking a full backup periodically is the way to keep the size of the differential backups in check.

This used to be beneficial when manual intervention was required to restore from tapes: it needed less storage than a full backup, and required merging just two tapes to create a recovery point. However, since backups can now be managed through a single console, storing unchanged blocks is inefficient. This is addressed by Incremental Backups, which is how Microsoft Azure Backup creates recovery points.

Incremental Backups achieve high storage and network efficiency by storing only the blocks that have changed since the previous backup. This also removes any need to take regular full backups, as was required with Differential backups. Therefore, after the initial full backup, Microsoft Azure Backup will mark A2, A3, A4 and A9 as changed and send them to be stored. The following month, only the changed block A5 will be transferred and stored. This leads to storage and network savings because less data is transferred, and with less data being maintained over a long retention period, the TCO of backup decreases considerably.
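
To make the arithmetic concrete, here is a small illustrative Python sketch that counts how many blocks each kind of backup transfers for the scenario above; the block names and the monthly changes come from the example, while the functions themselves are a deliberate simplification.

    # Illustrative sketch: data source A has blocks A1..A10. In month 1, blocks
    # A2, A3, A4 and A9 change; in month 2, block A5 changes. We count how many
    # blocks each backup strategy transfers (and therefore stores) per backup.

    ALL_BLOCKS = [f"A{i}" for i in range(1, 11)]
    MONTHLY_CHANGES = [["A2", "A3", "A4", "A9"], ["A5"]]  # month 1, month 2

    def full(changes):
        # Every backup copies the whole data source.
        return [list(ALL_BLOCKS) for _ in range(len(changes) + 1)]

    def differential(changes):
        # Each backup copies everything changed since the initial full backup.
        backups, changed_since_full = [list(ALL_BLOCKS)], set()
        for month in changes:
            changed_since_full |= set(month)
            backups.append(sorted(changed_since_full))
        return backups

    def incremental(changes):
        # Each backup copies only what changed since the previous backup.
        return [list(ALL_BLOCKS)] + [sorted(month) for month in changes]

    for name, strategy in [("Full", full), ("Differential", differential),
                           ("Incremental", incremental)]:
        sizes = [len(b) for b in strategy(MONTHLY_CHANGES)]
        print(f"{name:<12} blocks per backup: {sizes}  (total {sum(sizes)})")

Running this prints 10 + 10 + 10 blocks for Full Backups, 10 + 4 + 5 for Differential backups (A2, A3, A4 and A9 are stored again in the second month) and 10 + 4 + 1 for Incremental Backups, which is exactly the saving described above.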

With the technology used at Microsoft, even with Incremental Backups, restores take constant time: no matter which recovery point you restore, they all happen equally fast.

Microsoft Azure Backup leverages Incremental Backup technology, providing you secure, pay-as-you-go, highly scalable services to suit different requirements. In addition to Incremental Backups, these products also use compression, network throttling and offline seeding to further optimize resource consumption. All this is done while maintaining full copy fidelity for recovery, thus ensuring that even if a recovery point is corrupted, other recovery points are not affected.

Get started

Some resources to help you get started include:

In an upcoming blog post, we will discuss how Microsoft Azure Backup gives you full-fidelity backups and keeps RTOs constant with incremental backups.