HP made a VR backpack for on-the-job training

The content below is taken from the original (HP made a VR backpack for on-the-job training), to continue reading please visit the site. Remember to respect the Author & Copyright.

To date, VR backpack PCs have been aimed at gamers who just don’t want to trip over cords while they’re fending off baddies. But what about pros who want to collaborate, or soldiers who want to train on a virtual battlefield? HP thinks it has a fix. It’s launching the Z VR Backpack, a spin on the Omen backpack concept that targets the pro crowd. It’s not as ostentatious as the Omen, for a start, but the big deal is its suitability to the rigors of work. The backpack is rugged enough to meet military-grade drop, dust and water resistance standards, and it uses business-class hardware that includes a vPro-enabled quad-core Core i7 and Quadro P5200 graphics with a hefty 16GB of video memory.

The wearable computer has tight integration with the HTC Vive Business Edition, but HP stresses that you’re not obligated to use it — it’ll work just fine with an Oculus Rift or whatever else your company prefers. The pro parts do hike the price, though, as you’ll be spending at least $3,299 on the Z VR Backpack when it arrives in September. Not that cost is necessarily as much of an issue here — that money might be trivial compared to the cost of a design studio or a training environment.

There’s even a project in the works to showcase what’s possible. HP is partnering with a slew of companies (Autodesk, Epic Games, Fusion, HTC, Launch Forth and Technicolor) on a Mars Home Planet project that uses VR for around-the-world collaboration. Teams will use Autodesk tools to create infrastructure for a million-strong simulated Mars colony, ranging from whole buildings to pieces of clothing. The hope is that VR will give you a better sense of what it’d be like to live on Mars, and help test concepts more effectively than you would staring at a screen. You can sign up for the first phase of the project today.

Source: HP (1), (2)

Google just made scheduling work meetings a little easier

There’s a little bit of good news for people juggling both Google G Suite tools and Microsoft Exchange for their schedule management at work. Google has released an update that will allow G Suite users to access coworkers’ real-time free/busy information through both Google Calendar’s Find a Time feature and Microsoft Outlook’s Scheduling Assistant interchangeably.

G Suite admins can enable the new Calendar Interop management feature through the Settings for Calendar option in the admin console. Admins will also be able to easily pinpoint issues with the setup via a troubleshooting tool, which will also provide suggestions for resolving those issues, and can track interoperability successes and failures for each user through logs Google has made available.

The new feature is available on Android, iOS and web versions of Google Calendar as well as desktop, mobile and web clients for Outlook 2010+, for admins who choose to enable it. Google says the full rollout should be completed within three days.

Via: TechCrunch

Source: Google (1), (2)

Microsoft Teams – explainer video

Understand the multicloud management trade-off

One of the trends I’ve been seeing for a while is the use of multiple clouds or multicloud. This typically means having two or three public clouds in the mix that are leveraged at the same time. Sometimes you’re mixing private clouds and traditional systems as well.

In some cases even applications and data span two or more public clouds, looking to mix and match cloud services. Why? Enterprises are seeking to leverage the best and most cost-effective cloud services, and sometimes that means picking and choosing from different cloud providers.

In order to make multicloud work best for an enterprise you need to place a multicloud management tool, such as a CMP (cloud management platform) or a CSB (cloud services broker) between you and the plural clouds. This spares you from having to deal with the complexities of the native cloud services from each cloud provider.

Instead you deal with an abstraction layer, sometimes called a “single pane of glass” where you are able to leverage a single user interface and sometimes a single set of APIs to perform common tasks among the cloud providers you’re leveraging. Tasks may include provisioning storage or compute, auto-scaling, data movement, etc.   

While many consider this a needed approach when dealing with complex multicloud solutions, there are some looming issues. The abstraction layers seem to have a trade-off when it comes to cloud service utilization. By not utilizing the native interfaces from each cloud provider you’re in essence not accessing the true power of the cloud provider, but instead just leveraging a subset of the services. 

Case in point: cloud storage. Say you’re provisioning storage through a CMP or CSB, and thus you’re leveraging an abstraction layer that has to use a least-common-denominator approach when managing the back-end cloud computing storage services. This means that you’re taking advantage of some storage services but not all. Although you do gain access to storage services that each cloud has in common, you may miss out on storage services that are specific to a cloud, such as advanced caching or systemic encryption.
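The least-common-denominator trade-off can be made concrete with a small sketch. All class and method names below are hypothetical, not taken from any real CMP or CSB product: each provider adapter exposes only the operations every cloud supports, so provider-specific features (such as advanced caching or systemic encryption) simply never surface through the common interface.

```python
from abc import ABC, abstractmethod

class StorageProvider(ABC):
    """Hypothetical per-cloud adapter exposing only common operations."""

    @abstractmethod
    def create_bucket(self, name: str) -> str: ...

    @abstractmethod
    def put_object(self, bucket: str, key: str, data: bytes) -> None: ...

class AwsStorage(StorageProvider):
    def create_bucket(self, name):
        # A real adapter would call the S3 API here; S3-only features
        # (e.g. storage classes) are unreachable via the common interface.
        return f"arn:aws:s3:::{name}"

    def put_object(self, bucket, key, data):
        pass  # placeholder for an S3 upload call

class AzureStorage(StorageProvider):
    def create_bucket(self, name):
        return f"https://{name}.blob.core.windows.net"

    def put_object(self, bucket, key, data):
        pass  # placeholder for a Blob Storage upload call

class MultiCloudManager:
    """The 'single pane of glass': one API routed to many back ends."""

    def __init__(self, providers: dict):
        self.providers = providers

    def create_bucket(self, cloud: str, name: str) -> str:
        return self.providers[cloud].create_bucket(name)

mgr = MultiCloudManager({"aws": AwsStorage(), "azure": AzureStorage()})
print(mgr.create_bucket("aws", "designs"))    # one interface...
print(mgr.create_bucket("azure", "designs"))  # ...two back ends
```

The simplicity is real, but so is the loss: anything not in `StorageProvider` is invisible, no matter which cloud sits behind it.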

The point here is that there is a trade-off. You can’t gain simplicity without sacrificing power. This may leave you with a much weaker solution than one that leverages all cloud-native features. No easy choices here.

RightScale Announces General Availability of Optima, a New Solution for Collaborative Cloud Cost Management and Optimization

RightScale, Inc., a demonstrated leader in enterprise cloud management, today announced General Availability of cloud cost management solution RightScale Optima. RightScale Optima combines existing RightScale analysis, reporting, and forecasting functionality for AWS, Azure, Google Cloud Platform, IBM SoftLayer, and private clouds with newly developed collaborative optimization and automated actions to reduce wasted cloud spend.

“Along with our Cloud Gateway network capability, Telstra’s Cloud Management Platform powered by RightScale has already transformed how our customers govern and orchestrate their hybrid cloud deployments,” said Jim Fagan, Director Global Platforms, Telstra. “Through RightScale Optima, our customers now have the ability to monitor and manage their cloud usage and costs in a deeper and more predictive way. Any resulting cost savings can be re-invested into much needed activities that promote innovation and transformation for their business.”

“We see RightScale Optima as a potential difference maker for companies looking to reduce cloud spend,” said Edwin Yuen, analyst with Enterprise Strategy Group. “Optimizing cloud spend is not a linear process, with a defined start and a finish. Rather, it is an ongoing, iterative process that involves communication and collaboration between all stakeholders. RightScale, with its experience building their Cloud Management Platform, is uniquely qualified to both help companies develop their costing process and take action to reduce their cloud spend on an ongoing basis.”

There are four main components to RightScale Optima:

  • Collaborative optimization: RightScale is the first cloud management platform (CMP) to help various resource owners in an enterprise to collaborate to take action on changes to cloud spending.

  • Automated action: Targeting inefficient spending, RightScale Optima enables enterprises to take action on insights. The platform is designed to reduce the noise of inaccurate recommendations (a common frustration with some cloud cost management tools).

  • Budget and Forecasting: RightScale Optima helps enterprises forecast and predict cloud application costs, track budgets, and develop budget alerts for cost overruns. It also evaluates different clouds, instance types, and purchase options.

  • Analysis and reporting: RightScale Optima includes data from all public and private cloud providers and enables organizations to see usage and cost data in a single unified dashboard; view costs by cloud account, team, or application to understand usage trends; perform chargeback and showback; and leverage tags to allocate costs to departments or business units.

Benefits of RightScale Optima include:

  • Quick ROI: Get smart cost optimization recommendations across clouds and cloud accounts that identify instant savings opportunities for each resource and team across the enterprise.

  • Collaboration: Automatically deliver optimization recommendations to resource owners.

  • Noise reduction: Enable resource owners to tag recommendations that should be ignored and prevent future alerts.

  • Automated action: Enable users to take action on recommendations with ongoing policy-based automation.

“The RightScale 2017 State of the Cloud Survey of more than 1,000 IT professionals found that optimizing cloud costs is the top initiative among all cloud users,” said Michael Crandell, CEO of RightScale. “Despite an increased focus on cloud cost management, only a minority of companies are taking critical actions to optimize cloud costs, such as shutting down unused workloads or selecting lower-cost clouds or regions. We believe RightScale Optima is a major step forward for large enterprises looking to manage cloud spend.”

Toyota’s new solid-state battery could make its way to cars by 2020

Toyota is touting its progress on a new kind of battery technology, which uses a solid electrolyte instead of the conventional semi-liquid version used in today’s lithium-ion batteries. The car maker said that it’s near a breakthrough in production engineering that could help it put the new tech in production electric vehicles as early as 2020, according to the Wall Street Journal.

The improved battery technology would make it possible to create smaller, lighter lithium-ion batteries for use in EVs, and could also potentially boost total charge capacity, resulting in longer-range vehicles.

Another improvement for this type of battery would be longer overall usable life, which would make it possible to both use the vehicles they’re installed in for longer, and add potential for product recycling and alternative post-vehicle life (some companies are already looking into putting EV batteries into use in home and commercial energy storage, for example).

Batteries remain a key limiting factor for electric vehicle design, because of how far tech companies focused on the problem have pushed existing science. The move to solid state would help make room for more gains in terms of charge capacity achieved in the footprint available in consumer vehicles, while helping to push further existing efficiencies achieved through things like the use of ultra-light materials in car frames and interiors.

Toyota isn’t saying yet where its batteries will end up, but any edge here is bound to be a big boon for automakers looking at a future that increasingly seems like it’ll be dominated by EVs.

Featured Image: TOSHIFUMI KITAMURA/AFP/Getty Images

New – GPU-Powered Streaming Instances for Amazon AppStream 2.0

We launched Amazon AppStream 2.0 at re:Invent 2016. This application streaming service allows you to deliver Windows applications to a desktop browser.

AppStream 2.0 is fully managed and provides consistent, scalable performance by running applications on general purpose, compute optimized, and memory optimized streaming instances, with delivery via NICE DCV – a secure, high-fidelity streaming protocol. Our enterprise and public sector customers have started using AppStream 2.0 in place of legacy application streaming environments that are installed on-premises. They use AppStream 2.0 to deliver both commercial and line of business applications to a desktop browser. Our ISV customers are using AppStream 2.0 to move their applications to the cloud as-is, with no changes to their code. These customers focus on demos, workshops, and commercial SaaS subscriptions.

We are getting great feedback on AppStream 2.0 and have been adding new features very quickly (even by AWS standards). So far this year we have added an image builder, federated access via SAML 2.0, CloudWatch monitoring, Fleet Auto Scaling, Simple Network Setup, persistent storage for user files (backed by Amazon S3), support for VPC security groups, and built-in user management including web portals for users.

New GPU-Powered Streaming Instances
Many of our customers have told us that they want to use AppStream 2.0 to deliver specialized design, engineering, HPC, and media applications to their users. These applications are generally graphically intensive and are designed to run on expensive, high-end PCs in conjunction with a GPU (Graphics Processing Unit). Due to the hardware requirements of these applications, cost considerations have traditionally kept them out of situations where part-time or occasional access would otherwise make sense. Recently, another requirement has come to the forefront. These applications almost always need shared, read-write access to large amounts of sensitive data that is best stored, processed, and secured in the cloud. In order to meet the needs of these users and applications, we are launching two new types of streaming instances today:

Graphics Desktop – Based on the G2 instance type, Graphics Desktop instances are designed for desktop applications that use CUDA, DirectX, or OpenGL for rendering. These instances are equipped with 15 GiB of memory and 8 vCPUs. You can select this instance family when you build an AppStream image or configure an AppStream fleet:

Graphics Pro – Based on the brand-new G3 instance type, Graphics Pro instances are designed for high-end, high-performance applications that can use the NVIDIA APIs and/or need access to large amounts of memory. These instances are available in three sizes, with 122 to 488 GiB of memory and 16 to 64 vCPUs. Again, you can select this instance family when you configure an AppStream fleet:

To learn more about how to launch, run, and scale a streaming application environment, read Scaling Your Desktop Application Streams with Amazon AppStream 2.0.

As I noted earlier, you can use either of these two instance types to build an AppStream image. This will allow you to test and fine tune your applications and to see the instances in action.

Streaming Instances in Action
We’ve been working with several customers during a private beta program for the new instance types. Here are a few stories (and some cool screen shots) to show you some of the applications that they are streaming via AppStream 2.0:

AVEVA is a world leading provider of engineering design and information management software solutions for the marine, power, plant, offshore and oil & gas industries. As part of their work on massive capital projects, their customers need to bring many groups of specialist engineers together to collaborate on the creation of digital assets. In order to support this requirement, AVEVA is building SaaS solutions that combine the streamed delivery of engineering applications with access to a scalable project data environment that is shared between engineers across the globe. The new instances will allow AVEVA to deliver their engineering design software in SaaS form while maximizing quality and performance. Here’s a screen shot of their Everything 3D app being streamed from AppStream:

Nissan, a Japanese multinational automobile manufacturer, trains its automotive specialists using 3D simulation software running on expensive graphics workstations. The training software, developed by The DiSti Corporation, allows its specialists to simulate maintenance processes by interacting with realistic 3D models of the vehicles they work on. AppStream 2.0’s new graphics capability now allows Nissan to deliver these training tools in real time, with up to date content, to a desktop browser running on low-cost commodity PCs. Their specialists can now interact with highly realistic renderings of a vehicle that allows them to train for and plan maintenance operations with higher efficiency.

Cornell University is an American private Ivy League and land-grant doctoral university located in Ithaca, New York. They deliver advanced 3D tools such as AutoDesk AutoCAD and Inventor to students and faculty to support their course work, teaching, and research. Until now, these tools could only be used on GPU-powered workstations in a lab or classroom. AppStream 2.0 allows them to deliver the applications to a web browser running on any desktop, where they run as if they were on a local workstation. Their users are no longer limited by available workstations in labs and classrooms, and can bring their own devices and have access to their course software. This increased flexibility also means that faculty members no longer need to take lab availability into account when they build course schedules. Here’s a copy of Autodesk Inventor Professional running on AppStream at Cornell:

Now Available
Both of the graphics streaming instance families are available in the US East (Northern Virginia), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo) Regions and you can start streaming from them today. Your applications must run in a Windows 2012 R2 environment, and can make use of DirectX, OpenGL, CUDA, OpenCL, and Vulkan.

With prices in the US East (Northern Virginia) Region starting at $0.50 per hour for Graphics Desktop instances and $2.05 per hour for Graphics Pro instances, you can now run your simulation, visualization, and HPC workloads in the AWS Cloud on an economical, pay-by-the-hour basis. You can also take advantage of fast, low-latency access to Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (S3), AWS Lambda, Amazon Redshift, and other AWS services to build processing workflows that handle pre- and post-processing of your data.

Jeff;

 

Making the Small Business Case for Windows 10 S

Chromebooks have proven popular with startups and small businesses, especially those that have invested in Google G Suite. With the release of Windows 10 S, business users can get most of the benefits of Chrome OS. They can also get the Office desktop apps if they have an Office 365 subscription. In this Ask the Admin, I will look at how Windows 10 S stacks up for businesses that are either already using Chromebooks, or are trying to decide between Windows 10 and Chrome OS.

Windows has always had one major problem in the small business space: it is difficult to secure and manage without full-time IT support. Because of that, Chromebooks have become popular. They are maintenance-free and perform reliably until the hardware starts to give up. However, Chrome OS is limited in ways Windows is not. For instance, Chromebooks require the Internet. Essentially, Chrome OS is the Google Chrome browser nicely bundled as an operating system.

Reliance on the Internet might not be a problem, as some websites have offline functionality. Google G Suite apps, such as Docs and Mail, support offline use. This allows users to create and edit documents, search and compose email, and edit photos that are stored locally. Microsoft Word Online cannot be used offline, which might restrict use of Office 365 on a Chromebook, although Office 365 Outlook does have an offline mode.

Windows 10 S was announced alongside the new Surface Laptop at an event in New York in May. While the initial focus is on the education sector, Windows 10 S notebooks will be available for everyone to buy. As a challenger to the dominance of Chromebooks in education, Windows 10 S limits users to installing apps from the Windows Store and Microsoft Edge is the only supported browser.

Windows 10 S for Business

Windows 10 S matters because the restricted environment ensures that users do not have administrative access to the operating system, cannot install legacy win32 apps that play outside of the Universal Windows Platform (UWP) app sandbox, and cannot install alternative browsers that might introduce vulnerabilities. Windows Store apps are automatically updated as necessary, removing another potential security issue.

In short, if a Chromebook was previously a consideration for your business, Windows 10 S would also fit the bill. For a similar price, Windows 10 S does everything a Chromebook can do and more. But most importantly, Windows 10 S does not suffer from the same security and stability issues as unmanaged Windows 10 Pro. This is because of the restricted OS access and curated app experience. Additionally, Microsoft’s antimalware solution, Windows Defender, is included free out-of-the-box.

If you currently use Google G Suite with no plans to switch to Office 365, it is worth noting that there are no official Google apps in the Windows Store. That is not likely to change any time soon. This means that access to Google Drive in Windows 10 S is only possible via Microsoft Edge. There is an official Dropbox UWP app, although it does not provide offline synchronization. The OneDrive and OneDrive for Business synchronization client is built into Windows 10 S, so there is always access to Office 365 files locally.

Management Features

Windows 10 S can join an Azure Active Directory (AD) domain but not Windows Server AD. Azure AD is the directory service used by Office 365, so the ability to join Windows 10 S to Azure AD can make using the cloud easier with features such as single sign-on.

If you need to manage Windows 10 S based devices, Microsoft Intune Mobile Device Management (MDM) can be used to configure security settings and manage updates, albeit without the fine-grained control offered by Active Directory Group Policy.

At the end of the day, if you decide that Windows Store apps do not meet your needs, Windows 10 S can be upgraded to Pro free for a limited period and then for a one-time fee of $49.

Fall Creators Update

But that is not all. Google has been attempting to bring Android apps to Chrome OS for some time, but that effort has been delayed. It is probably not as easy as it looks. However, Microsoft is already there, and the Windows Store is not limited to just UWP apps either. Some legacy win32 apps can be downloaded from the Windows Store, including the Microsoft Office suite of desktop programs. This gives Windows 10 S a significant advantage over Chrome OS.

The Windows 10 Fall Creators Update is set to make the experience better with some long overdue features and improvements to how Windows is deployed, including:

  • Pin websites to the taskbar
  • Support for Progressive Web Apps (PWAs) in Microsoft Edge
  • On-Demand Sync for OneDrive and OneDrive for Business
  • Support for ARM64
  • Windows Autopilot for self-service deployment and device reset

Windows 10 S is a worthy alternative to Chrome OS, particularly as it is more capable for a similar price. There are a few downsides, though. Chrome OS has a more transparent update mechanism than Windows: according to Google’s Auto Update policy, most devices receive updates for around five years. Bear in mind that if you have hardware that is not plug and play, Windows 10 S is not for you, as you would not be able to install the necessary drivers.

Windows 10 S notebooks will be available later this summer starting at $229.

Follow Russell on Twitter @smithrussell.

The post Making the Small Business Case for Windows 10 S appeared first on Petri.

Sandsifter checks your processor for secrets

Are you sufficiently paranoid? If you’re not, there’s now Sandsifter. This project, just announced at Defcon 2017, tests your x86 processor for hidden instructions and bugs. “Sandsifter has uncovered secret processor instructions from every major vendor; ubiquitous software bugs in disassemblers, assemblers, and emulators; flaws in enterprise hypervisors; and both benign and security-critical hardware bugs in x86 chips,” wrote creator Christopher Domas of the Battelle Institute.

The program essentially reduces the number of possible instructions to test to a manageable 100,000 tests. Each of these tests is performed and anomalous activity is recorded for later perusal. The most important thing? Domas has found a so-called “halt and catch fire” instruction in a chip that he has declined to name. These sorts of calls – originally found in the Pentium chip and called f00f – can shut down a computer instantly, resulting in data loss. It’s the first real “f00f”-like attack found in 20 years.

Most of us won’t find anything unusual but it is useful to test your processor for, say, undocumented calls that may affect future programs. Think of it as a chkdsk for your processor.
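The search-space reduction described above can be sketched in miniature. Sandsifter’s real search (Domas calls it “tunneling”) uses page-fault side effects to measure how many bytes the CPU actually consumed as one instruction; the `length_of` oracle and the toy two-byte ISA below are stand-ins for that hardware trick, so this is an illustration of the idea rather than the tool’s actual code.

```python
def tunnel(length_of, max_insn_len=15, limit=100000):
    """Toy sketch of a sandsifter-style 'tunneling' search.

    length_of(buf) is a stand-in oracle returning how many leading bytes
    of buf the processor consumes as one instruction (the real tool
    derives this from page-fault addresses).  Instead of enumerating all
    256**15 byte strings, we only ever mutate the last byte the CPU
    actually looked at, which collapses the space to a manageable size.
    """
    insn = bytearray(max_insn_len)   # candidate instruction, all zeros
    seen = []
    for _ in range(limit):
        marker = length_of(bytes(insn)) - 1   # last meaningful byte
        seen.append(bytes(insn[: marker + 1]))
        # Increment at the marker, carrying leftward like an odometer.
        while marker >= 0:
            insn[marker] = (insn[marker] + 1) & 0xFF
            if insn[marker] != 0:
                break
            marker -= 1
        else:
            return seen              # wrapped past the first byte: done
        # Bytes beyond the marker are don't-cares; reset them.
        for i in range(marker + 1, max_insn_len):
            insn[i] = 0
    return seen

def fake_isa(buf):
    # Toy ISA: opcode 0x0F takes one extra byte, everything else is 1 byte.
    return 2 if buf[0] == 0x0F else 1

probes = tunnel(fake_isa, max_insn_len=2)
print(len(probes))   # → 511 probes instead of 256**2 = 65536
```

Even in this toy two-byte ISA, tunneling probes 511 candidates rather than all 65,536 byte pairs; the same pruning is what keeps the real 15-byte x86 search tractable.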

You can download Sandsifter here and run it on your computer as long as you have the Capstone engine installed. It can take a few hours to scan your entire system, and Domas is even offering to look over anomalous logs, so let him know if you find something odd.

It’s a fascinating look at chips and a space few of us have ever explored and, given that it’s so easy to try, it can’t hurt to see if someone is hiding something inside your CPU.

Mint-Condition Copy of Super Mario Bros. Sold for $30,000

Rugged Skylake box PC offers up to 8x USB and 5x HDMI ports

Advantech’s Linux-ready “UNO-2484G” Box PC offers dual-core 6th Gen U-series CPUs, 4x GbE ports, and either HDMI/USB or “iDoor” expansion units. Like Advantech’s Linux-on-Quark based UNO-1252G IoT gateway and Intel Apollo Lake based ARK-1124C embedded computer, the new Skylake based UNO-2484G embedded PC offers up to four of the company’s homegrown “iDoor” expansion modules. The […]

OneNote features you may not be using, but should be using!

Microsoft OneNote is an excellent tool for gathering information and collaborating with other users online. While many versions of OneNote are available in the market, the OneNote for Windows 10 app is a bit different. One major difference is that it is regularly updated with interesting new features.

OneNote features

We have already covered some basic OneNote tutorials, now let’s cover some of the latest OneNote features you want to know!

A restructured layout

OneNote now organizes your notebooks, sections, and pages in separate panes on the side of the app.

Easier to manage page conflicts

Page conflicts. Source: microsoft.com

The general rule for page conflicts, when multiple users are editing a notebook, is that whatever change is made last is saved. However, OneNote lets you review all of these changes, arranged by date, and restore them if needed.

Customize your pens

Customize your pens. Source: microsoft.com

This is an additional feature on the Draw menu in OneNote. It allows you to customize the type, color, etc. of your pens, pencils, and highlighters. Just click on the ‘+’ symbol which is next to the pens and select whatever you wish to customize.

Immersive Reader

Immersive Reader. Source: microsoft.com

The Immersive Reader option has been added to the View menu in OneNote. It can read text aloud, highlighting every word as it is read, and can differentiate syllables and highlight nouns, verbs, and adjectives.

Multitasking made easy

Choose New Window on the View tab or press CTRL + M to open a new, smaller window; you can then work in both simultaneously. While creating new windows was always an option, the new feature lets users open a sub-window alongside an existing window.

Page previews

 Show Page Previews. Source: microsoft.com

This option lets users see the first few words of the text on a page, which helps identify the document. It is disabled by default. To enable this feature, click Navigation Panes on the View menu and then select Show Page Previews.

Make subpages

Make subpages. Source: microsoft.com

If you multitask across notebooks and find it difficult to handle too many tabs, making subpages can be quite helpful. Just select the pages, right-click them, and choose Make Subpage. The list of subpages can be expanded or collapsed using the arrow on the left-hand side.

‘Tell Me’ feature

  Tell Me feature on OneNote. Source: microsoft.com

The Tell Me feature can be accessed by either clicking the light bulb on the top right of the screen or pressing ALT + Q. While it looks similar to the Help feature, it is different and much more advanced. It makes learning OneNote easier.

Researcher on OneNote

 Researcher on OneNote. Source: microsoft.com

The Researcher option lets you search Bing for quotes, facts, and other information and copy them into your notes, automatically adding the source for attribution. To use this option, click the Insert tab and select Researcher.

Check what has changed in a document

 Check what has changed in a document. Source: microsoft.com

In the newer versions of OneNote, the app highlights the changes whenever anyone else working on the Notebook edits anything.

Give a Nickname to your notebook

Nickname Notebook. Source: microsoft.com

You can give a nickname to your notebook by right-clicking its name while the notebook is open and choosing Nickname Notebook. This makes the notebook easier to find: it doesn’t change the notebook’s name, but adds the nickname to the search results when you look for it.

Give different Notebooks different colors

Color of the Notebook. Source: microsoft.com

While the Nickname is a good way of classifying Notebooks, a better option would be to categorize different types of Notebooks with different colors. Just right-click on your Notebook, select Notebook color and choose your favored color.

A smarter find option

 Find text in OneNote. Source: microsoft.com

We all know the Find option, the one we can access using CTRL + F. OneNote improves on it by allowing the user to search for images, handwritten notes and more, which sets it apart from the usual Find option.

Print to OneNote directly 

This is a feature added to the Send to OneNote app. You would have to download this app from here.

Advanced meeting details

The Meeting Details option under the Insert tab offers more options than its predecessors. You can add a note specifying the date and time, who is expected to attend and, after the event, who did not attend. It simply makes organizing and managing meetings easier.

Page Versions

 Page Versions. Source: microsoft.com

OneNote keeps a record of every page version that is saved, along with the date and time. If you wish to restore one, just click on Make Current Version. Thus, data is almost never lost on OneNote.

Select multiple pages

Click on the topmost or bottommost page in the list, then hold CTRL or Shift to enable this mode. After that, use either the arrow keys or the mouse to select the pages or Notebooks.

Cut, copy and paste become easier on OneNote

Unlike in earlier versions, we can cut, copy and paste pages within a OneNote Notebook. The options are available when you right-click a page. Note that this behaves differently from using these options with files in general.

Proof-read text in a different language

Check spelling in another language. Source: microsoft.com

In case you have text in a different language and are unable to understand it, or even to recognize the language, just right-click on it and click Set Language. Those who use this feature for the first time might get an option to set the default language.

Correct the accidental undo

Accidental Undo. Source: microsoft.com

We often depend on CTRL + Z to undo unwanted changes. But what if we undo something accidentally, or simply want to step through the changes? OneNote has introduced little curved arrows at the top for scrolling forward and back through the changes.

Source: Office.com.

Want more? Take a look at these OneNote Tips and Tricks. Incidentally, the OneNote Windows 10 app is different from OneNote desktop software – you might want to take a look at it too!

5 Wi-Fi analyzer and survey apps for Android

The content below is taken from the original (5 Wi-Fi analyzer and survey apps for Android), to continue reading please visit the site. Remember to respect the Author & Copyright.

Wi-Fi networks have many variables and points of frustration. Different types of walls, materials and objects can impact the Wi-Fi signal in varying ways. Visualizing how the signals move about the area is difficult without the right tools. A simple Wi-Fi stumbler is great for quickly checking signal levels, but a map-based surveying tool helps you visualize the coverage, interference and performance much more easily. They allow you to load your floor plan map, walk the building to take measurements and then give you heatmaps of the signals and other data.

Most Windows-based Wi-Fi surveying tools offer more features and functionality than Android-based tools provide, such as detecting noise levels and providing more heatmap visualizations. However, if you don’t require all the bells and whistles, using an app on an Android-based smartphone or tablet can lighten your load. (And in case you’re wondering why we’re not discussing iOS apps, it’s because Apple won’t allow developers access to the Wi-Fi data, thus there can’t be any legit Wi-Fi surveying apps without jailbreaking the device.)

In this review we look at five mobile survey apps: iBwave Wi-Fi Mobile, iMapper WiFi Pro, WiFi Analyzer and Surveyor from ManageEngine, Wi-Fi Visualizer from ITO Akihiro, and WiTuners Mobile. They range from free/cheap options for surveying small office networks to those with an enterprise price tag designed to handle larger networks.

Network World - CHART - Geier Wi-Fi Survey Apps - Overview [2017] Network World / IDG

We found pros and cons in all the apps we reviewed. Paying the hefty enterprise-level price does get you many more features, such as spectrum scanning and cloud syncing; if you’re surveying larger networks they’re definitely the way to go. But as you’ll see, they aren’t perfect either. The free and cheap apps can provide basic surveying functionality for small and simple networks, plus other Wi-Fi tools that can be useful.

Targeted at the enterprise market, the iBwave Wi-Fi Mobile app carries the highest price tag we’ve seen for an Android-based survey app. We like its intuitive app design and GUI, as well as its cloud-syncing ability and compatible PC viewer app, but were disappointed it couldn’t display survey heatmaps for individual APs.

The iMapper WiFi app has some unique features and tools, especially the automated test plans, but it’s certainly not a polished product. There seem to be some bugs, the GUI is flaky and its documentation and support are almost non-existent. This app might be okay to use for simple surveys in homes and small businesses, but we don’t recommend it for professional use in larger surveys.

We found a solid app and GUI when evaluating the WiFi Analyzer and Surveyor from ManageEngine. However, the map-based surveying capabilities are quite limited, making it appropriate for only the smallest and simplest Wi-Fi networks. We did find the analyzer features to be pretty useful, however, with intuitive and attractive graphs. And the product is completely free.

Wi-Fi Visualizer is the simplest surveying app we reviewed. Unfortunately, it only captures the signal levels of the AP you’re currently connected to. For a freebie app, we did think the network map was a nice bonus and were impressed with the ability to save the SSID list and graphs on the stumbler side. This could be a useful free tool to use alongside others.

The WiTuners Mobile app is also targeted towards the enterprise market. It provides some neat features, such as continuous surveying to remotely monitor the Wi-Fi, rogue AP detection and tracking, and support for a spectrum analyzer. However, the app GUI and project processes could certainly be fleshed out to be more user-friendly and intuitive, and it is the only one of the five that only works on Android tablets. It would be great if the app were phone-friendly.

iBwave Wi-Fi Mobile

The iBwave Wi-Fi Mobile app is basically a lite version of the company’s Windows PC edition. Like its other solutions, this app syncs its data to the cloud, giving you a convenient way to share projects and move between the mobile and PC editions of the software. Furthermore, the free iBwave Viewer lets your customers and other third-parties view the survey data and generate their own reports.

Pricing for the iBwave Wi-Fi Mobile app starts at $625 for a 3-month subscription, $1,250 for a 12-month subscription, or $2,680 for a perpetual license. This pricing is certainly the highest of the Android-based Wi-Fi surveying apps we’ve ever reviewed. However, if you only need it for a short period of time, iBwave can be worth the money.

We evaluated version 8.1.1.134 of the iBwave Wi-Fi Mobile app. When you start a new project, you can select a floor plan picture, take a photo of a printed drawing, or create one on the device. Selecting a picture or taking a photo is straightforward, but the in-app drawing tools lack pre-set shapes and objects. We wouldn’t expect many users to manually create a floor plan in the app, but for those who do, having to draw it with a freehand pen makes it really difficult to create a usable floor plan.

Once you’ve imported or created a floor plan image, you must set the dimensions to ensure the map is to scale. You can optionally add defined zones for RF density and capacity if you plan to use its prediction functionalities, including simulating the coverage of access points without performing an actual survey and collecting data. At first it wasn’t quite clear how to apply these zones on the floor plan. The process turned out to be pretty intuitive, but it would have been nice if there were a little in-app help or tips along the way.

You can perform passive or active surveying with the iBwave app. You can conveniently utilize the iBwave Viewer PC app as the server for the active surveying. You can optionally add pushpins to the floor plans and save multiple photos, videos, audio, or text notes to that location. You can also use the mark-up feature to draw free-hand on the floor plan.

iBwave Wi-Fi - heatmap [iBwave] iBwave

When viewing the heatmaps, you can display them for the signal, throughput (if an active survey was performed), signal-to-noise ratio (using your defined noise level), overlap zones, co-channel interference and capacity. You can select to display the heatmap for the desired band, SSID or channel. However, it doesn’t allow you to easily select a particular AP to see just its heatmaps, which is unfortunate if you want to view individual coverage. Nevertheless, you can easily export the survey data file or upload it to iBwave’s cloud service to view the data and generate reports in their other products, including the iBwave Viewer PC app.

The iBwave Wi-Fi Mobile app also comes with a stumbler feature called the Scan Tool. You can perform auto and manual scans for both bands or individual ones. In addition to the usual passive scanning, you can do active throughput tests, which also can test against the server built into the iBwave Viewer PC app. For passive scanning you can input advanced settings, such as the RSSI offset, receiver sensitivity and manually defined noise level for each band. However, the Scan Tool doesn’t show the security status of the APs, which would be nice for auditing reasons.

The iBwave app is worth buying for the surveying needs of large or enterprise networks. Though some features could be improved, it seems like a solid app and its integration into the iBwave solutions seems convenient.

iMapper WiFi Pro

iMapper WiFi is developed by Fullsunning Inc. It offers a free edition with limited functionality, such as being able to only save two projects, and a full edition that costs only $7.06 from the Google Play store. In addition to the typical heatmap survey functionality, it does provide several other Wi-Fi testing tools. But as you shall see, you get what you pay for.

We evaluated version 2.2 of the iMapper WiFi app. When creating a new project, you can select a floor plan image or take a photo of a printed floor plan. The process is straightforward, but you can only have one floor plan image per project, which means surveying a multi-floor network requires you to set up multiple projects. Additionally, the odd scaling process (you have to adjust a circle around the image) wasn’t apparent until we discovered how to do it from Fullsunning’s website. It would have been much better if there were some sort of tip in the app describing the process.

The default survey view displays the signal levels of the currently connected AP. You can also choose from five other views. The Channel Analysis view can show the recommended channel with the lowest signals, channel with the strongest signal and channel rating of the currently connected AP. The Network Name (SSID) and AP Device (MAC) views allow you to filter the heatmap to the signals of either the SSID or AP you choose, whereas some of the other tools only allow you to select one. The Link Quality view shows you a heatmap of the level of data rates, from bad to good.

All the heatmap views showed useful details, but it would be nice if the app gave a quick description clarifying the data that’s shown. Unfortunately, there isn’t any way to export or save the heatmaps or collected data beyond taking screenshots. Furthermore, the app seemed to have a bug where the floor plan map would occasionally shift, so the survey paths and data points were no longer aligned with the locations where they were taken until we restarted the app.

iMapper Wifi Pro - analyzer [Fullsunning Inc.] Fullsunning Inc.

From the main menu, you can bring up the Analyzer page. It shows a typical channel usage graph and AP details for one or both bands at the same time, a signal-over-time graph and a signal meter for the currently connected AP, and a channel analysis chart. All these views look great and are useful. It can even give you the noise and signal-to-noise ratio if you connect an external analyzer.

From the main menu, you can also access the Tester page. This gives you some tools to test Wi-Fi association times, ping times, HTTP response times and throughput via FTP upload or download. You can run these individually or create an automated test plan to run multiple tests.

At the top of the app, you’ll also find a shortcut to the sniffer tool. It shows you the connection details of your currently connected AP. Beyond the basic details you can find elsewhere, it also includes the WLAN MAC address of both the Android device you’re using and the AP you’re connected to. If you have an external analyzer connected, it can also display the raw data packets seen over Wi-Fi.

We wouldn’t suggest using this app for surveying larger business-level networks, but it could be useful in home or small office environments, and its extra testing tools might be interesting.

WiFi Analyzer and Surveyor

The WiFi Analyzer and Surveyor app from ManageEngine offers very basic surveying functionality. There is no premium edition offered, just the free app. We evaluated version 2.10.

When you open the app, you’re given the option to go the Analyzer or Surveyor page. On the Surveyor page, you can easily add a floor plan from your device’s storage or Dropbox, or add its example plan to just play around. Though it doesn’t allow you to take a photo of a printed map directly within the app like the other apps reviewed, you can take a photo with your device and then select that image within the app.

When surveying, you long-tap your location instead of short-tapping like in the other apps. Initially you see tips pop-up to describe the process, which we thought useful. When you end the survey, you’re shown the report. However, as with iBwave you can’t select a single AP to see its particular coverage; you can only select an SSID.

You can switch between the heatmap and signal strength views in the survey report, which look very similar. The signal strength view shows a defined dot at each location you tapped, colored according to the signal level, whereas the heatmap view spreads the color over a larger area. However, the heatmap effect radiates only slightly beyond the size of the dots, so you basically see blurred dots where you long-tapped (captured the signal) on the map. Just about all other survey tools use prediction to fill those gaps and create more of a full heatmap without having to walk every square foot of a building.

On the Analyzer page of the app you’ll find four different views. The Channel view shows a typical channel usage and signal bar graph along with a list of the AP details. The Interference and Signal views provide a similar graph but are designed to give you just those details. We like that you can select the APs to show on these views based upon the signal level (best, good, weak or all), but it would be nice to view the graphs for both bands at once instead of having to flip between them. Finally, the Wi-Fi Details view shows you a nice text-based list of the APs, but it omits the particular security method each one uses.

WiFi Analyzer [farproc] farproc

In the Analyzer settings, we liked that you can set a scan interval or disable scanning, which can be useful if you just need a one-time reading or manual readings. Additionally, you can give APs an alias name to help you track them or choose not to show them. These simple settings can help on the networks you continually monitor.

This app wasn’t created with full-scale surveys of large networks in mind, so we’d only use it for smaller networks or portions of a larger network.

Wi-Fi Visualizer

Creating Portable HTML in PowerShell

The content below is taken from the original (Creating Portable HTML in PowerShell), to continue reading please visit the site. Remember to respect the Author & Copyright.

In a previous Petri.com article, we were exploring ways to do more with HTML in PowerShell. At the end, I showed you a finished file that used a path to a local copy of a CSS file. There is nothing wrong with this if the file will never leave your computer or when testing. To make the document portable, you can embed the style information directly into your HTML file.


To keep my code samples a bit easier to read, I am going to be working with the same data.

$data = Get-Eventlog -List | Select @{Name="Max(K)";Expression = {"{0:n0}" -f $_.MaximumKilobytes }},
@{Name="Retain";Expression = {$_.MinimumRetentionDays }},
OverFlowAction,@{Name="Entries";Expression = {"{0:n0}" -f $_.entries.count}},
@{Name="Log";Expression = {$_.LogDisplayname}}

I will pipe $data to ConvertTo-HTML. I am also going to start using a hashtable of parameter values to splat to ConvertTo-HTML.

$convertParams = @{
 Title = "Event Log Report"
 PreContent = "<H1>$($env:COMPUTERNAME)</H1>" 
 PostContent = "<H5><i>$(get-date)</i></H5>"
}

The style information goes in the HTML header. I will copy the code from my CSS file into a here-string variable. Then, I will add the variable to the parameter hashtable, since I have already created it.

$head = @"
<style>
body { background-color:#E5E4E2;
       font-family:Monospace;
       font-size:10pt; }
td, th { border:0px solid black; 
         border-collapse:collapse;
         white-space:pre; }
th { color:white;
     background-color:black; }
table, tr, td, th { padding: 0px; margin: 0px ;white-space:pre; }
table { margin-left:25px; }
h2 {
 font-family:Tahoma;
 color:#6D7B8D;
}
.footer 
{ color:green; 
  margin-left:25px; 
  font-family:Tahoma;
  font-size:8pt;
}
</style>
"@

$convertParams.add("Head",$head)
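
If you would rather not paste the stylesheet into the script by hand, a variation is to read the CSS file at run time and wrap it in style tags yourself. This is just a sketch; the stylesheet path and file are stand-ins for your own CSS file:

```powershell
# Sketch: embed an external stylesheet without copying it into a here-string.
# The path and the sample CSS below are hypothetical stand-ins for the demo.
$cssPath = Join-Path ([IO.Path]::GetTempPath()) 'report.css'
Set-Content -Path $cssPath -Value 'body { font-family:Monospace; }'

# Read the raw CSS and wrap it in <style> tags for the -Head parameter
$head = "<style>`n$(Get-Content -Path $cssPath -Raw)`n</style>"

$convertParams = @{
 PreContent  = "<H1>$($env:COMPUTERNAME)</H1>"
 PostContent = "<H5><i>$(get-date)</i></H5>"
 Head        = $head
}
```

Either approach produces the same embedded style block; reading the file keeps a single source of truth if you also use the stylesheet elsewhere.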

Let’s try it out.

$data | convertto-html @convertParams | out-file d:\temp\a.htm

Using embedded CSS (Image credit: Jeff Hicks)

Pretty good, although if you look closely, you will see that I have lost my report title. When you use a header, the -Title parameter is ignored. Let’s revise the header to insert the Title tag and try again.

$convertParams = @{ 
 PreContent = "<H1>$($env:COMPUTERNAME)</H1>" 
 PostContent = "<H5><i>$(get-date)</i></H5>"
 head = @"
 <Title>Event Log Report</Title>
<style>
body { background-color:#E5E4E2;
       font-family:Monospace;
       font-size:10pt; }
td, th { border:0px solid black; 
         border-collapse:collapse;
         white-space:pre; }
th { color:white;
     background-color:black; }
table, tr, td, th { padding: 0px; margin: 0px ;white-space:pre; }
table { margin-left:25px; }
h2 {
 font-family:Tahoma;
 color:#6D7B8D;
}
.footer 
{ color:green; 
  margin-left:25px; 
  font-family:Tahoma;
  font-size:8pt;
}
</style>
"@
}

$data | convertto-html @convertParams | out-file d:\temp\a.htm
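
You can also sanity-check this behavior in memory before opening the file. This quick sketch uses Get-Date as arbitrary sample data and confirms that a Title tag supplied via -Head survives into the generated HTML:

```powershell
# Sketch: confirm that a <Title> placed in -Head ends up in the output,
# since the -Title parameter is ignored whenever -Head is supplied.
$head = "<Title>Event Log Report</Title>"
$html = Get-Date | Select-Object -Property DateTime | ConvertTo-Html -Head $head
($html -join "`n") -match 'Event Log Report'   # outputs True
```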

Corrected header with title (Image credit: Jeff Hicks)

And while we are on the subject of style, here is a tip on how to get alternating row bands, which is helpful for very long tables. Insert this code into your style sheet.

tr:nth-child(odd) {background-color: lightgray}

You can set the background-color value to any valid HTML color value. Another style trick is to set the table to fill more of the page and also to automatically resize when you resize the browser.

table { width:95%;margin-left:5px; margin-bottom:20px;}

Here is my final complete code:

$convertParams = @{ 
 PreContent = "<H1>$($env:COMPUTERNAME)</H1>" 
 PostContent = "<H5><i>$(get-date)</i></H5>"
 head = @"
 <Title>Event Log Report</Title>
<style>
body { background-color:#E5E4E2;
       font-family:Monospace;
       font-size:10pt; }
td, th { border:0px solid black; 
         border-collapse:collapse;
         white-space:pre; }
th { color:white;
     background-color:black; }
table, tr, td, th { padding: 2px; margin: 0px ;white-space:pre; }
tr:nth-child(odd) {background-color: lightgray}
table { width:95%;margin-left:5px; margin-bottom:20px;}
h2 {
 font-family:Tahoma;
 color:#6D7B8D;
}
.footer 
{ color:green; 
  margin-left:25px; 
  font-family:Tahoma;
  font-size:8pt;
}
</style>
"@
}

$data | convertto-html @convertParams | out-file d:\temp\a.htm

A better formatted report (Image credit: Jeff Hicks)

This is a much nicer looking report and I hope you will try the code out for yourself to see this in action.

My sharp-eyed readers may have noticed a section of the style sheet that defines a footer. However, the final result does not appear to use it. How is that supposed to work? Or let’s say I want any event log with 0 entries to display in red. How can I do that? I will show you how in the next article.

The post Creating Portable HTML in PowerShell appeared first on Petri.

Motorcycle helmets finally get decent heads-up display navigation

The content below is taken from the original (Motorcycle helmets finally get decent heads-up display navigation), to continue reading please visit the site. Remember to respect the Author & Copyright.

I’m a huge proponent of reducing any and all distractions while riding a motorcycle, scooter, or moped. Helmets and padded gear are great, but when you get down to it, riders are still just squishy people zipping through traffic next to giant machines that could kill you if a driver sneezes or decides to text a friend. So the idea of a HUD (Heads Up Display) for a motorcycle is equal parts intriguing and terrifying.

Done right, it keeps your head up and eyes off your gauges and whatever navigation system you have strapped to your handlebars. Done wrong, and it’s a one-way ticket to the emergency room because you were spending too much time going through menus and trying to find relevant information instead of paying attention to the car in front of you that just slammed on its brakes. A fender bender in a car is an annoyance. A fender bender on a bike could land you in the ICU.

In comes the $700 Nuviz, a HUD for full-face helmets. The device’s purpose is to keep you informed while riding without adding too much distraction that could lead to hospitalization. And for the most part, it succeeds.

It shows your speed, navigation, maps, calls and your music via a tiny mirrored see-through display that sits below the vision line of your right eye. It’s there when you need it and you can almost ignore it when you don’t.

To see the information about your ride, you peer downward at the display which is focused about 13.5 feet in front of you. That means you’re refocusing your eyes, but the same thing happens when you look at your gauge cluster. Fortunately, the main screen is tailored for quick glances. Your speed and next turn are easily discernible by quickly peeking downward without moving your head which is Nuviz’s advantage over the dials that came with your bike.

Plus, the Nuviz supports audio and comes with the headset that can be installed in a helmet or it’ll sync to Bluetooth-enabled helmets. It’s a bit of a multimedia experience right on your noggin.

My apprehension about the potential for distraction intensified when I installed it on my helmet. From the outside, it’s huge. And while its 8.5 ounce weight didn’t bother me, for some folks with lightweight helmets, that might be a deal breaker. But when I actually put on my helmet all I saw was the tiny display which was a relief.

Riding with the Nuviz also reduced my anxiety. With the combination of visual and audio cues, I was finally able to navigate to a destination without pulling over and checking my phone, or attaching it to the mount I bought a few years back and have only used twice because I’m sure my iPhone will fall out of it and break into a thousand pieces on US 101.

The display was bright enough to be legible in direct sunlight, although there were some tiny rainbow-colored dots that appeared in the glass. It wasn’t enough to block the information, but it’s there and while beautiful at times, it’s just another thing you’ll catch yourself looking at.

Navigating the menu system was simple enough with the supplied controller, which you attach near your left handlebar. An up-and-down lever scrolls through the main features and is surrounded by four action buttons. After a few hours of riding, using it becomes as second nature as activating my turn signals, high beams or horn.

The controller is also how you turn on the device’s 8-megapixel camera. With it, you can take video and photos of your ride. The quality won’t replace a GoPro, but the photos were good enough to capture deer in the brush next to the road. The 1080p video quality is reminiscent of a smartphone from five years ago. It’s basically satisfactory, and really the allure is that you don’t have to stop and pull out a camera to capture a moment.

It also might lead to gigantic slideshows; I took 100 photos during a ride around Mount Tamalpais. It’s very easy to just tap the photo button on the controller while riding.

Yet those are the kind of rides the Nuviz is built for. Long excursions on roads without heavy traffic. It was only during that type of jaunt that I felt comfortable turning on music (something I would never do while riding in San Francisco) and taking photos. The companion app makes creating a route with multiple stops that you send to the device a breeze and the actual navigation both on screen and in ear, was easy to follow without being overly distracting.

The device and controller are both easy to remove and reattach to your bike and helmet, so you don’t have to check your bike every five minutes during lunch breaks. That also means you can ditch the whole system for short rides around town. In my experience, the Nuviz didn’t add much value to my daily commute. I know where I’m going, and the roads are far too congested to even think about using it.

Plus, when it’s attached to your helmet, it’s never 100 percent gone. The tiny display, while helpful, is still in your peripheral vision. You sort of learn to ignore it, but when you’re lane splitting (only legal in California) and keeping an eye out for one of San Francisco’s many bike-swallowing potholes, you don’t need another distraction, no matter how small.

But for weekend jaunts, the Nuviz is outstanding. Its eight-hour battery life should keep you on your route for the entire day, and its on-board GPS and downloaded maps mean that even if you lose signal, you won’t get lost. For Kawasaki KLR and BMW GS riders, it’s a great little companion. But for daily riders in congested cities, it’s best to focus on the act of riding.

Microsoft Azure leads the industry in ISO certifications

The content below is taken from the original (Microsoft Azure leads the industry in ISO certifications), to continue reading please visit the site. Remember to respect the Author & Copyright.

We are happy to announce that Microsoft Azure recently completed a new set of independent third-party ISO and Cloud Security Alliance (CSA) audits to expand our certification portfolio. Azure leads the industry with the most comprehensive compliance coverage, enabling customers to meet a wide range of regulatory obligations. Better still, a program with industry-leading depth and coverage specific to ISO is especially useful to our customers globally, as ISO standards provide baselines for information security management that many other standards across regulated industries and markets worldwide rely upon.
 
A combination of our ISO and CSA certifications exists in all four Azure clouds, and coverage is now newly expanded across additional clouds.

New and Expanded ISO
Achieving the ISO 20000-1:2011 certification specifically underscores Azure’s commitment to deliver quality IT service management to customers and demonstrates Azure’s capability to monitor, measure, and improve service management processes. 

Expanded CSA STAR Certification
The CSA STAR Certification involves a rigorous independent third-party assessment of a cloud provider’s security posture that combines ISO 27001 certification with criteria specified in the CSA Cloud Controls Matrix. Azure maintains the highest possible Gold Award for the maturity capability assessment of the CSA STAR Certification, and as previously stated, is now available in the Azure Government cloud.

Depth of Coverage – the Most Services
In addition to the broadest compliance portfolio amongst enterprise cloud providers, Azure maintains the deepest coverage as measured by how many customer-facing services are in audit scope.  For example, recently completed Azure ISO 27001 and ISO 27018 audits have 61 customer-facing services in audit scope, making it possible for customers to build realistic ISO-compliant cloud applications with end-to-end platform coverage. 

Go Read Our Reports and Certificates!
The CSA STAR Certification for Microsoft Azure can be downloaded from the CSA Registry. Our ISO reports and certificates can be downloaded from the Service Trust Portal.

Case study finds the drawbacks of Facebook’s ‘free’ internet

The content below is taken from the original (Case study finds the drawbacks of Facebook’s ‘free’ internet), to continue reading please visit the site. Remember to respect the Author & Copyright.

Facebook’s successor to its Internet.org "web for all" service, Free Basics, has had a troubled rollout. India notoriously shut it out of the country back in February 2016 for violating net neutrality (prioritizing Facebook services), with Egypt blocking it months later over privacy concerns. Still, the social titan has deployed Free Basics’ free internet to 63 countries in Africa, Asia and Latin America — and wants to bring it to the United States soon. But a report released today criticizes the service’s shortcomings and claims its decision to allow access to some sites but not others violates net neutrality.

Citizen media nonprofit Global Voices summarized its 36-page paper with several key complaints. First, Free Basics does not adequately meet the linguistic needs of the regions where it operates, especially given its failure to operate in more than one language in multilingual countries like Pakistan and the Philippines. The thousand-odd sites supported by the service are mostly provided by corporations in the US and UK, leaving a huge gap in services relevant to local issues and needs. In short, it doesn’t connect users to the full internet — and even prevents them from accessing competing social media services — but continues to collect data on them.

It’s this prioritized access to some services, and the continual pressure on users to sign up for Facebook’s app, that leads Global Voices to claim Free Basics violates net neutrality. The platform separates third-party services into two tiers: those that meet particular technical requirements that are difficult for low-resource organizations to fulfill (and that route their traffic through Facebook’s servers), which are prominently presented; and those that don’t, which are tucked away. Further, Free Basics pointedly doesn’t include email, or even Twitter, limiting users’ ability to communicate online. Plus, Facebook is harvesting user behavior data the whole time.

This isn’t surprising criticism, given India’s resistance to these very practices — not just from Free Basics, but from every "zero-rating" service, as they’re known. But beyond the report’s broad complaints are specific statistics suggesting people aren’t even using Free Basics for its intended purpose as a primary internet connection. A survey of 8,000 Free Basics users by the Alliance for Affordable Internet found that only 12 percent had never been on the internet before. Furthermore, 35 percent use Free Basics as a supplementary service to regular data plans and public WiFi.

When reached for comment, Facebook rejected the sample size of the study as not representative of its many users.

"Our goal with Free Basics is to help more people experience the value and relevance of connectivity through an open and free platform," a Facebook spokesperson said. "The study released by Global Voices, and the subsequent article in the Guardian, include significant inaccuracies. The study, based on a small group of Global Voices contributors in only a handful of countries, does not reflect the experiences of the millions of people in more than 65 countries who have benefited from Free Basics."

Source: Global Voices

First Battery-Free Cellphone Harvests Power from Ambient Radio Signals, Light

The content below is taken from the original (First Battery-Free Cellphone Harvests Power from Ambient Radio Signals, Light), to continue reading please visit the site. Remember to respect the Author & Copyright.

http://bit.ly/2uMXmoi

Installing the new ROM release for your Titanium

The content below is taken from the original (Installing the new ROM release for your Titanium), to continue reading please visit the site. Remember to respect the Author & Copyright.

Elesar emailed all its clients and announced on the newsgroups that there was a new software update for the Titanium. In this article we will download and install it, with a follow-up article to look at the new features.

As well as the ‘vanilla’ Titanium, CJEmicro’s and R-Comp sell systems based on the board. As my machine is from R-Comp, I checked with Andrew Rawnsley about whether it was a good idea to install now or wait for an official update from them. R-Comp is indeed planning a proper machine-specific update once it has completed its own testing. You can wait for that or use the new update straight away. If you have a machine from CJEmicro’s, I would confirm their advice first.

The Elesar download link actually takes you to a download page on the ROOL website where you have a choice of downloads, depending on how ‘cutting edge’ you would like to be. The bottom item is the recommended stable release and it is twice as big because it includes a second version of the ROM.

The official download is the 5MB file, which contains everything you need to upgrade your Titanium, along with a clear and helpful readme.

There is a potential risk for things to go wrong, so you are advised to make sure you have backups of all your data before you start (always a good idea to keep regular backups in any case!). Murphy’s law generally means the more prepared you are the less likely things will go wrong…

Two versions of the new OS release are supplied, with and without zpp included. Which one you choose will be down to your personal preferences and the software you are using.

The actual upgrade consists of 3 steps:-
1. Update the software on your disk (using Merge to update !Boot with any changes).
2. Sanity check by soft-loading the ROM on your machine using the softload obey file, just to make sure. If there are any issues, you can revert to the original with a quick reboot.
3. Use the FlashSQPI application to burn a new copy of the ROM onto your system. This can be a little time-consuming and should not be interrupted. Once it is done, you can reboot the machine.

Before you do any of this, it is worth reading the readme fully TWICE.

It is very easy to see if the machine has been updated.

You now have an updated machine running the latest version of RISC OS. Next time we will look at what is new…


netstress (2.0.9686)

The content below is taken from the original (netstress (2.0.9686)), to continue reading please visit the site. Remember to respect the Author & Copyright.

NetStress – Network Benchmarking Tool For Wired and Wireless Networks

trid (2.24.20170722)

The content below is taken from the original (trid (2.24.20170722)), to continue reading please visit the site. Remember to respect the Author & Copyright.

TrID is a utility designed to identify file types from their binary signatures

Gigabyte jumps into SBC market with 3.5-inch Apollo Lake model

The content below is taken from the original (Gigabyte jumps into SBC market with 3.5-inch Apollo Lake model), to continue reading please visit the site. Remember to respect the Author & Copyright.

Gigabyte’s “GA-SBCAP3350” SBC is equipped with a dual-core Celeron N3350, plus 2x GbE, SATA, and USB 3.1 ports, and HDMI, mSATA, and mini-PCIe. Motherboard maker Gigabyte posted a product page on what appears to be its first 3.5-inch SBC. The fanless GA-SBCAP3350 is designed to run Windows 10 on a dual-core Celeron N3350 from Intel’s […]

User Group Newsletter – July 2017

The content below is taken from the original (User Group Newsletter – July 2017), to continue reading please visit the site. Remember to respect the Author & Copyright.

Sydney summit

We’re getting excited as the Sydney summit draws closer. Don’t miss out on your early bird tickets; sales end September 1. Find your summit pocket guide here. It includes information about where to stay, featured speakers, a summit timeline, the OpenStack Academy and much more.

An important note regarding travel: all non-Australian residents will need a visa to travel to Australia (including United States citizens). Click here for more information.

Travel support program

Need some support to make the trip? You can apply for the travel support program. Superuser has a great article with handy tips to help you complete your application. Find the Superuser article here.

Superuser Awards

The Superuser Awards recognize teams using OpenStack to meaningfully improve business and differentiate in a competitive industry, while also contributing back to the community. Nominations for the OpenStack Summit Sydney Superuser Awards are open and will be accepted through midnight Pacific Time September 8. Find out more information via this Superuser article. 

User survey

Make your voice heard in the user survey. It’s available in seven languages, including Chinese (traditional and simplified), Japanese, Korean, German and Indonesian. Submissions close on August 11. Complete it here.

User committee elections

The User Committee elections are right around the corner. Active user contributors (AUCs), including operators, contributors, event organizers and working group members, are invited to apply. Nominations open July 31 and close on August 11. Find out all you need to know with this Superuser article.

Boston Summit recap

We hope you all enjoyed the Boston summit in May. Catch up on the sessions you weren’t able to see, via the OpenStack Foundation’s YouTube channel.

Certified OpenStack Administrator exam

OpenStack skills are in high demand as thousands of companies around the world adopt and productize OpenStack. The COA is the first professional certification offered by the OpenStack Foundation. It’s designed to help companies identify top talent in the industry, and help job seekers demonstrate their skills. For more information, head to the COA website. You can also check out this video. 

Call for papers

There are a number of open calls for papers for upcoming events.

OpenStack days

There are a number of upcoming OpenStack days this year across the globe. See the full calendar here.

New User groups

Welcome to our new user groups!

Looking for your local user group or want to start one in your area? Head to the groups portal.

Looking for speakers?

If you’re looking for speakers for your upcoming event or meetup, check out the OpenStack Speakers Bureau. It contains a fantastic repository of contacts, including information such as their past talks, languages spoken, country of origin and travel preferences.

Jobs portal

Find that next great opportunity via the OpenStack jobs portal.

Are you following the Foundation on social media? Check out each of our channels today.

Twitter, Linkedin, Facebook, YouTube

 

Contributing to the User Group newsletter

If you’d like to contribute a news item for next edition, please submit to this etherpad.

Items submitted may be edited down for length, style and suitability.

Windows AMI Patching and Maintenance with Amazon EC2 Systems Manager

The content below is taken from the original (Windows AMI Patching and Maintenance with Amazon EC2 Systems Manager), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Automation service, which is part of Amazon EC2 Systems Manager, helps you save time and the effort associated with routine management operations. Automation workflows are streamlined, repeatable, and auditable. For example, you can easily automate manual tasks such as golden image creation, baking applications into Amazon Machine Images (AMIs), or patching and updating agents.

In a recent post on the AWS Blog (Streamline AMI Maintenance and Patching Using Amazon EC2 Systems Manager), AWS announced the availability of the first public Document for Automation: AWS-UpdateLinuxAmi. This Document streamlines patching for Linux AMIs, allowing you to get started quickly with a predefined Automation workflow managed by AWS.

Today, AWS announces the availability of the Windows equivalent: AWS-UpdateWindowsAmi. The AWS-UpdateWindowsAmi Document is a great fit for building a hardened AMI from the monthly Windows AMI release, applying Windows patches and AWS agent updates to your proprietary Windows AMI, or baking applications into a golden Windows AMI as part of your CI/CD pipeline. You can also use your custom AMIs as a source for images that meet organizational IT policies. Documents help you centrally create, manage, and share the code for IT operations and management tasks that Systems Manager can perform on your managed infrastructure.

The AWS-UpdateWindowsAmi Document automates the following workflow:

  1. Launch a temporary EC2 instance from a source Windows AMI.
  2. (Optional) Invoke a user-provided, pre-update hook script.
  3. Update EC2Config or EC2Launch (determined by the version of Windows launched in step 1).
  4. Update the SSM Agent.
  5. Update the AWS PV driver.
  6. Install Windows updates.
  7. (Optional) Invoke a user-provided, post-update hook script on the instance.
  8. Run Sysprep /generalize.
  9. Stop the temporary instance.
  10. Create a new AMI from the stopped instance.
  11. Tag the new image.
  12. Terminate the instance.

Prerequisites

If you haven’t used Automation before, you must configure IAM roles and permissions. This CloudFormation template completes the required prerequisite actions. After logging on to your AWS account, choose Launch Stack. The template:

  • Creates a service role for Automation.
  • Grants PassRole permission to authorize a user to provide the service role.
  • Creates an instance role to enable instance management under Systems Manager.

Executing Automation

1. In the EC2 console, choose Systems Manager, Automations.

2. Choose Run automation document.

3. Under Document name, choose AWS-UpdateWindowsAmi. Use the $DEFAULT document version.

4. For the SourceAmiId variable, enter the ID of the Windows AMI to update. This is the only required field. The InstanceIamRole and AutomationAssumeRole variables are assigned default values that match the resource IDs generated by the CloudFormation template executed earlier.

Optionally, specify values for the following (descriptions for each variable are listed in the console):

  • Target AMI name
  • Instance type
  • KBs to include or exclude
  • Update categories (Critical Update, Security Update)
  • Severity levels (MSRC level such as Critical, Important, Low)
  • Any pre- or post-update scripts to run

5. Choose Run Automation.

6. Monitor progress in the Automation Steps tab, and view the step-level outputs.

After execution is complete, you can view any outputs returned by the workflow in the Description tab. In this example, AWS-UpdateWindowsAmi returns the new AMI ID, and the new image is tagged with your source AMI ID.

Next, choose Images, AMIs to view your new AMI.

There is no additional charge to use the Automation service, though any resources created by a workflow incur published usage charges. If you terminate AWS-UpdateWindowsAmi before it reaches the “Terminate Instance” step, you should shut down the temporary instance created by the workflow.

Conclusion

Now that you’ve successfully run AWS-UpdateWindowsAmi, you may want to create default values for the service and instance roles. You can customize your workflow by creating your own Automation Document based on AWS-UpdateWindowsAmi. For more details, see Create an Automation Document. After you’ve created your Document, you can write additional steps and add them to the workflow.

Example steps include:
• Updating an Auto Scaling group with the new AMI ID (aws:invokeLambdaFunction action type)
• Creating an encrypted copy of your new AMI (aws:encryptedCopy action type)
• Validating your new AMI using Run Command with the RunPowerShellScript Document (aws:runCommand action type)
Automation also makes a great addition to a CI/CD pipeline for application bake-in, and can be invoked as a CLI build step in Jenkins. For details on these examples, see the Systems Manager Automation technical documentation. Be sure to follow the Management Tools blog for additional deep dives on maintaining Windows AMIs using AWS-UpdateWindowsAmi.

About the author

Venkat Krishnamachari is a Product Manager in the Amazon EC2 Systems Manager team. Venkat is excited by the opportunities presented by cloud computing, and loves helping customers realize the value of efficient infrastructure and management. In his personal time Venkat volunteers with NGOs and loves producing live theater and music shows.

Help keep your Google Cloud service account keys safe

The content below is taken from the original (Help keep your Google Cloud service account keys safe), to continue reading please visit the site. Remember to respect the Author & Copyright.

By Grace Mollison, Cloud Solutions Architect

Google Cloud Platform (GCP) offers robust service account key management capabilities to help ensure that only authorized and properly authenticated entities can access resources running on GCP.

If an application runs entirely on GCP, managing service account keys is easy: they never leave GCP, and GCP performs tasks like key rotation automatically. But many applications run on multiple environments: local developer laptops, on-premises databases and even environments running in other public clouds. In that case, keeping keys safe can be tricky.

Ensuring that account keys aren’t exposed as they move across multiple environments is paramount to maintaining application security. Read on to learn about best practices you can follow when managing keys in a given application environment.

Introducing the service account

When using an application to access Cloud Platform APIs, we recommend you use a service account, an identity whose credentials your application code can use to access other GCP services. You can access a service account from code running on GCP, in your on-premises environment or even another cloud.

If you’re running your code on GCP, setting up a service account is simple. In this example, we’ll use Google Compute Engine as the target compute environment.

Now that you have a service account, you can launch instances that run as it. (Note: you can also temporarily stop an existing instance and restart it with an alternative service account.)

Next, install the client library for the language in which your application is written. (You can also use the SDK, but the client libraries are the most straightforward and recommended approach.) With this, your application can use the service account credentials to authenticate from code running on the instance. You don’t need to download any keys: because you’re using a Compute Engine instance, GCP creates and rotates the keys automatically.

Protecting service account keys outside GCP

If your application is running outside GCP, follow the steps outlined above, but install the client library on the destination virtual or physical machine. When creating the service account, make sure that you’re following the principle of least privilege. This is good practice in all cases, but it becomes even more important when you download credentials, as GCP no longer manages the key, increasing the risk of it being inadvertently exposed.

In addition, you’ll need to create a new key pair for the service account, and download the private key (which is not retained by Google). Note that with external keys, you’re responsible for security of the private key and other management operations such as key rotation.

Applications need to be able to use external keys to be authorized to call Google Cloud APIs, and the Google API client libraries facilitate this: they use Application Default Credentials to obtain authorization credentials when called. When running an application outside of GCP, you can authenticate as the service account for which the key was generated by pointing the GOOGLE_APPLICATION_CREDENTIALS environment variable to the location where you downloaded the key.


Best practices when downloading service account keys

Now that you have a key that grants access to GCP resources, you need to manage it appropriately. The remainder of this post focuses on best practices to avoid exposing keys outside their intended scope of use:

  1. If you’ve downloaded the key for local development, make sure it’s not granted access to production resources.
  2. Rotate keys using the following IAM Service Account API methods:
    • ServiceAccounts.keys.create()
    • Replace old key with new key
    • ServiceAccounts.keys.delete()
  3. Consider implementing a daily key rotation process and provide developers with a cloud storage bucket from which they can download the new key every day.
  4. Audit service accounts and keys using either the serviceAccount.keys.list() method or the Logs Viewer page in the console.
  5. Restrict who is granted the Service Account Actor and Service Account User roles for a service account, as holders can act as the service account and reach all the resources it can access.
  6. Always use the client libraries and the GOOGLE_APPLICATION_CREDENTIALS for local development.
  7. Prevent developers from committing keys to external source code repositories.
  8. And finally, regularly scan external repositories for keys and take remedial action if any are located.

Now let’s look at ways to implement some of these best practices.

Key rotation

Keyrotator is a simple CLI tool written in Python that you can use as is, or as the basis for a service account rotation process. Run it as a cron job on an admin instance, say, at midnight, and write the new key to Cloud Storage for developers to download in the morning.

It’s essential to control access to the Cloud Storage bucket that contains the keys. Here’s how:

  1. Create a dedicated project setup for shared resources.
  2. Create a bucket in the dedicated project; do NOT make it publicly accessible.
  3. Create a group for the developers who need to download the new daily key.
  4. Grant read access to the bucket using Cloud IAM by granting the storage.objectViewer role to your developer group for the project with the storage bucket.

If you wish to implement stronger controls, use the Google Cloud Key Management Service to manage secrets using Cloud Storage.


Prevent committing keys to external source code repositories

You should not need to keep any keys alongside your code, but accidents happen and keys may inadvertently be pushed out with it.

One way to avoid this is not to use external repositories and put processes in place to prevent their use. GCP provides private git repositories for this use case.

You can also put preventive measures in place to stop keys from being committed to your git repo. One open-source tool you can use is git-secrets. Once installed, it is configured as a git hook and runs automatically when you run the ‘git commit’ command.

You need to configure git-secrets to check for patterns that match service account keys. This is fairly straightforward to configure:

Here’s a service account private key when downloaded as a JSON file:

{
 "type": "service_account",
 "project_id": "your-project-id",
 "private_key_id": "randomsetofalphanumericcharacters",
 "private_key": "-----BEGIN PRIVATE KEY-----\thisiswhereyourprivatekeyis\n-----END PRIVATE KEY-----\n",
 "client_email": "[email protected]",
 "client_id": "numberhere",
 "auth_uri": "http://bit.ly/2vkRh07;,
 "token_uri": "http://bit.ly/2uK9ebm;,
 "auth_provider_x509_cert_url": "http://bit.ly/2vk9fzV;,
 "client_x509_cert_url": "http://bit.ly/2uK4iDI;
}

To locate service account keys, look for patterns that match the key field names, such as ‘private_key_id’ and ‘private_key’. To have git-secrets flag any service account files in the local git folder, register the following patterns:

git secrets --add 'private_key'
git secrets --add 'private_key_id'

Now, when you try to run ‘git commit’ and it detects the pattern, you’ll receive an error message and be unable to do the commit unless mitigating action is taken.

This screenshot shows a (now deleted) key to illustrate what developers see when they try to commit files that may contain private details.

Scan external repositories for keys

To supplement git-secrets, you can also run the open-source tool trufflehog. Trufflehog searches a repo’s history for secrets, using Shannon-entropy analysis to find keys that may have been uploaded.


Conclusion

In this post, we’ve shown you how to help secure service account keys, whether you’re using them to authenticate applications running exclusively on GCP, in your local environment or in other clouds. Follow these best practices to avoid accidentally revealing keys and to control who can access your application resources. To learn more about authentication and authorization on GCP, check out our Authentication Overview.