Addison Lee is looking into self-driving taxis too

The content below is taken from the original (Addison Lee is looking into self-driving taxis too), to continue reading please visit the site. Remember to respect the Author & Copyright.

Driverless car trials are happening all around the UK, but the epicentre is arguably Greenwich, in London. We’ve seen driverless pods ferry passengers around the O2 and autonomous delivery vans drop off Ocado hampers near Woolwich. That’s because a chunk of the borough has been ring-fenced as a "Smart Mobility Living Lab" for autonomous projects and research. The latest initiative to fall under that banner is "Merge," which will look at how a driverless ride-sharing service could work in the city. The work will be led by Addison Lee, alongside a consortium that includes Ford, the Transport Research Laboratory and Transport Systems Catapult.

The group will spend 12 months developing a "blueprint" for how a self-driving, publicly accessible transport system could be run in the city. It will cover a range of social, commercial and infrastructure issues, including how the public might react to driverless technology, how it could be designed to complement existing transport options, and the impact it would have on local communities and journey times. The "plan" will also include an "advanced simulation" and a general business model outlining the costs and recommended vehicle specifications. It’s not clear, however, how much testing will be done with self-driving cars in the real world.

Via: The Telegraph

Source: Addison Lee (PR)

Office 365 Growth Doesn’t Reduce SLA Performance

Strong SLA Performance Since 2013

Office 365 continues to grow strongly and contribute to Microsoft’s cloud resources. Now supporting more than 100 million monthly active users, Office 365 has experienced some recent hiccups in service quality. However, the quarterly Service Level Availability (SLA) data that Microsoft posts for Office 365 shows that availability has been robust since 2013 (Table 1), which is when Microsoft first started to publish the SLA results.

Q1 2013: 99.94%   Q2 2013: 99.97%   Q3 2013: 99.96%   Q4 2013: 99.98%
Q1 2014: 99.99%   Q2 2014: 99.95%   Q3 2014: 99.98%   Q4 2014: 99.99%
Q1 2015: 99.99%   Q2 2015: 99.95%   Q3 2015: 99.98%   Q4 2015: 99.98%
Q1 2016: 99.98%   Q2 2016: 99.98%   Q3 2016: 99.99%   Q4 2016: 99.99%
Q1 2017: 99.99%   Q2 2017: 99.97%

Table 1: Office 365 SLA performance since 2013

The latest data, posted for Q2 2017 (April through June), shows that Office 365 delivered 99.97% availability in that period. The Q2 result marked a slight decrease in availability over the prior seven quarters. Even so, the fact that a massive cloud service posts 99.97% availability is impressive.

Financial Commitment

Microsoft takes the SLA seriously because they “commit to delivering at least 99.9% uptime with a financially backed guarantee.” In other words, if the Office 365 SLA for a tenant slips below 99.9% in a quarter, Microsoft will compensate the customer with credits against invoices.

Last November, I posed the question of whether anyone still cared about the Office 365 SLA. The response I received showed that people do care, largely for two reasons. First, IT departments compare the Office 365 SLA against the SLA figures they have for on-premises servers to reassure the business that the cloud is a safe choice. Second, they use the data to resist attempts to move to other platforms, such as Google G Suite. Google offers a 99.9% SLA guarantee for G Suite, but it is not as transparent about publishing its results.

Downtime Matters

Microsoft calculates the Office 365 SLA in terms of downtime: minutes when incidents deprive users of a contracted service such as Exchange Online or SharePoint Online. As an example of the calculation, if you assume that Microsoft has 100 million active users for Office 365, the total number of user-minutes available to Office 365 users in a 90-day quarter is 12,960,000,000,000. Achieving a 99.97% SLA means that incidents caused downtime of 3,888,000,000 minutes, or 64,800,000 hours. These are enormous numbers, but put in the context of the size of Office 365, each Office 365 user lost just 39 minutes to downtime during the quarter.
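That arithmetic can be sketched in a few lines (assuming, as above, 100 million active users and a 90-day quarter):

```python
# Back-of-the-envelope Office 365 downtime arithmetic for one quarter,
# assuming 100 million active users and a 90-day quarter.
USERS = 100_000_000
MINUTES_PER_QUARTER = 90 * 24 * 60           # 129,600 minutes per user

total_minutes = USERS * MINUTES_PER_QUARTER  # 12,960,000,000,000 user-minutes
sla = 0.9997
downtime_minutes = total_minutes * (1 - sla)

print(f"{downtime_minutes:,.0f} minutes")          # 3,888,000,000 minutes
print(f"{downtime_minutes / 60:,.0f} hours")       # 64,800,000 hours
print(f"{downtime_minutes / USERS:.1f} min/user")  # 38.9 min/user
```

Spread over 100 million users, the eye-watering global total works out to roughly 39 minutes of lost time per user per quarter.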

Of course, some users experienced zero downtime. Incidents might not have affected their tenant or they might not have been active when an incident happened. On the other hand, some tenants might have had a horrible quarter. Remember that Office 365 spreads across twelve datacenter regions and the service varies from region to region and from tenant to tenant, a fact that you should always remember when a Twitter storm breaks to discuss a new outage.

Influencing Factors

To better understand what the Office 365 SLA means, we need to take some other factors into account. These are described in Microsoft’s Online Services Consolidated Service Level Agreement.

First, among the exclusions applied by Microsoft we find they can ignore problems that “result from the use of services, hardware, or software not provided by us, including, but not limited to, issues resulting from inadequate bandwidth or related to third-party software or services;”

Defining what inadequate bandwidth means is interesting. For example, if a new Office feature like AutoSave consumes added bandwidth and causes a problem for other Office 365 applications, is that an issue for Microsoft or the customer?

Second, although the term “number of users” occurs 38 times in Microsoft’s SLA document, no definition exists for how to calculate the number of users affected by an incident. This might be as simple as saying that Microsoft counts all the licensed users when an incident affects a tenant. On the other hand, it is possible that an incident is localized and does not affect everyone belonging to a tenant. Knowing how many users an incident affects is important because the number of lost minutes depends on how many people cannot work because an incident is ongoing.

Third, Microsoft must accept that an incident is real before it starts the downtime clock. A certain delay is therefore inevitable between a user first noticing a problem and the time when Microsoft support acknowledges that an issue exists. Users might not be able to work during this time, but Microsoft does not count it in the SLA statistics, so reported availability looks better than what end users actually experienced.
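As a toy illustration of that effect (all numbers here are hypothetical, not Microsoft's), compare availability measured from the acknowledged outage with availability as users perceived it:

```python
# Toy illustration (hypothetical numbers): the downtime clock only starts once
# support acknowledges an incident, so measured availability looks better than
# what users actually experienced.
MINUTES_PER_QUARTER = 90 * 24 * 60  # 129,600

user_outage = 120   # minutes during which users could not work
ack_delay = 30      # minutes before support acknowledged the incident
counted_outage = user_outage - ack_delay  # only 90 minutes count

measured = 1 - counted_outage / MINUTES_PER_QUARTER
perceived = 1 - user_outage / MINUTES_PER_QUARTER
print(f"measured  {measured:.4%}")
print(f"perceived {perceived:.4%}")
```

Even a modest acknowledgement delay flatters the published figure relative to the availability users saw.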

You might also quibble about when Microsoft declares an incident over and stops the downtime clock as it might take some further time before Microsoft fully restores a service to the satisfaction of a tenant. On the other hand, Microsoft does count time when an incident is in progress outside the normal working day when users might not be active, which evens things out somewhat.

Local Performance More Important than Global

As I have argued before, Office 365 is now so big that it is meaningless to report an SLA for the worldwide service. What tenants really care about is the quality and reliability of the service they receive from their local Office 365 datacenter region, whether that is in the U.S., Germany, Japan, or elsewhere. This is the reason why ISVs like Office365Mon and ENow Software create products to allow tenants to measure SLA or the quality of service on an ongoing basis.

It would be good if Microsoft sent tenant administrators a quarterly email to give the overall SLA performance and the performance for the tenant, together with details of the incidents that contributed to the quarterly result. Tenants could then compare Microsoft’s data with their own information about the reliability of Office 365. This would be real transparency about operations and make the SLA more realistic and usable.

SLA for Marketing

The calculation of the Office 365 SLA is completely in Microsoft’s hands. I make no suggestion that the data reported by Microsoft is inaccurate or altered in any way. As a user of the service since its launch in 2011, I consider Office 365 to be very reliable. However, the lack of detail (for example, SLA performance by datacenter region and service) makes it easy to think that the reported SLA data is purely a marketing tool.

In fact, the only true measurement of a service’s ability to deliver great availability is what its end users think. That measurement is unscientific, totally subjective, and prone to exaggeration, but it is the way the world works.

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros”, the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle.

The post Office 365 Growth Doesn’t Reduce SLA Performance appeared first on Petri.

UKCloud Launches Cloud GPU Services; Becomes First European Cloud Service Provider with NVIDIA GPU Accelerated Computing

UKCloud has today announced the launch of its Cloud GPU computing service based on NVIDIA virtual GPU solutions with NVIDIA Tesla P100 and M60 GPUs (graphics processing units). The service will support computational and visualization intensive workloads for UKCloud’s UK public sector and healthcare customers. UKCloud is not only the first Cloud Service Provider based in the UK or Europe to offer Cloud GPU computing services with NVIDIA GPUs, but is also the only provider specializing in public sector and healthcare and the specific needs of these customers.

“Building on the foundation of UKCloud’s secure, assured, UK-Sovereign platform, we are now able to offer a range of cloud-based compute, storage and GPU services to meet our customers’ complex workload requirements,” said Simon Hansford, CEO of UKCloud. “The public sector is driving more complex computational and visualization intensive workloads than ever before, not only for CAD development packages, but also for tasks like the simulation of infrastructure changes in transport, for genetic sequencing in health or for battlefield simulation in defense. In response to this demand, we have a greater focus on emerging technologies such as deep learning, machine learning and artificial intelligence.”

Many of today’s modern applications, especially in fields such as medical imaging or graphical analytics, need an NVIDIA GPU to power them, whether they are running on a laptop or desktop, on a departmental server or on the cloud. Just as organizations are finding that their critical business applications can be run more securely and efficiently in the cloud, so too they are realizing that it makes sense to host graphical and visualization intensive workloads there as well.

Adding cloud GPU computing services utilizing NVIDIA technology to support more complex computational and visualization intensive workloads was a customer requirement captured via UKCloud Ideas, a service that was introduced as part of UKCloud’s maniacal focus on customer service excellence. UKCloud Ideas proactively polls its clients for ideas and wishes for service improvements, enabling customers to vote on ideas and drive product improvements across the service. This has facilitated more than 40 feature improvements in the last year across UKCloud’s service catalogue from changes to the customer portal to product specific improvements.

One comment came from a UKCloud partner with many clients needing GPU capability: “One of our applications includes 3D functionality which requires a graphics card. We have several customers who might be interested in a hosted solution but would require access to this functionality. To this end it would be helpful if UKCloud were able to offer us a solution which included a GPU.”

Listening to its clients in this way and acting on their suggestions to improve its service by implementing NVIDIA GPU technology was one of a number of initiatives that enabled UKCloud to win a 2017 UK Customer Experience Award for putting customers at the heart of everything, through the use of technology.

“The availability of NVIDIA GPUs in the cloud means businesses can capitalize on virtualization without compromising the functionality and responsiveness of their critical applications,” added Bart Schneider, Senior Director of CSP EMEA at NVIDIA. “Even customers running graphically complex or compute-intensive applications can benefit from rapid turn-up, service elasticity and cloud-economics.”

UKCloud’s GPU-accelerated cloud service, branded as Cloud GPU, is available in two versions: Compute and Visualization. Both are based on NVIDIA GPUs and initially available only on UKCloud’s Enterprise Compute Cloud platform. They will be made available on UKCloud’s other platforms at a later date. The two versions are as follows:

  • UKCloud’s Cloud GPU Compute: A GPU-accelerated computing service based on the NVIDIA Tesla P100 GPU. It supports applications developed using NVIDIA CUDA, which enables parallel co-processing on both the CPU and GPU. Typical use cases include searching for cures, trends and research findings in medicine along with genomic sequencing, data mining and analytics in social engineering, and trend identification and predictive analytics in business or financial modelling and other applications of AI and deep learning. Available from today with all VM sizes, Cloud GPU Compute costs an additional £1.90 per GPU per hour on top of the cost of the VM.
  • UKCloud’s Cloud GPU Visualisation: A virtual GPU (vGPU) service, utilizing the NVIDIA Tesla M60, that extends the power of NVIDIA GPU technology to virtual desktops and apps. In addition to powering remote workspaces, typical use cases include military training simulations and satellite image analysis in defense, medical imaging and complex image rendering. Available from the end of October with all VM sizes, Cloud GPU Visualisation costs an additional £0.38 per vGPU per hour on top of the cost of the VM.
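For a rough sense of what the published rates add up to, here is a sketch that assumes simple round-the-clock hourly billing with no discounts, and excludes the cost of the VM itself:

```python
# Rough cost sketch for UKCloud's published Cloud GPU rates (GBP per hour),
# assuming straightforward hourly billing on top of the VM cost.
GPU_COMPUTE_RATE = 1.90  # Tesla P100, per GPU per hour
GPU_VIS_RATE = 0.38      # Tesla M60, per vGPU per hour

def monthly_gpu_cost(rate_per_hour, gpus=1, hours_per_day=24, days=30):
    """Estimate the monthly GPU surcharge (excluding the VM itself)."""
    return rate_per_hour * gpus * hours_per_day * days

# One P100 running around the clock for a 30-day month:
print(f"Compute:       £{monthly_gpu_cost(GPU_COMPUTE_RATE):.2f}")
# One M60 vGPU for the same period:
print(f"Visualisation: £{monthly_gpu_cost(GPU_VIS_RATE):.2f}")
```

At these rates, a single always-on P100 adds roughly £1,368 per month and an M60 vGPU roughly £274, before the underlying VM charges.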

UKCloud has also received a top accolade from NVIDIA, that of ‘2017 Best Newcomer’ in the EMEA partner awards that were announced at NVIDIA’s October GPU Technology Conference 2017 in Munich. UKCloud was commended for making GPU technology more accessible for the UK public sector. As the first European Cloud Service Provider with NVIDIA GPU Accelerated Computing, UKCloud is helping to accelerate the adoption of Artificial Intelligence across all areas of the public sector, from central and local government to defence and healthcare, by allowing its customers and partners to harness the awesome power of GPU compute, without having to build specific rigs.

Send your Azure alerts to ITSM tools using Action Groups

At Ignite 2017, we announced the new IT Service Management (ITSM) Action in Azure Action Groups. As you might know, an Action Group is a reusable notification grouping for Azure alerts. Users can create an action group with actions such as sending an email or SMS or calling a webhook, and re-use it across multiple alerts. The new ITSM Action allows users to create a work item in the connected ITSM tool when an alert fires.

ITSM Connector Solution in Log Analytics

This action builds on top of the IT Service Management Connector Solution in Azure Log Analytics. The ITSM Connector solution provides a bi-directional connection with the ITSM tool of your choice. Currently the solution is in public preview and supports connections with ITSM tools such as System Center Service Manager, ServiceNow, Provance, and Cherwell. Today, through the ITSM Action, we are bringing the same integration capabilities to Azure alerts.

The IT Service Management Connector allows you to:

  • Create work items (incidents, alerts, and events) in the connected ITSM tool when a Log Analytics alert fires, or manually from a Log Analytics log record.
  • Combine the power of help desk data, such as incidents and change requests, and log data, such as activity and diagnostic logs, performance, and configuration changes, to mitigate incidents quickly.
  • Derive insights from incidents using the Azure Log Analytics platform.

Using the new ITSM Action

Before you can start using the ITSM Action, you will need to install and configure the IT Service Management Connector Solution in Log Analytics. Once you have the solution configured, you can follow the steps below to use the ITSM Action.

1. In Azure portal, click on Monitor.

2. In the left pane, click on Action groups.

Action groups

3. Provide Name and ShortName for your action group. Select the Resource Group and Subscription where you want your action group to get created.

Add action group

4. In the Actions list, select ITSM from the drop-down for Action Type. Provide a Name for the action and click on Edit details.

5. Select the Subscription where your Log Analytics workspace is located. Select the Connection (i.e., your ITSM connector name) followed by your Workspace name. For example, "MyITSMMConnector(MyWorkspace)."

ITSM Ticket

6. Select Work Item type from the drop-down.

7. Choose to use an existing template or complete the fields required by your ITSM product.

8. Click OK.

When creating/editing an Azure alert rule, use an Action Group which has an ITSM Action. When the alert triggers, a work item is created in the ITSM tool.

Note: Currently only Activity Log Alerts support the ITSM Action. For other Azure alerts, this action is triggered but no work item will be created.

We hope you will find this feature useful in integrating your alerting and Service Desk solutions. Learn more and get information on IT Service Management Connector Solution and Action Groups.

We would love to hear your feedback. Send us any questions or feedback to [email protected]

User Group Newsletter – September 2017

Sydney Summit news

Let’s get excited! The Sydney Summit is getting close: it’s less than 30 days away!

Get all your important information in this Summit guide. It includes suggestions about where to stay, featured speakers, a Summit timeline and much more.

The schedule is LIVE! Plan it via the website or on the go with the Summit app. Stuck on where to start with your schedule? See the best of what the Sydney Summit has to offer with this Superuser article.

A SUPER IMPORTANT NOTE REGARDING TRAVEL!!

*All* non-Australian residents will need a visa to travel to Australia (including United States citizens). Click here for more information

Forum

The Forum is where users and developers gather to brainstorm the requirements for the next release, gather feedback on the past version and have strategic discussions that go beyond just one release cycle.

The Forum schedule brainstorming is well underway! Check out the link below for key dates.

You can read more about the Forum here.

#HacktheStack – Cloud App Hackathon

Join us for Australia’s first OpenStack Application Hackathon Nov 3-5, 2017 at Doltone House in Australia’s Technology park, the weekend prior to the OpenStack Summit Sydney.

This 3-day event is organized by the local OpenStack community and welcomes students and professionals to hack the stack using the most popular open infrastructure platforms and application orchestration tools such as OpenShift (Kubernetes and Docker container orchestrator), Cloudify (TOSCA and Docker app orchestrator) and Agave (Science-as-a-Service gateway), in addition to the premier Open Source Infrastructure-as-a-Service: OpenStack!

There are great opportunities to get involved! You can sign up as a participant or share your expertise as a mentor. There are also some fantastic sponsorship opportunities available.

Click for more information here

 

2018 Summit news – Save the dates!

Back by popular demand, Vancouver is our first Summit destination in 2018. Mark your calendar for May 21-24, 2018.

Our second summit for 2018 will be heading to Berlin! Save the dates for November 13-15th!

New User Groups

We welcome our newest User Groups!

Looking for your local user group or want to start one in your area? Head to the groups portal.

OpenDev

Did you miss OpenDev? Read a great event summary here.

Catch up on all the talks with the event videos here.

 

OpenStack Days  

OpenStack Days bring together hundreds of IT executives, cloud operators and technology providers to discuss cloud computing and learn about OpenStack. The regional events are organized and hosted annually by local OpenStack user groups and companies in the ecosystem, and are often one or two-day events with keynotes, breakout sessions and even workshops. It’s a great opportunity to hear directly from prominent OpenStack leaders, learn from user stories, network and get plugged into your local community.

See when and where the upcoming OpenStack Days are happening.

 

OpenStack Marketing Portal

There is some fantastic OpenStack Foundation content available on the Marketing Portal.

This includes materials like:

  • OpenStack 101 slide deck
  • 2017 OpenStack Highlights & Stats presentation
  • Collateral for events (Sticker and T-Shirt designs)

Latest from Superuser

How to install the OpenStack Horizon Dashboard

How to get more involved in OpenStack

How to make your Summit talk a big success

How to deliver network services at the edge

Kickstart your OpenStack skills with an Outreachy Internship

 

Have you got a story for Superuser? Write to editor[at]openstack.org.

 

On a sad note…a farewell

Last week, an extremely valued member of our community, Tom Fifield, announced his departure as Community Manager from the OpenStack Foundation.

Tom, we thank you for your amazing, industrious efforts over the last five years! Your work has helped build the healthy community we have today, spanning more than 160 countries, where users and developers collaborate to make clouds better for the work that matters.

Thank you Tom!!

Read his full announcement here

 

Contributing to the User Group Newsletter.

If you’d like to contribute a news item for next edition, please submit to this etherpad.

Items submitted may be edited down for length, style and suitability.
We created a night light with Alexa skill that doesn’t talk. Literally does what a night light should do. Let us know what you think!

http://amzn.to/2g351Kb

Casio’s ‘2.5D’ printer can mimic leather and fabric

It’s safe to say that we’re all familiar with the term "3D printing" by now, but "2.5D printing?" As silly as it sounds, this may be a game changer for all the industrial designers out there. At CEATEC, Casio demoed this Mofrel printing technology that adds a range of textures to ordinary-looking sheets, before giving them the final touch with a 16-million-color inkjet.

The printed samples looked and felt surprisingly convincing with a great level of detail — down to the uneven surfaces plus puffiness of leather, the subtle bumps on stitches and even the coarseness of embroidered fabrics (especially for kimono designs). Hard materials like wood, stone, brick and ceramic can also be mimicked, though some of these may require additional coating for hardness or shininess.

The secret behind this trick lies within Casio’s "digital sheets." These appear to be slightly thicker sheets of paper, but in fact they contain a layer of micro powder between the inkjet layer and the paper or PET substrate. Each powder particle consists of a liquid hydrocarbon coated with a thermoplastic resin (acrylonitrile); this combination expands when exposed to heat and retains its structure once the heat is removed, leaving the mimicked texture on the sheet.

To better control the texture formation, the texture pattern is first printed onto the sheet’s top microfilm using carbon; these infrared-absorbing carbon particles focus the heat onto the desired areas of the micro powder layer. According to Casio Digital Art Division’s Executive Officer Hideaki Terada, the sheet’s expansion is currently capped at 1.7mm for the sake of stability, though 2mm to 2.5mm is technically possible, albeit with difficulty. The microfilm is then peeled off so that colors can be printed onto the textured surface, i.e., the inkjet layer.

The entire process takes around three to five minutes for a single-sided A4 "digital sheet," with each sheet costing around $10 (Terada might have been referring to the PET-based version for this). This may seem steep compared to ordinary sheets of paper, but considering the vast range of textures that this technology can simulate, it’s actually a lot cheaper — and faster — than prototyping with the real materials. This is a dream come true for pretty much all sorts of designers. The printer also supports A3 sheets, and you can get double-sided sheets for both sizes (A4 would take about nine minutes to process), though prices for these are unknown.

As for the Mofrel printer, the current version costs around a whopping five million yen or about $44,400, and it’ll be available as a B2B solution some time next year. That said, I was told that some top automobile makers as well as electronics companies got early access to Mofrel, and they are already using it for research and development. It goes without saying that this price point is a bit too much for us mere mortals, but Terada hinted that his team is already prototyping a consumer version, though we’re still one to 1.5 years away from its debut.

Source: Casio

One Night, the boutique last-minute hotel app, is expanding internationally to London

One Night, the last-minute booking app for boutique hotels, is expanding internationally – starting today with London.

One Night was created by Standard International, the parent company of Standard Hotels. The company originally launched an app called One Night Standard, which offered great deals on same-day bookings (starting at 3pm) at Standard properties. But after seeing demand from other boutique hotels looking for a similar offering, the company launched One Night, which now offers rooms at properties in 10 U.S. cities plus London.

While the general premise is similar to incumbent Hotel Tonight, One Night puts a greater emphasis on making sure only highly curated, trendy hotels make the cut. For example, in London One Night will have rooms available at The Ned, Soho House’s brand-new trendy stand-alone hotel.

While The Standard’s presence in London makes it a logical first step in terms of international expansion, Amar Lalvani, CEO of Standard International, explained that the team is already “looking towards other key markets throughout Europe”.

The app also has some cool features, like hour-by-hour city guides that suggest activities near your hotel for each hour of the day. Features like this will be especially helpful as the app continues to expand internationally, as U.S. travelers spending time abroad are always looking for better activity and restaurant suggestion apps.

One Night has seen strong growth since it launched in just NY and LA a little over a year ago. Since June, average daily bookings have grown 331%, and active users as a percentage of total downloads (i.e., how many downloaders actually use the app) grew to 48%, nearly double what it was last year.

“Dear Boss, I want to attend OpenStack Summit Sydney”

Want to attend the OpenStack Summit Sydney but need help with the right words for getting your trip approved? While we won’t write the whole thing for you, here’s a template to get you going. It’s up to you to decide how the Summit will help your team, but with free workshops and trainings, technical sessions, strategy talks and the opportunity to meet thousands of likeminded Stackers, we don’t think you’ll have a hard time finding an answer.

 

Dear [Boss],

I would like to attend the OpenStack Summit in Sydney, November 6-8, 2017. The OpenStack Summit is the largest conference for the open source cloud platform OpenStack, and the only one where I can get free OpenStack training, learn how to contribute code upstream to the project, and meet with other users to learn how they’ve been using OpenStack in production. The Summit is an opportunity for me to bring back knowledge about [Why you want to attend! What are you hoping to learn? What would benefit your team?] and share it with our team, while helping us get to know similar OpenStack-minded teams from around the world.

Companies like Commonwealth Bank, Tencent and American Airlines will be presenting, and technical sessions will demonstrate how teams are integrating other open source projects, like Kubernetes with OpenStack, to optimize their infrastructure. I’ll also be able to give Project Teams feedback about OpenStack so our user needs can be incorporated into upcoming software releases.

You can browse past Summit content at openstack.org/videos to see a sample of the conference talks.

The OpenStack Summit is the opportunity for me to expand my OpenStack knowledge, network and skills. Thanks for considering my request.

[Your Name]

 

Learn more about the Summit and register at http://bit.ly/2fVrRDy

textify (1.6.1)

Copy text content from dialog boxes.

Microsoft Launcher offers ‘Continue on PC’ option for Android phones

Microsoft is taking a step to make bridging your life between your phone and PC a little easier, at least for Android users. Today they announced Microsoft Launcher, which gives you the option to "Continue on PC," allowing you to work seamlessly between your Android phone and Windows computer. It’s similar to the Handoff feature on Apple products.

Android phones allow users to customize what’s called a "launcher" that appears when you press the Home button — a feature that iPhone doesn’t have. Microsoft Launcher, which is based on Fluent design, allows you to further customize what you see. In addition to "Continue on PC," you can place icons of your favorite people on your home screen. It also offers The Feed, where you can find your most-used apps, recent news, important events and more, all tailored to your needs. You can access The Feed by swiping right.

Microsoft Launcher also offers full customization capabilities. Not only do you have the ability to change backgrounds, but it offers "gesture" support to make you as productive as possible. The app is available in Preview for now, and it’s open to any Android user. If you’re on the Arrow Launcher beta, you’ll automatically get the Microsoft Launcher update. Support for Continue on PC will arrive with the Windows 10 Fall Creators Update.

Via: The Verge

Source: Microsoft

Microsoft is bringing its Edge browser to Android and iOS

The content below is taken from the original (Microsoft is bringing its Edge browser to Android and iOS), to continue reading please visit the site. Remember to respect the Author & Copyright.

While Microsoft is still officially working on the mobile version of Windows 10, it’s no secret that the company has all but given up on building its own mobile ecosystem. That only leaves Microsoft with one option: concede defeat and bring its applications to the likes of Android and iOS. That’s exactly what the company has been doing for the last few years and today the company announced that its Edge browser (the successor to the much — and often justly — maligned Internet Explorer) will soon come to iOS and Android, too. The company is also graduating its Arrow Launcher for Android and renaming it to Microsoft Launcher.

Even though Microsoft basically doesn’t play in the mobile OS and hardware space anymore, it still needs to have a presence on rival platforms if it doesn’t want to risk losing its relevancy on the desktop, too. Edge and the Microsoft Launcher are both key to this strategy because they’ll help the company to extend the Microsoft Graph even further. The Graph is Microsoft’s cross-platform system for allowing you to sync the state of your work and documents across devices and the company sees it as key to the future of Windows.

It’s no surprise then that this new version of Edge promises to make it easier to connect your PC and mobile device, with easy syncing of your browser sessions and other features.

For now, though, Edge for iOS and Android remain previews that you can sign up for here. The Android version will be available as a beta in the Google Play store soon and the iOS version will be made available through Testflight in the near future, too.

It’s worth noting that Microsoft won’t bring its own rendering engine to these platforms. Instead, it’ll rely on WebKit on iOS and the Blink engine on Android (and not the Android WebView control). On Android, this means that Microsoft is now actually shipping its own version of the Blink engine inside its app — and that’s not something we expected to hear anytime soon.

As for the launcher, it’s worth noting that it’s actually a quite capable Android launcher that nicely integrates with all of the Google apps you probably use every day (calendar, Gmail, etc.). Microsoft’s version of the Google Feed, that left-most homescreen on your Android device, is actually quite useful, too, and puts your calendar and other info front and center whereas Google now uses it for a general news feed.

Alexa is about to disappear into other devices, thanks to a new technology

The content below is taken from the original (Alexa is about to disappear into other devices, thanks to a new technology), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today, the reason you have an Amazon Alexa sitting on your coffee table or by a couch is because it needs to be able to hear you clearly when you speak. Put it too near the TV and it’ll start ordering strange things whenever someone, like South Park’s Cartman, says “Alexa!”.

What is required is the kind of sophisticated technology that can listen out for a human voice while ignoring all the other noise around. It’s literally harder than it sounds.

Now, a British firm is about to become the world’s first to offer this technology and incorporate the Amazon Alexa Voice Service.

XMOS, which a month ago closed a $15M funding round led by Infineon, has become the first European chip company to release a qualified Amazon Alexa Voice Service (AVS) development kit. It will also be the first company in the world with an AVS-qualified “far-field linear mic array”.

This technology combines the radar normally used in cars with microphones. That means you can take your voice-enabled speaker off your coffee table and put it back where it belongs – against a wall, discreetly integrated into other kit. Alexa would simply disappear into something else.

XMOS has 50 employees, making it the smallest company to achieve AVS qualification. Its competitors include Synaptics, which has a $1.27Bn market cap and 1,800 employees; Microsemi ($5.7Bn), with 4,400 employees; and Cirrus Logic ($3.28Bn), with 1,100 employees.

While there are competing solutions, XMOS has the first far-field linear array to support Alexa, meaning Alexa could melt into the background.

The linear array is the first Alexa qualified solution for things that have flat panels or sit against the wall – like about 90% of the technology in your home.

It means a future of accessing voice services via devices which are accessible and inconspicuous.

Smart mattress startup Eight shows off its Alexa integration

The content below is taken from the original (Smart mattress startup Eight shows off its Alexa integration), to continue reading please visit the site. Remember to respect the Author & Copyright.

Sleep tech startup Eight has always aimed to connect its products (first a mattress cover, then an actual smart mattress) to the broader smart home ecosystem.

Most recently, it launched a new Alexa skill that allows Eight owners to use the Echo and other Amazon devices to interact with their mattress, both to control it (say by adjusting the temperature) and to access their sleep data.

We visited the Eight office in New York to see the Alexa integration in action, and to talk to co-founder and CEO Matteo Franceschetti about his vision for the company.

“You don’t even need to touch your phone — you can just ask Alexa how you slept,” Franceschetti said. “Or you can ask Alexa to start warming your bed and Alexa will do it automatically for you.”

You can see more of our interview and brief demo in the video above. Pricing for Eight’s lineup of smart mattresses starts at $699.

Featured Image: Eight

Understanding Floating Point Numbers

The content below is taken from the original (Understanding Floating Point Numbers), to continue reading please visit the site. Remember to respect the Author & Copyright.

People learn in different ways, but sometimes the establishment fixates on explaining a concept in one way. If that’s not your way you might be out of luck. If you have trouble internalizing floating point number representations, the Internet is your friend. [Fabian Sanglard] (author of Game Engine Black Book: Wolfenstein 3D) didn’t like the traditional presentation of floating point numbers, so he decided to explain them a different way.

Instead of thinking of an exponent and a mantissa — the traditional terms — [Fabian] treats the exponent as a “window” that determines the range of the number between two powers of two. So the window could be from 1 to 2, from 1024 to 2048, or from 32768 to 65536.

Once you’ve determined the window, the mantissa — [Fabian] calls that the offset — divides the window range into 8,388,608 pieces, assuming a 32-bit float. Just like an 8-bit PWM value uses 128 for 50%, the offset (or mantissa) would be 4,194,304 if the value was halfway into the window.
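To make the window-and-offset view concrete, here’s a minimal Python sketch (our own illustration, not code from [Fabian]’s article) that pulls the three fields out of a 32-bit float and rebuilds the value from them:

```python
import struct

def decompose(x):
    """Split a 32-bit float into its sign, biased exponent and mantissa bits."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF   # the "window": [2^(e-127), 2^(e-126))
    mantissa = bits & 0x7FFFFF       # the "offset" within that window
    return sign, exponent, mantissa

def reconstruct(sign, exponent, mantissa):
    """Value = window start * (1 + offset / 2^23), for normal numbers."""
    window_start = 2.0 ** (exponent - 127)
    return (-1.0) ** sign * window_start * (1 + mantissa / 2 ** 23)

sign, e, m = decompose(6.5)
# 6.5 lies in the window [4, 8) and sits 62.5% of the way in,
# so the offset is 0.625 * 8388608 = 5242880
print(sign, e, m)               # 0 129 5242880
print(reconstruct(sign, e, m))  # 6.5
```

The bias of 127 and the implicit leading 1 appear here as the `- 127` and the `1 +` in `reconstruct` — the same details the formulas gloss over.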

There are a few details glossed over — the bias in the exponent and the assumed digit in the mantissa are in the provided formulas, but the reason for them isn’t as clearly spelled out as it would be for the “classic” explanation. If you want a go at the traditional classroom lecture on the topic, there’s one below.

We’ve talked about floating point representations and their effect on missiles. There was a time when you hated to use floating point because it was so expensive in either dollars or CPU time, but these days even a soldering iron controller can do relatively fast math with floats.

Filed under: Software Development, software hacks

Five more reasons why you should download the Azure mobile app

The content below is taken from the original (Five more reasons why you should download the Azure mobile app), to continue reading please visit the site. Remember to respect the Author & Copyright.

This post was co-authored by Ilse Terrazas Ortega, Program Manager, Azure mobile app

You may have already heard about the Azure mobile app at the Build conference back in May 2017. The app lets you stay connected with Azure even when you are on the go. You can read more details in our launch blog post from May.
Over the last few months, we have been working closely with our customers to improve the Azure mobile app. And today, we are excited to share five more reasons why the Azure app is a must-have.

1. Monitoring resources

The Azure mobile app allows you to quickly check the status of your resources at a glance. Drill in to see more details such as metrics, the Activity Log and properties, and to execute actions.


2. Executing scripts to respond to issues

Need to urgently execute your get-out-of-trouble script? You can use Bash and now even PowerShell on Cloud Shell to take full control of your Azure resources. All of your scripts are stored on CloudDrive to use across the app and the portal.


3. Organizing resources and resource groups

Have a lot of resources? No problem, you can favorite your most important resources across subscriptions and keep them in your Favorites tab for easy access.

Start creating your Favorites list now – you can do it from the resource view or directly from the resources list tab.


4. Resource sharing

Tired of sending screenshots to your coworkers to help them find a resource? Now you can share a direct link to the resource via email, text message or other apps with the click of a button.

5. Tracking Azure Health incidents

The Azure mobile app can even help you track Azure Health incidents. Just scan the QR code from the portal and track the incident from your phone.

Download the preview app today and let us know what you’d like to see next in the feedback forum. Keep an eye out for updates and follow @AzureApp on Twitter for the latest news.

Azure Analysis Services adds firewall support

The content below is taken from the original (Azure Analysis Services adds firewall support), to continue reading please visit the site. Remember to respect the Author & Copyright.

We are pleased to introduce firewall support for Azure Analysis Services. By using the new firewall feature, customers can lock down their Azure Analysis Services (Azure AS) servers to accept network traffic only from desired sources. Firewall settings are available in the Azure Portal in the Azure AS server properties. A preconfigured rule called “Allow access from Power BI” is enabled by default so that customers can continue to use their Power BI dashboards and reports without friction. Permission from an Analysis Services admin is required to enable and configure the firewall feature.

Azure Analysis Services firewall support

Any client computer can be granted access by adding its IPv4 address, or an IPv4 address range, to the firewall settings. It is also possible to configure the firewall programmatically by using a Resource Manager template along with Azure PowerShell, Azure Command Line Interface (CLI), Azure portal, or the Resource Manager REST API. Forthcoming articles on the Microsoft Azure blog will provide detailed information on how to configure the Azure Analysis Services firewall.
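As a rough sketch only — take the exact property names and API version from Microsoft’s documentation, and note that the server name and addresses below are invented — the firewall settings in a Resource Manager template for an Azure AS server might look along these lines, with `enablePowerBIService` presumably corresponding to the “Allow access from Power BI” rule mentioned above:

```json
{
  "type": "Microsoft.AnalysisServices/servers",
  "apiVersion": "2017-08-01",
  "name": "myaasserver",
  "location": "West US",
  "sku": { "name": "S0" },
  "properties": {
    "ipV4FirewallSettings": {
      "firewallRules": [
        {
          "firewallRuleName": "office-range",
          "rangeStart": "203.0.113.0",
          "rangeEnd": "203.0.113.255"
        }
      ],
      "enablePowerBIService": true
    }
  }
}
```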

Submit your own ideas for features on our feedback forum and learn more about Azure Analysis Services.

A mini version of the Commodore 64 is coming in 2018

The content below is taken from the original (A mini version of the Commodore 64 is coming in 2018), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s hard to deny the popularity of Nintendo’s retro mini systems. After all, demand far outstripped supply for the mini version of the original console, and the same is expected to happen for today’s SNES release. It’s not a surprise, then, that other companies are getting in on the action. Retro Games is launching a mini version of the 1982 computer Commodore 64 called the C64 Mini. It will be available in early 2018, with a price point of $70.

The C64 Mini, which is half the size of the original version, will come with 64 preinstalled licensed games, including California Games, Speedball 2: Brutal Deluxe and Paradroid. You can find a more complete list of games on their website.

It comes with a wired joystick and a charging cable, and connects to your TV via HDMI, but you can also use any standard PC USB keyboard to interface with it. The C64 Mini will have a save-game capability and filtering options such as CRT, pixel perfect and scanline emulation. You can upgrade the console’s firmware via a USB flash drive. Retro Games is also planning a full-sized version of the C64 for late 2018.

Via: EuroGamer

Source: Retro Games

trid (2.24.20171001)

The content below is taken from the original (trid (2.24.20171001)), to continue reading please visit the site. Remember to respect the Author & Copyright.

TrID is a utility designed to identify file types from their binary signatures.

Fintech startup Curve partners with accounting software Xero to make filing expenses ‘frictionless’

The content below is taken from the original (Fintech startup Curve partners with accounting software Xero to make filing expenses ‘frictionless’), to continue reading please visit the site. Remember to respect the Author & Copyright.

Curve, the London fintech startup that lets you consolidate all your bank cards into a single card and track your spending, has partnered with accounting software Xero to remove much of the friction involved in filing expenses. The move is part of the newly-launched ‘Curve Connect’ feature that will see Curve connect to a growing list of third-party apps and services to make managing your money easier.

Specifically, the Xero feature gives you the option to connect the Curve app to Xero so that spending on your Curve card (and therefore any of the underlying cards you’ve linked Curve to) can be automatically added to the accounting software without the need to enter each expense manually.

Receipts you’ve captured via the Curve app are included too, and — coupled with the fact that Curve also controls the entry that appears on your bank statement(s) — expenses can also be reconciled in a more automated fashion.

In a call, Curve founder and CEO Shachar Bialick said that the new Xero integration makes filing expenses “frictionless,” which is quite a big deal, as data entry and transaction reconciliation are things bookkeepers, accountants and business owners spend far too much time on. He conceded, however, that there are other solutions on the market that are also attempting to solve this problem, such as Accel-backed fintech startup Soldo, but reckons none offer the flexibility of Curve.

That’s because Curve is agnostic regarding the underlying bank account you choose to spend from, which in turn significantly broadens the number of bank accounts that are able to directly feed transaction data into Xero. He also said the integration with Xero is deeper than other third-party support because Xero made its internal rather than public API available to Curve, the same API used by the Xero app itself.

Meanwhile, as well as making Curve that bit more useful for sole traders, freelancers and other business owners, the Xero feature should also help the startup with user acquisition. It currently counts over 75,000 sign-ups, with Curve cardholders having spent almost £70 million in over 100 countries, while Xero itself has over a million subscribers, many of whom will find utility in Curve. It may also pan out that accountants themselves encourage clients to use Curve, since doing so would make their lives easier, too.

BYOD might be a hipster honeypot but it’s rarely worth the extra hassle

The content below is taken from the original (BYOD might be a hipster honeypot but it’s rarely worth the extra hassle), to continue reading please visit the site. Remember to respect the Author & Copyright.


Security, compatibility, control… we enter another world of pain


I have a confession: I’ve fallen out of love with Bring Your Own Device.

Over the years, I’ve worked with, and administered, a number of BYOD schemes. I’ve even written positive things about BYOD.

After all, what was not to love? Users provided the mobile equipment, and the company didn’t need to worry about maintaining the kit, while at the same time being able to treat the devices like company property and manage them and their content securely.

Just four years ago, Gartner reckoned by 2017 half of employers would be leaning on staff to supply their own smartphones or tablets. Somehow, this would let us deliver all kinds of business apps at the touch of a screen. Things like self-service HR or mobile CRM.

BYOD was the most “radical change to the economics and the culture of client computing in business in decades,” Gartner reckoned. Among the benefits were said to be new mobile workforce opportunities, increased employee satisfaction and – ahem – reducing or avoiding costs.

Some ludicrous statements started being made: BYOD had become a critical plank in attracting millennials – a generation addicted to mobiles and social media – to your place of work. If you didn’t have a BYOD programme and the competition did, well, guess where that potential new hire wearing the chin thatch and lumberjack shirt would choose to work.

And after all that I’ve come out at the end asking why on earth would anyone bother?

The kit belongs to the user

On the face of it, users owning the kit is a great idea. When they sign up to the scheme they’re agreeing that the equipment is their responsibility. It’s up to them to have a warranty that’ll get it fixed if it breaks. If it doesn’t work, that’s their problem. Well worth the price we paid to help fund the kit.

Except it doesn’t work like that. Unless they’ve paid for a stonkingly expensive maintenance contract their kit will likely be on a collect-and-repair scheme, which means that if it exudes blue smoke (or simply goes silent on them) they’re without it for a few days while the vendor wrangles with it, bangs it with a hammer, and so on. So what do they do in the meantime? At the very least you’ll need to have a small stock of spare kit to help out users whose kit has turned up its toes… and of course that kit will be unfamiliar to them, won’t have their favourite applications, and so on.

And even when the equipment is alive, this doesn’t mean it won’t get sick once in a while. Even my own kit has a bit of a hiccup sometimes… refuses to acknowledge the Wi-Fi network, decides it doesn’t know how to access the networked fileshare that it was perfectly happy with yesterday, and so on. And your service desk will have the same problem from time to time – a user with a piece of kit the service desk staff don’t know very well (if at all) which is behaving oddly and takes an inordinate amount of effort to support.

And if you’re thinking: “Ah, that isn’t my problem – it’s up to the user”… well, can you really justify making the user figure out their own issues? They may not even be able to diagnose a problem without help from your teams. In reality then, it doesn’t work.

So even if you’ve saved money by contributing to the purchase of a BYOD device instead of buying a corporate system, you may be starting to uncover costs you weren’t anticipating.

Connecting it to the network

The next question is how you give the users connectivity into your systems. Connecting stuff you don’t own into the corporate network is a security nightmare – you absolutely don’t want to hook it in directly, because one outdated anti-malware package can wreak havoc with your world. So you have a number of options.

First is the concept of a “quarantine” VLAN. The idea’s simple: when anything accesses the network for the first time in a session, the infrastructure puts it in a VLAN that can’t see much – generally it can’t see anything but the internet and a server that deals with network admission. The admission server won’t let the device join the proper LAN unless it’s convinced that the device’s OS is up-to-date with patches, that it’s running a suitable anti-malware package, and that the latter is also current with regard to its patches and virus signature files. Now, although it’s a simple idea it’s also relatively complex to implement and has a non-trivial cost: so unless your BYOD world is extensive, it may not be worth it.
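The admission decision itself is straightforward to state. As an illustrative sketch of that logic only (the VLAN numbers and posture fields are invented for this example; real network admission control products implement this at the network layer, not in application code):

```python
from dataclasses import dataclass

QUARANTINE_VLAN = 99   # can reach only the internet and the admission server
CORPORATE_VLAN = 10    # the "proper" LAN

@dataclass
class DevicePosture:
    os_patched: bool             # OS up to date with patches
    av_running: bool             # suitable anti-malware package present
    av_signatures_current: bool  # AV patches and signature files current

def assign_vlan(device: DevicePosture) -> int:
    """Admit healthy devices to the LAN; leave everything else quarantined."""
    if device.os_patched and device.av_running and device.av_signatures_current:
        return CORPORATE_VLAN
    return QUARANTINE_VLAN

print(assign_vlan(DevicePosture(True, True, True)))   # 10
print(assign_vlan(DevicePosture(True, True, False)))  # 99
```

The check is trivial; the cost lies in gathering those posture facts reliably from kit you don’t own — which is the point of the paragraph above.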

An alternative is to decide that anything BYOD needs to stay outside the network completely, and act simply as a dumb terminal to the corporate system. You generally achieve this using some kind of virtual desktop à la Citrix or VMware. Again this is non-trivial and not cheap: it needs hardware, software, knowledge and maintenance. Getting the kit to talk to the network is non-trivial too, then.


Migrating the Smart Way, with Azure

The content below is taken from the original (Migrating the Smart Way, with Azure), to continue reading please visit the site. Remember to respect the Author & Copyright.


The cloud era is gaining speed, and market projections look very healthy for years to come. This growth, and the enthusiasm it’s creating for cloud technology, is pushing organizations that until very recently were resistant to start their own cloud initiatives. This post is about how your organization can migrate the smart way with the Azure cloud.
Unfortunately, many firms think of a cloud-migration effort as just another IT project (similar to what was previously done to install new hardware and software). As a result, they are making costly missteps for three key reasons:
1. Lack of training
2. Not using cloud architecture best practices
3. Not taking cost control and monitoring seriously from the start
Consider the following scenario:
Susan is the IT director for Spry Maneuvers, a successful, mid-level logistics firm. SpryM specializes in moving difficult-to-ship freight (such as power plant gas turbines) across the globe.
SpryM is heavily dependent upon a complex database system that sprawls across a large number of virtual machines (VMs) hosted on-premises.
The business is growing, so there is always a need for more storage capacity, more computing power, and greater performance from its vSphere production and development environments.
Susan is an experienced and knowledgeable manager, and she knows how to address this need through budgeting, hardware provisioning, software licensing, patching, and all the other items required to engineer and maintain a busy data center.
Recently, however, the CIO gave Susan a new directive: For the coming fiscal year, there will be no budget for further on-premises data center investment. Every new initiative must be built on Azure and every existing asset must be moved there too, ASAP.
Under pressure, Susan and her team decided to lift and shift the database farm to Azure by building VMs and simply moving the workloads onto them, just as they’d done countless times before within their data center.

This seemed easy enough, and after a period of adjustment to the Azure portal interface, SpryM’s IT team began building more and more VMs and moving workloads to over-provisioned F-series virtual machines.
Everything seemed fine until the bill came due (tens of thousands of dollars for only a few weeks of run time) and end-users began complaining about performance issues.
The CIO wondered:

  • Wasn’t running things in Azure supposed to be cheaper?
  • Wasn’t the cloud supposed to be faster and more efficient?
  • How could my team let me down?

SpryM made several critical (and all too common) mistakes from the start:
1. They assumed that the cloud was nothing more than an extension of their data center, as currently configured.
2. Staff requests for training (from those who recognized the challenge) fell on deaf ears—too expensive, not necessary.
3. The cloud project was too ambitious. There was no modest proof of concept in which a low-impact workload was relocated to Azure to help IT gain experience and confidence.
4. No one monitored costs.
How could they have done things better?

Training

Cloud Academy offers excellent courses on Azure such as Architecting Microsoft Azure Solutions–70-534 Certification Preparation.
This learning path covers the following key areas:

  • Designing Azure Resource Manager (ARM) networking
  • Securing resources
  • Designing an application storage and data access strategy
  • Designing advanced applications
  • Designing Azure Web and Mobile Apps
  • Designing a management, monitoring, and business continuity strategy
  • Architecting an Azure Compute infrastructure

These are the skills that SpryM’s IT team needed to make the right choices. In addition to courseware for individuals, Cloud Academy offers team-optimized training that could have put SpryM on the correct path.

Azure Solutions Architecture

Microsoft provides a wealth of guidance to help you intelligently and effectively apply the platform’s offerings to your needs. This should be one of the first places you visit and examine when you’re building with Azure.

Azure Pricing Calculator

SpryM’s IT team was unprepared for the high runtime cost of the type of VM that they chose. This embarrassing surprise was avoidable—they just needed to use the Azure Pricing Calculator.

Using the Pricing Calculator, they would’ve seen that, when compared to running SQL Server on an always-on virtual machine, using an Azure SQL database is very cost effective.
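As a back-of-envelope illustration of that comparison — the hourly rates below are made-up placeholders, not real Azure prices; always take actual figures from the Pricing Calculator — the always-on-VM versus managed-database question comes down to simple arithmetic:

```python
HOURS_PER_MONTH = 730  # the usual monthly billing approximation

# HYPOTHETICAL hourly rates, purely for illustration:
VM_RATE = 0.50       # assumed $/hr for an over-provisioned F-series VM
SQL_DB_RATE = 0.25   # assumed $/hr for a suitably sized Azure SQL database tier

vm_monthly = VM_RATE * HOURS_PER_MONTH       # a VM left running is billed 24/7
sql_monthly = SQL_DB_RATE * HOURS_PER_MONTH

print(f"Always-on VM:  ${vm_monthly:.2f}/month")   # $365.00/month
print(f"Azure SQL DB:  ${sql_monthly:.2f}/month")  # $182.50/month
```

Even with invented numbers, the structure of the mistake is visible: an always-on VM accrues its full hourly rate every hour of the month, while a managed database tier sized to the actual workload does not have to.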
As you move workloads to Azure—whether you eagerly volunteered or were pushed by management—keep the combination of training, cloud-native solutions, and pricing foremost in mind and you’ll greatly increase your odds for success.

Watch: Early celebration allows Dan McLay to clinch Belgian race at the last possible moment

The content below is taken from the original (Watch: Early celebration allows Dan McLay to clinch Belgian race at the last possible moment), to continue reading please visit the site. Remember to respect the Author & Copyright.

Anthony Turgis caught just metres from the line

Dan McLay (Fortuneo-Oscaro) claimed just his second victory of the 2017 season, as he won the Tour de l’Eurométropole in an extraordinary final sprint.

An attacking finale had seen Anthony Turgis (Cofidis) escape from the small leading pack shortly before the flamme rouge, taking a decent lead into the finishing straight.

For a while it looked as if the Frenchman would hold on to take the win, but in the final hundred metres he began to seriously tie up, as McLay led the sprinters from behind.


Still, with 25 metres remaining and still in front, Turgis thought he had the win in the bag, raising his arms in celebration…before realising his mistake.

Turgis hadn’t bargained on the speed that McLay was carrying from behind, and the Brit was able to pass the Cofidis rider at the last possible moment, with Kenny Dehaes (Wanty-Groupe Gobert) also coming past to push Turgis down into third.

Those of you with good memories will know that McLay has previous when it comes to producing memorable sprint finishes, winning the 2015 GP de Denain in a mesmerising sprint that saw him weave his way through a densely packed field of riders in the final 150m.

Having picked up just his second victory of the year – his first having come way back in January at the Trofeo Palma – the 25-year-old will now conclude his 2017 season at Paris-Bourges on Thursday.

This may also be McLay’s final race with the Fortuneo-Oscaro team, with his contract coming to an end this season and the team expected to take more of a GC focus in 2018 with the arrival of Warren Barguil.

Local, LocalLow and Roaming folders in AppData on Windows 10 explained

The content below is taken from the original (Local, LocalLow and Roaming folders in AppData on Windows 10 explained), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Windows 10 AppData folder includes the following sub-folders – Roaming, Local & LocalLow. This post explains what they are and their functions.

Almost every program you install on your Windows 10 PC creates its own folder in the AppData folder and stores all its related information there. AppData or Application data is a hidden folder in Windows 10 that helps protect user data and settings from deletion and manipulation. To access it, one has to select “Show hidden files and folders” in the folder options.

One can directly paste the following in Windows File Explorer and hit Enter to open it:

C:\Users\<username>\AppData

Local, LocalLow and Roaming folders

When you open the AppData folder, you will see three folders:

  1. Local
  2. LocalLow
  3. Roaming

If a program wants to have a single set of settings or files to be used by multiple users, then it should use the ProgramData folder – but if it wants to store separate folders for each user, programs should use the AppData folder.

Let us see what the Local, LocalLow and Roaming folders are and what their functions are.

Local, LocalLow & Roaming folders

Each of these folders has been created by Microsoft intentionally for the following reasons:

  • Better performance during login
  • Segregation of application data based on usage level.

Local folder

The Local folder mainly contains data belonging to installed programs. The data in it (%localappdata%) cannot move with your user profile, since it is specific to one PC and often too large to sync with a server. For example, Internet Explorer’s temporary files are stored in the Temporary Internet Files and Cookies folders. There is also a Microsoft folder where you can find the history of Windows activities.

LocalLow folder

The LocalLow folder likewise contains data that cannot move, but it also has a lower level of access. For example, a web browser running in a protected or safe mode will only be able to access data in the LocalLow folder. Because this data does not roam, it is not present on a second computer, so applications that depend on it may fail there.

Roaming folder

The Roaming folder contains data that is readily synchronized with a server. Its contents move with the user’s profile from PC to PC: when you’re on a domain, you can log into any computer and your favorites, documents and so on follow you. For instance, if you sign into a different PC on a domain, your web browser favorites or bookmarks will be available. This is one of the main advantages of a roaming profile in a company: because the profile data is copied to the server, the user’s custom data is always available regardless of which system the employee uses.

In short:

ProgramData folder contains global application data that is not user-specific and is available to all users on the computer. Any global data is put in here.

AppData folder contains user-specific preferences and profile configurations and is further divided into three subfolders:

  1. Roaming folder contains data that can move with the user profile from one computer to another.
  2. Local folder contains data that cannot move with your user profile.
  3. LocalLow folder contains low-level access data, e.g. the temporary files of your browser when it runs in a protected mode.
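The layout above can be sketched in a few lines of Python. This builds the conventional paths for a given profile folder using `ntpath` (so the backslash separators come out right on any platform); the username and the default ProgramData location are assumptions based on a standard Windows installation, not values read from a live machine.

```python
import ntpath

def appdata_paths(user_profile):
    # Conventional locations of the three AppData subfolders for one
    # user, plus the machine-wide ProgramData folder (assumed default
    # layout of a standard Windows install).
    return {
        "Roaming": ntpath.join(user_profile, "AppData", "Roaming"),
        "Local": ntpath.join(user_profile, "AppData", "Local"),
        "LocalLow": ntpath.join(user_profile, "AppData", "LocalLow"),
        "ProgramData": r"C:\ProgramData",
    }

# For a hypothetical user "Alice":
for name, path in appdata_paths(r"C:\Users\Alice").items():
    print(f"{name:12} -> {path}")
```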

Hope this helps.

Google to let anyone add to Street View, starting with Insta360’s Pro camera

The content below is taken from the original (Google to let anyone add to Street View, starting with Insta360’s Pro camera), to continue reading please visit the site. Remember to respect the Author & Copyright.

Google has a new program called “Street View ready” which will make it possible for anyone with the right hardware to contribute to its Street View imaging database, typically assembled using Google’s official 360-degree camera-toting Street View cars. The first camera officially designated ‘Street View ready’ is Insta360’s Pro camera, the 8K 360 camera which captures still images at up to 5 frames per second, and which has real-time image stabilization built-in.

Google will make it possible to control the Insta360 Pro directly from within the Street View app, and will also allow the device to capture photos and videos and upload them from the official Insta360 Stitcher software. The Pro’s 5 fps 8K shooting mode is a new feature being added to the camera via a software update tailor-made for capturing Street View content, and Insta360 will also ship a new USB hardware accessory that attaches GPS data to captured imagery automatically.

This sounds like a very cool way to let adventurous individuals contribute to the Google Street View imagery database, and it’ll help Google cover territory not necessarily easily reached by its own teams, including terrain accessible to specific organizations who want to document it for research purposes. Google has accepted limited third-party contributions in the past, including from Faroe Islanders with its "Sheep View" project, but this could cast a far wider net – provided, of course, contributors are willing to pony up for the expensive Insta360 Pro hardware.

The camera retails for $3,499, and it’s the only hardware currently ‘Street View ready’-certified by Google. But Google will also be making it available as a loaner to qualified individuals and organizations, which should put it more within reach.