Dual SIM Hack For Single SIM Slot Phones.

[RoyTecTips] shows us an ingenious hack which turns a single-SIM-slot phone into a fully functioning dual-SIM phone. All that's needed for this hack is a heat gun, solvent, a micro SD card, a nano SIM, and some glue. The trick is that the phone's SIM reader sits on the backside of the SD-card slot. Through some detailed dissection and reconstruction work, you can piggy-back the SIM on the SD card and have both work at the same time.

Making the SD/SIM Franken-card is no picnic. First, file away the raised bottom edge of the micro SD card, then file down the side until the writing is no longer visible. Next, blast your nano SIM card with a heat gun until the plastic melts away, leaving just the chip. Then mark where the SIM card's brains go and glue the chip in place. Turn the phone on and, hey presto, you now have a dual-SIM phone while keeping your SD storage.

This hack is reported to work on many Samsung phones that end in “7” and some that end in “5”, along with some 8-series phones from Huawei and Oppo clones of the Samsungs. Since you’re only modifying the SIM card, it’s a fairly low-risk hack for a phone. Combining two cards into one is certainly a neat trick, almost as neat as shoe-horning a microcontroller into an SD card. We wonder how long it will be before we see commercial dual SIM/SD cards on the market.

KFC Winged Aircraft Actually Flies

[PeterSripol] has made an RC model airplane, but instead of using normal wings he decided to try getting it to fly using some KFC chicken buckets instead. Two KFC buckets in the place of wings were attached to a motor which spins the buckets up to speed. With a little help from the Magnus effect, this creates lift.
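For a sense of the physics, the lift from a spinning cylinder is often estimated with the Kutta–Joukowski theorem. This is an idealized model we include for illustration; none of these symbols come from [PeterSripol]'s video:

L' = \rho \, V \, \Gamma, \qquad \Gamma \approx 2\pi R^2 \omega

Here L' is the lift per unit span, \rho the air density, V the forward airspeed, R the bucket radius, and \omega its spin rate. Real buckets fall well short of this ideal.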

Many different configurations were tried to get this contraption off the ground. They eventually settled on a dual-prop setup, with each prop spinning counter to the other, for forward momentum. This helped negate the gyroscopic effect of the spinning buckets producing the lift. After many failed build-then-fly attempts, they finally got it in the air. It works, albeit not too well, but it did fly and was controllable. Perhaps with a few more adjustments and a bit of trial and error, someone could build a really unique RC plane using this concept.

Apparently Time IS Money

Some people like to tweak cars. Some like to overclock PCs. Then there are the guys like [Jack Zimmermann] who are obsessed with accurate time. He’s working on a project that will deploy NTP (Network Time Protocol) servers in different African countries and needed small, cheap, energy-efficient, and accurate servers. What he wound up with is a very accurate setup for around $200. Along the way, he built some custom hardware, and hacked a computer to sync to the GPS clock reference.

His original attempt used a Raspberry Pi 3. However, its network adapter isn't the fastest possible, both because it is 100 Mbps and, primarily, because it is connected via the USB bus. The network latency these limitations introduce makes it difficult to serve accurate time.

His solution includes an Odroid C2. For $50, it is a very capable computer with four cores and Gigabit Ethernet, and it can even use eMMC storage, which is faster than the usual SD card. You can still use a conventional SD card, though, if you prefer.

For a time reference, [Jack] used a Trimble GPSDO (GPS-disciplined oscillator) that outputs a PPS (pulse per second) signal and two 10 MHz signals. These are locked to the GPS satellite clocks, which are very accurate, and [Jack] says the timing is accurate to within 50 ns. Unfortunately, the pulse from the Trimble board is too short to read, so he designed a pulse-stretching circuit.
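For context, once a PPS signal reaches the host, an NTP daemon typically consumes it through its reference-clock drivers. Here is a minimal sketch of such a setup, assuming the classic ntpd NMEA (type 20) and PPS (type 22) drivers rather than anything specific to [Jack]'s build:

# /etc/ntp.conf (illustrative only, not [Jack]'s configuration)
# Type 20 driver: NMEA GPS provides the coarse time-of-day
server 127.127.20.0 minpoll 4 prefer
fudge  127.127.20.0 refid GPS

# Type 22 driver: kernel PPS discipline marks the exact second edges
server 127.127.22.0 minpoll 4 maxpoll 4
fudge  127.127.22.0 refid PPS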

Instead of trying to discipline the existing clock on the Odroid to the GPS reference, [Jack] removed the crystal and associated components completely. He then used a frequency generator chip to convert the 10 MHz GPS signal to the 24 MHz clock the Odroid expects (a 12/5 ratio). He has plans to use the extra outputs from the chip to drive the Ethernet and USB clocks, too, although their absolute accuracy is probably not that critical.

We’ve seen NTP clocks before that can consume this kind of time reference. If you want to know more about the Odroid C2, we talked about them at length last year.

Google has taught an AI to doodle

Hot on the heels of the company’s art and music generation program, Project Magenta, a pair of Google researchers have taught a neural network to sketch simple drawings all on its own.

The researchers relied on data from Quick, Draw!, an app that asks users to draw a simple item and then guesses at what it could be. They used 75 classes of items (say owls, mosquitos, gardens, or axes), each of which contained 70,000 individual examples, to teach their recurrent neural network (RNN) to draw them itself in vector format.

"We train our model on a dataset of hand-drawn sketches, each represented as a sequence of motor actions controlling a pen: which direction to move, when to lift the pen up, and when to stop drawing," David Ha, one of the researchers, wrote in a recent blog post. The team also added noise to the data so that the model can’t directly copy the image, "but instead must learn to capture the essence of the sketch as a noisy latent vector."

That is, the AI isn’t simply throwing together bits and pieces of images from its memorized dataset, it actually learned how to draw these objects. To prove this, the researchers presented a model that had been taught to draw pigs with a number of purposefully incorrect inputs. "When presented with an eight-legged pig, the model generates a similar pig with only four legs," Ha wrote. "If we feed a truck into the pig-drawing model, we get a pig that looks a bit like the truck."

What’s more, the AI can merge a pair of disassociated images to create a series of uniquely intersecting hybrids. It’s the same basic idea as the pig-truck above but able to produce a large number of similar but unique designs. This feature could be of great use to advertisers, graphic designers and textile manufacturers once fully developed. Ha also figures that it could be used as a learning aid for people teaching themselves to draw.

Via: Venture Beat

Source: Google

Amazon offers its voice-recognition smarts to other companies

Amazon’s Alexa has become the flag-bearer for AI assistants. Not only does she possess an exhaustive list of useful skills, but she’s also started finding new homes in everything from phones to cars, watches, little robots and even refrigerators. There’s a reason Amazon’s Echo and Echo Dot speakers are particularly suited for ordering Alexa around at home, though. They both feature a fancy far-field, seven-microphone setup and audio processing smarts that help Alexa understand your muffled commands shouted from the downstairs bathroom. Today, Amazon’s announced it’s releasing this mixture of hardware and software in a new development kit, so other companies can build Alexa prisons that recognize you want to add mixed spices to your shopping list, and not listen to a Spice Girls mix (liar).

The seven microphones take care of omnidirectional listening, while Amazon’s proprietary noise reduction, echo cancellation and wake-word recognition software turns your mumbling into intelligible commands. For reference, Google Home uses just two microphones, while Lenovo’s Smart Assistant (with Alexa) features eight. Amazon’s development kit isn’t available to workshop hobbyists, instead being reserved for "commercial device manufacturers through an exclusive, invite-only program." Still, between this and Intel’s smart speaker reference design, companies pretty much have everything they need to quickly develop their own Echo-like hardware and give consumers even more ways to ask Alexa to play that Spice Girls mix already.

Source: Amazon

Internet VPN or MPLS for branch office IP phone communication?

Many businesses with branch offices that have IP-enabled phones must decide what type of circuit medium to use for their communication to the corporate headquarters site.

Two of the most common choices are an MPLS circuit and an internet VPN. Both solutions have their pros and cons, and the best choice depends on your business requirements. Speed, quality of service (QoS), security, and cost are the key factors to consider when making this decision.

Pros and cons of an internet VPN

A significant advantage of using an internet VPN for communication is cost. In most cases, a branch site can use its existing internet connection for communication back to headquarters. A 10 Mbps internet circuit usually costs much less than a 10 Mbps MPLS circuit, which can encourage a business to purchase more bandwidth for its branch sites.

More providers offer internet connections than MPLS service across the United States. Cable modem and wireless internet providers can also supply branch-office connectivity, making internet access more widely available across locations. Also, IPsec and SSL VPNs are very secure, provided they are configured properly and the devices establishing the VPN are up to date on patches.

The major disadvantage of using an internet VPN is performance. Internet traffic has higher latency and jitter than an MPLS circuit. You are susceptible to more jitter because of routing-path changes between the various carriers across the internet, which can also degrade voice quality.

Further, QoS configuration options are limited with an internet VPN: your traffic crosses multiple carrier networks, and QoS markings are difficult or impossible to honor across them. You can only configure QoS across the first one or two hops, on the equipment you manage. After that, it is all best-effort communication. Because the internet is a best-effort medium, no service provider can offer a service-level agreement (SLA) that guarantees a specific performance for your traffic.

Pros and cons of MPLS

A key benefit of MPLS is that you can classify your data into different categories, each with a specific priority, and your carrier will honor these markings end to end. Since voice traffic is almost always prioritized over other traffic, this leads to better performance for the voice traffic.

Most organizations classify their traffic so that real-time voice and video services are given a much higher preference during times of congestion. This is because of the sensitive nature of voice and video traffic during times of high traffic utilization.
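To make that concrete, a typical voice-priority policy on a managed MPLS edge router looks something like the following Cisco-style sketch. The class names and bandwidth percentage are illustrative assumptions, not recommendations for any particular deployment:

! Match voice media marked Expedited Forwarding (DSCP EF)
class-map match-any VOICE
 match dscp ef
!
! Give voice a strict-priority queue; everything else shares what remains
policy-map WAN-EDGE
 class VOICE
  priority percent 30
 class class-default
  fair-queue
!
interface GigabitEthernet0/0
 service-policy output WAN-EDGE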

MPLS service providers offer SLAs for your traffic and can reimburse you when they do not meet the requirements. MPLS networks are also easier to troubleshoot, because only one carrier's network is involved. Another benefit of having only one carrier is reduced jitter and delay. Voice quality is very sensitive to jitter, and since MPLS provides a more consistent hop path, you do not need to worry about the frequent routing and switching changes that occur on the internet.

MPLS carrier traffic is tagged and segmented from other users on the specific carrier’s network. This makes MPLS a secure medium. For additional security, some organizations will decide to encrypt their traffic across it.

One significant disadvantage of MPLS is its cost. It is more expensive than standard internet connections—sometimes four to five times the cost of an internet circuit—and many managers have trouble justifying this additional cost.

Another disadvantage is that MPLS is not available in all locations. It can be a challenge to get the circuits installed at certain remote locations.

Deciding between MPLS and internet VPN

Since internet circuits can cause a variety of voice-quality issues, it is best not to use an internet VPN for branch sites that run call-center services or have employees who require the best possible voice quality. For those sites, it is better to pay the additional cost and run traffic across an MPLS network with end-to-end QoS. I've even monitored stable IP voice communication for long periods of time on MPLS networks without QoS configured on the circuit, but those circuits had less than 50 percent average utilization.

If the employees at the remote branch site don’t use their phone services on a regular basis and consistent quality is not critical, it may be worth it to run it across a high-bandwidth internet circuit and save the additional cost. I’ve observed IP voice communication across the internet to be tolerable with only minor interruptions throughout the day. If you decide to use an internet VPN, it is important to notify end users there could be issues with voice quality.

The key point before deciding between internet VPN and MPLS is to perform a cost-benefit analysis and to make sure your end users at the remote and corporate sites provide input for the analysis. It's crucial to get the end users' perspective to find out how much they plan to use their IP phones and how critical it is to receive consistent quality. If the branch staff will communicate with customers, it is important to get their input on the need for consistent voice quality.

When performing the analysis, make sure each stakeholder rates how much they value good voice quality. A low rating on the need for voice quality may encourage the use of an internet VPN, and a high rating should support the use of MPLS.

Also, obtain multiple quotes for internet and MPLS services so cost differences can be compared.

Ensuring all stakeholders take part in the process will help you make the correct decision that is based on not only cost, but also user input.

Char.gy taps into lampposts to charge your electric car

If you have a garage with a power socket, an electric car makes an awful lot of sense. If you park on the street, however, the infrastructural challenge of keeping your electron-powered vehicle topped up becomes complicated enough that perhaps sticking to driving on squished dinosaurs makes sense for a while longer. Until Char.gy comes along, that is.

“Seventy-two percent of drivers in London don’t have off-street parking,” says Richard Stobart, CEO of the London-based Char.gy. “If you want an electric car, not being able to charge at home is a major disincentive.”

So, in a world where people want to drive electric cars, cities are trying to clean up the air and car makers want to sell electric cars, how do you take on the not insignificant challenge of charging car batteries in a dense and fast-moving city such as London? You tap into other, already existing infrastructure, of course.

“People want to charge their cars while they are doing something else, preferably when they are parked at home and asleep in their beds,” Stobart points out, and offers a solution. The company has developed charge points that connect to existing street furniture: lampposts. Makes sense: the cables are already there, the local government owns them, and once you’ve gone that far, you may as well take the next couple of logical steps.

“Our vision is that there will be several lampposts near your home with a charge-point that you can park near to charge overnight,” Richard explains. “We are developing a platform to manage the charging too, using a business model not dissimilar from mobile phone contracts. Users pay a monthly fee for access to the charge points and a free allocation of charge.”

The early-stage company is running a pilot program with Richmond Council in London, installing a number of units in Barnes and Kew, with the first charging points appearing later this summer.

“We are trying to create a win-win solution for everyone, not least local government,” says Stobart, addressing part of the challenge of building a marketplace with multiple players. “We are making it seamless and effectively free for the councils and solving the problem of half of all public charge points being out of service.”

The 10 most common myths for Amazon Web Services

From stories about having no control over data to out-of-control costs, there are many myths in circulation about cloud computing and AWS. Whether you’re a professional working in the cloud or a business considering cloud migration, you’ll want to be able to distinguish the myths from the realities. Today, we’ll separate fact from fiction in some of the most common myths in the world of Amazon Web Services.

The genesis myth

It’s a common misconception that AWS was born as a byproduct of massive Amazon.com infrastructure. The story goes something like this: to support its growth, Amazon.com built out more infrastructure than it needed, and to make the most of those resources, Jeff Bezos and his team decided to rent out the free capacity. The rest is history.

Is it true? Well, not exactly. It’s true that Amazon.com needed an infrastructure it could rely on, but the rest of the story is a myth. The infrastructure of Amazon Web Services was actually built intentionally, as a cloud service, from scratch.

Back in 2003, Benjamin Black was head of the engineering team at Amazon.com, tasked with finding a way to efficiently scale up Amazon’s infrastructure. The trouble was that IT wasn’t keeping up with the company’s rapid growth. Black worked closely with his manager, Chris Pinkham, and together they explored how abstraction and decoupling applications from the infrastructure could make it easier to manage. They supposed that such an approach would benefit not only Amazon but also other web services that were having the same problems. They realized that they could sell infrastructure as a product. After meeting with Jeff Bezos, who approved the project, Black and Pinkham pulled together a team that developed Elastic Compute Cloud (EC2), one of the first AWS products, released in 2006.

While the idea that such a massively successful business could be created by “accident” and without much planning is appealing, the reality conveys the opposite message: you can’t build your business without a clear plan for what you’re doing and where you’re headed.

Let’s look at some of the other common myths about Amazon Web Services.

Myth 1: Cloud will cost me a fortune

Have you calculated your on-premises setup costs? Don’t forget to add losses from not being able to handle high traffic, or the cost of maintaining massive infrastructure when you have little traffic on your website.

When you use the cloud, you avoid such unnecessary expenses and pay only for the resources you actually use. AWS applies a pay-as-you-go billing model. If your business scales up, you pay for the additional computing resources, which are always available. On the other hand, if business is slow the next month and you consume fewer resources, your bill will be lower.

This model means you avoid the huge up-front costs and investments of building your own infrastructure, and there are no charges beyond the services you actually use. When you compare the two figures, you will often find that the cloud actually saves you money rather than costing you a fortune.

Myth 2: I can only use the cloud for storage

Not true. Yes, there are cloud storage providers such as Box, Dropbox, Microsoft OneDrive, and Google Drive, but we’re not talking about those services today. What we’re referring to are IaaS (Infrastructure as a Service) cloud systems from providers like AWS, Microsoft Azure, and Google Cloud, and you can use them for all kinds of things.

For example, Amazon Web Services is used by thousands of business, government, educational, and other organizations to host their websites, build different types of apps, harness and analyze big data, organize research projects, and perform all kinds of other activities, including information storage.

Cloud computing is much more than data storage, and it’s good to know that you can take advantage of the benefits of such a system at any time.

Myth 3: I have less control over my data with the cloud

It’s a common belief that cloud servers are under the control of the service vendor and that there is not much you can do about it: your control is reduced to some basic setup while the “higher power” does everything else.

That’s not actually how things work. With a cloud setup, you have full control over your data in real time. Using simple monitoring mechanisms, you can find out everything you need to know about your instances. For example, you can see who launched an instance, from where, and how long it has been running, as well as which applications are running on that particular instance and which data was used.

Myth 4: Moving data to the cloud is all or nothing

No, you don’t have to migrate all of your data to the cloud at once. That would be madness! Instead, every sensitive and massive migration task should be carefully planned and executed in several phases. All you have to do is establish the rhythm that best fits your team and let them do the job.

If you need some help along the way, there are plenty of migration resources and partners to lean on.

Myth 5: Anyone can access my cloud data

We could rephrase this myth as “who knows who else has access to my cloud data?” The fear of intrusion and data spying is inevitable, and it’s one of the biggest obstacles when someone considers using a cloud infrastructure. However, such fears are unfounded. Do you think that Netflix, NASA, Capital One, and Airbnb would be willing to put their entire business at risk if just anyone could gain access to their data stored in AWS?

When your data is stored in the cloud, you have the absolute power to decide who can access your setup. You are the one who determines who will have access, as well as the level of permission. Technicians working as AWS network admins don’t have access to your instances, and even when they are troubleshooting, their access is quite limited. Knowing that, you can rest assured that you’re the only boss of your cloud universe.
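As an illustration of that control, nothing in AWS is reachable until you grant access through an IAM policy. Here is a minimal, hypothetical example that allows read-only access to a single S3 bucket; the bucket name is a placeholder:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyExampleBucket",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}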

Myth 6: I’m not in control of where my data is located

Again, not true. There is only one boss of your cloud setup, and that is you. That includes full control over where your data lives.

AWS has data centers all over the world, and you choose the data center you will use during initial setup. If you decide to use a data center in Ireland, your data will stay there unless you move it. No one else can do that.

Myth 7: It’s too hard to teach my team to use AWS

Introducing a new technology to a team isn’t always easy, but it’s certainly not impossible. As with any novelty, acceptance of cloud technology may not be immediate. But there will surely be a few enthusiasts and early adopters on your team who will advocate and support the change.

There are also many resources available to help your team learn AWS (and even earn some quite useful certifications along the way).

Myth 8: My on-premise setup is more secure than the cloud

Are you sure that your setup has a higher level of security than the cloud? Think again. Security is a top priority for AWS and the other cloud vendors, which do their best to keep the data on their servers safe from any illegal cyber activity.

IaaS infrastructure is designed to be bulletproof and resistant to common web security issues such as SQL injection, XSS, and CSRF. An army of engineers works daily to improve the security of your cloud-stored data, creating a flexible, automated, and secure computing environment for you to use.

Now, compare that extensive, carefully built system, finely tuned for more than a decade, to your local on-premises setup. Even if you have the best and most talented team, it is likely they will not be up to speed with every security update, or that they will eventually overlook something that causes a problem.

Myth 9: Cloud services don’t have all the certificates I need for my business

Another common misconception is that cloud services lack certifications and are therefore off limits to companies that must meet specific standards. You might be surprised to hear that most cloud providers are certified to an even higher level than you need.

You can use AWS Certificate Manager to obtain the certificates you need for your business, meeting compliance requirements, minimizing downtime of your services, and improving your search rankings. Beyond standard SSL/TLS certificates, you may have more specific requirements. For example, AWS has a program for government institutions and even complies with the FBI’s Criminal Justice Information Services Division (CJIS) standard, which is useful to know if you work with law enforcement agencies or have clients who require such compliance.

Myth 10: AWS is only for large companies

AWS is a highly scalable system. It can support large and demanding operations such as Amazon.com, Netflix, or banking systems, and it can also be used by small businesses and startups. Did you know that AWS has a startup program called AWS Activate? With this program, your small business can leverage services whenever necessary, and while you’re still small, your costs will be minimal.

Some of the world’s hottest startups have used AWS Activate to grow. Did you know that Slack, Airbnb, and Pinterest all started on AWS? You can start here as well and grow your business from scratch, just as they did.

Myths busted!

As the famous MythBusters would say, “Busted!” Now that we’ve revealed the truth behind some of the most common myths related to AWS, it will be easier for you to become an AWS pro, or at least sound like one.

Office 365 Data Governance Framework Spans Multiple Workloads

Compliance and Regulations

Given the somewhat litigious nature of today’s business world, there is no surprise in the number of compliance features Microsoft builds into products like Office 365. In fact, the breadth and depth of those features is one reason why I think Office 365 is more popular with large enterprises than its major competitor, Google G Suite.

But good as the Office 365 compliance features are, gaps still exist. Yammer is an example of a product that has weak compliance functionality. Teams and Planner are others.

Keep What You Need and Get Rid of the Rest

Microsoft’s tag line for data governance is that “you keep what you need and get rid of what you don’t”. Last week, Microsoft made new functionality available through the Security and Compliance Center to help tenants keep content that they need and remove what they do not want to keep. The new functionality comes in the form of classification labels and retention policies, both of which combine to give tenants different options to control how long content exists in mailboxes, sites, and other Office 365 locations.

You create classification labels under the Classifications section of the Security and Compliance Center. When ready, you publish sets of labels in label policies, which then show up as retention policies under the Data Governance section. That seems a tad confusing, but it all comes together in the framework. Think of it this way: labels are the way to control content at a precise, item-specific level. Retention policies offer broad-brush coverage of content at volume. Together, the mixture of specific and general control affords tenants flexibility in how they build a data governance strategy for the organization.

Best of all, the new framework is designed to work across Office 365, including Office 365 Groups. It is a big step forward and is in line with other projects to offer cross-workload functionality in content searches and Data Loss Prevention (DLP).

Office 365 Retention Policies

Since their first appearance in Exchange 2010, retention policies have let administrators configure and apply policies to on-premises and cloud mailboxes to help users control items through a mixture of system-controlled tags and personal tags. Actions specified in the tags control how long items are kept in the mailbox and what happens once their retention period expires.

Retention policies work well for Exchange and Microsoft has gained a lot of experience in how customers use retention policies to manage content since 2010. All of which leads to the introduction of Office 365 retention policies to deal with Exchange (mailboxes and public folders), SharePoint, OneDrive for Business, Skype (IM conversations), and Office 365 Groups.

This is Microsoft’s second version of multi-workload retention as they launched preservation policies in 2015 to control content stored in Exchange mailboxes and SharePoint and OneDrive sites. Any preservation policies that exist in a tenant are automatically upgraded to become retention policies that keep but do not remove content after the retention period expires.

Expanding Retention Policies to deal with Multiple Office 365 Locations

To make retention policies available to other Office 365 workloads, Microsoft has evolved and expanded the core principles behind Exchange retention policies. In doing so, they have had to drop some Exchange-specific features, like the ability to move items to archive mailboxes.

Losing the ability to archive items automatically is regrettable (but only for Exchange). On the upside, retention policies incorporate the ability to set in-place holds so that users cannot permanently remove items if those items come within the scope of a policy. The simplest kind of policy puts every item in a mailbox or site on hold, while more complex policies cover items that match queries or (for SharePoint and OneDrive for Business) items holding certain kinds of sensitive data, like social security numbers or other “personally identifiable information” (PII). The same sensitive data types used in Data Loss Prevention policies are supported for retention.

It is all very flexible, and best of all, these policies implement the same processing across all the supported workloads. One policy to rule them all is so much better than having to configure multiple policies that work differently across different applications. The introduction of service-wide retention policies is yet another example of how Office 365 is fast leaving its roots of “cloudified” on-premises products behind.
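As a sketch of what this looks like in practice, Office 365 retention policies can be created through Security and Compliance Center PowerShell. The cmdlet names below exist in that module, but treat the exact parameters as an illustrative assumption and check the current documentation before relying on them:

# Create one policy covering mailboxes, sites, OneDrive, and Office 365 Groups
New-RetentionCompliancePolicy -Name "Default 7-Year Retention" `
  -ExchangeLocation All -SharePointLocation All `
  -OneDriveLocation All -ModernGroupLocation All

# Keep matching content for seven years, then remove it
New-RetentionComplianceRule -Policy "Default 7-Year Retention" `
  -RetentionDuration 2555 -RetentionComplianceAction KeepAndDelete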

Like Exchange Retention Policies but Better

Anyone who has ever worked with Exchange retention policies will find Office 365 retention policies familiar. However, some significant differences exist:

Table 1: Comparing Exchange and Office 365 Retention Policies

Apply to
  Exchange retention policies: Exchange mailboxes (including shared mailboxes).
  Office 365 retention policies: Exchange mailboxes, Office 365 Groups, SharePoint document libraries, OneDrive for Business sites, Skype for Business IM, and public folders. (Microsoft says that Yammer and Planner will be supported soon.)

Assignment
  Exchange retention policies: Assigned to mailboxes (the default policy is assigned to all Exchange Online mailboxes).
  Office 365 retention policies: Assigned to mailboxes and other locations, but locations can also be excluded from policies.

Composed of
  Exchange retention policies: Each retention policy consists of a set of folder tags for specific system folders (like the Inbox), personal tags, and default tags. Three default tags can exist in a policy (for deletion, archive, and voicemail).
  Office 365 retention policies: Policies function like default tags, in that the policy applies to all items in a location that are not otherwise tagged (for instance, with an Exchange personal tag or an Office 365 classification label).

Actions
  Exchange retention policies: Move to Deleted Items; Permanently Delete; Move to Archive.
  Office 365 retention policies: Keep and then remove content; Keep and do nothing; Remove old content.

Enforced by
  Exchange retention policies: Managed Folder Assistant.
  Office 365 retention policies: Managed Folder Assistant (for Exchange and Office 365 group mailboxes); other background processes service the other locations.

Moving from Exchange Retention Policies

One inevitable question is whether tenants should move from Exchange retention policies to Office 365 retention policies and classification labels. As Table 1 makes obvious, significant differences exist between the two types of retention policies, so the answer is unclear at this point.

Every tenant is different and although it might be easy for a cloud-only tenant with relatively simple retention needs to go ahead and embrace Office 365 retention policies, the situation is probably very different for large and complex tenants that already have a well-defined retention strategy in place. Things become even more complicated for hybrid tenants, who often want to use the same processes on-premises and in the cloud.

Experience and time will allow us to develop better answers. In the meantime, new tenants should start with Office 365 retention policies and classification labels while older tenants test, compare, and contemplate their best course of action. Microsoft says that it plans to keep the older workload-specific functionality available within Office 365 to allow organizations to make the transition. That is wise, because the nature of retention is that items can be kept for a long time, and no one wants to be forced into a change of strategy that might affect terabytes of retained content.

Classification Labels

Where retention policies handle the bulk processing of content, Office 365 uses classification labels to mark content for specific treatment, like keeping certain documents for longer periods because they contain especially valuable information.

Classification labels are published to the different applications to make them available to users, after which they can be applied to content. The way that classification labels appear in OWA is interesting because they are presented in the same way as personal retention tags. In other words, when Office 365 publishes classification labels to Exchange, clients pick up and use the labels like personal tags.

Figure 1 shows a mixture of classification labels and retention tags in the OWA UI. The retention tags appear because a retention policy applies to the mailbox, while the classification labels appear because a label policy includes the mailbox within its coverage. The user can select either a label or a tag to preserve an item, proving that Exchange retention policies co-exist peacefully alongside Office 365 classification labels.

Figure 1: OWA mixes classification labels with personal retention tags (image credit: Tony Redmond)

In Figure 2, we see how the same classification labels with the same retention settings are assigned to a document in a SharePoint library.

Figure 2: Classification Labels in use with SharePoint (image credit: Tony Redmond)

Only one label can exist on an item at any time. It is also possible to set a default label for a SharePoint site so that every item in the site inherits the classification.

Background processes make sure that the instructions contained in the label settings are respected. For instance, any item stamped with the “Archive Retention” label might be kept for 10 years and then removed from wherever it is stored.

Labels can also be used to mark items as “records.” This is a special status meaning that the item is needed for formal record-keeping and therefore cannot be changed or removed from Office 365 until its retention period expires.

Another interesting capability is auto-applying labels to content. To do this, you combine a label with a policy that is associated with a query or some sensitive data types. When the policy is published, Office 365 finds matching content and automatically applies the label. Users can overwrite an auto-applied label with a label of their choice.

Auto-apply label policies are a feature of the Office 365 E5 plan. It is a nice feature that helps tenants ensure that people do not make mistakes when they apply labels, but it is unlikely to be the stand-out reason why anyone upgrades to E5.

Rationalizing Labels within Office 365

“Label” is a generic term that is used elsewhere within Office 365 and Microsoft has some work to do to rationalize how they use the term. In the immediate future, we have to deal with three types:

  • Classifications are labels placed on Teams and Office 365 Groups that give visual indicators to members about the sensitivity of the information belonging to the team or group, but do not enforce any processing based on the classification.
  • Labels defined in Azure Information Protection can invoke processing to protect content stamped with the labels. For example, any content stamped with a label called “Most Sensitive” might lead to the automatic application of a rights management template to secure access to the content to users in a specific group.
  • Classification labels for Office 365 control the deletion and retention of content across multiple Office 365 locations, including the marking of items as formal records.

Creating a single uber-label that supports all the characteristics listed above will take some thought. A practical approach might be to have a single label whose capabilities are selectively enabled by licensing. The creation of such a label will need many contributions from different development groups. It will not happen overnight.

Groups and Compliance

Until now, Office 365 Groups were a major problem area for compliance. You can now apply classification labels and retention policies to Office 365 Groups, but only those that use Exchange to store group conversations. Yammer-based groups are not yet supported.

Teams and Planner are still problematic, but at least we see progress. With that thought in mind, my next article will look at how to use retention policies with Office 365 Groups.

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros”, the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle

The classic Tamagotchi toy is back

You’ve already seen Nintendo revive the NES and Nokia reintroduce the 3310, so why not resurrect more ’90s tech? Bandai certainly doesn’t see a problem with it. The company has relaunched the classic Tamagotchi toy in near-original form to mark its 20th anniversary (November 1996 in Japan, May 1997 elsewhere). After years of constant iterations, you’re back to simple black-and-white displays and the six initial characters. About the only change is the size — these eggs are about half as large as the models you might have owned as a kid.

The one catch: as of right now, they’re only officially available in Japan for ¥2,000 (about $18) each. Unless you’re willing to pay a premium to import them, you’re probably better off buying a locally available modern version and reminiscing about your youth. With that said: here’s hoping Bandai sees fit to bring its retro Tamagotchi to other countries in the near future.

Via: ShortList

Source: Amazon Japan

Deploy an Azure Network Watcher Instance — Preview

I am going to show you how to enable the preview Network Watcher functionality in your Azure subscription and how to deploy a Network Watcher, the new network-monitoring solution in Azure. A Network Watcher is deployed as an instance in an Azure region.

Register the Provider

Network Watcher is still a preview feature and must be opted into on a per-subscription basis. If you want to try out or use Network Watcher, then you will need to enable the feature using PowerShell.

As usual with Azure PowerShell, you should do the following:

  1. Make sure you are running the latest version of the Azure PowerShell modules. Otherwise, strange errors might occur.
  2. Log into Azure Resource Manager using Login-AzureRmAccount.
  3. Select the appropriate subscription using Select-AzureRmSubscription, as shown below.
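For example, steps 2 and 3 look like this (the subscription name is a placeholder):

Login-AzureRmAccount

Select-AzureRmSubscription -SubscriptionName "My Production Subscription"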

You can start the registration process using the following two lines:

Register-AzureRmProviderFeature -FeatureName AllowNetworkWatcher -ProviderNamespace Microsoft.Network

Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Network

The registration process can take a number of minutes. Go find something else to do. You can come back later and check progress by running:

Get-AzureRmProviderFeature -FeatureName AllowNetworkWatcher -ProviderNamespace  Microsoft.Network

Verifying the Progress of the Network Watcher Registration [Image Credit: Aidan Finn]

The RegistrationState returned by the above command will change to “Registered” once the process completes successfully. At that point, you can move on to the next step.

Create a Network Watcher Instance

Log into the Azure Portal with your subscription administrator account. Browse to More Services > Network Watcher. The overview screen shows the current enablement status of Network Watcher for each enabled region in each of your subscriptions. At the time of writing this article, the preview release was only available in:

  • West US
  • North Central US
  • West Central US

The Status of Network Watcher for Each Region in the Subscription [Image Credit: Aidan Finn]

To enable Network Watcher in a region, click the More button (…) for that region. After that, click Enable Network Watcher. This is a deployment that only takes a few seconds to complete. I enabled West US. A watcher resource called NetworkWatcher_westus was created in a resource group called NetworkWatcherRG. The names of the resource and resource group were automatically chosen by the deployment.

This resource is typical of all Azure resources. You can use role-based access control to limit access to Network Watcher.

You can return to the Overview to verify that the instance is enabled in the region. From there, you can continue to enable Network Watcher in each region.

Existential Bug Reports

ISSUE: If we wait long enough, eventually the Earth will be consumed by the Sun. WORKAROUND: None.

Drinkable Clouds Get You Second-Hand Drunk

While the rise of electronic cigarettes and vaping has led to many aggravated bystanders, an installation in Germany may have found a vapor of a different ilk. Rather than nicotine, this cloud of vapors is full of tequila which precipitates out into glasses (or people) who happen to be nearby.

The cloud generator uses ultrasonic devices to vibrate the tequila molecules until they form a fine mist. The mist is delivered outward towards the sculpture, where a delicious cloud forms. From there, the cloud literally rains tequila out into its original, drinkable tequila form. It appears to take a while to gather enough tequila from the cloud, though, so there is a convenient tap on the side that will dispense it without all the rigmarole.

Basically, this is a nebulizer that uses tequila and disperses the output rather than directing it. You’re unlikely to get a large enough gasp for inebriation, but technically there is an opportunity, er, a risk here of becoming second-hand drunk.

The installation is an effort by the Mexican Tourism Board to encourage Germans to take a break from the rain in favor of visiting sunny Mexico, and we’d have to say that the effort seems to be a success. Once there, hopefully any visitors will be able to enjoy a perfect margarita or two as well.

Hackaday Prize Entry: WiFi In Wall Switches

The Internet of Things and Home Automation are the next big thing, even though we’ve had X10 switches and controllers for forty years. Why the sudden interest in home automation? Cheap microcontrollers with WiFi, ZigBee, and Z-wave, apparently. For this Hackaday Prize entry, [Knudt] is building a WiFi switch, meant to be retrofitted into any Euro wall switch.

There are three parts to [Knudt]’s WiFi wall switch, each with different requirements. The top layer holds the switch itself and a small OLED display. The switches are really two small capacitive switches, which means there’s no reason to go through the work of sourcing a proper mechanical switch. Good thinking, there. The second layer of this contraption is basically an ESP8266, providing all the logic for the wall switch. The bottom layer is a bit more interesting, housing the 110-230V input with a triac or relay. This is where the fun, burny stuff happens.
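To give a flavor of that logic layer, here is a minimal MicroPython sketch for an ESP8266 that toggles the pin driving the relay or triac. The pin number, and the choice of MicroPython itself, are our assumptions for illustration; [Knudt]’s firmware may look nothing like this:

import machine

# GPIO12 commonly drives the relay on ESP8266 relay boards (an assumption here)
load = machine.Pin(12, machine.Pin.OUT)

def toggle_load():
    # Invert the current output state: 0 -> 1, 1 -> 0
    load.value(0 if load.value() else 1)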

Right now, you can go down to your local home supply store and simply buy a device like this. History has shown that’s a terrible idea. With home automation cloud services shutting down and security vulnerabilities abounding, a DIY or open-source home automation project really is the best idea. That makes [Knudt]’s project a great entry for the Hackaday Prize.

Recommended Reading: iFixit wants to show you how to repair everything

Meet the $21 Million Company That Thinks a New iPhone Is a Total Waste of Money
David Whitford, Inc.

We’re no stranger to iFixit’s in-depth teardowns here at Engadget, but the company has a plan that’s much more than ripping apart the latest gadgets to see what’s inside. Inc. takes a look at how the company is helping the masses repair everything from smartphones to kitchen appliances, and why it offers guides for doing so free of charge.

When Shazam Scoops Your Album Announcement
Marc Hogan, Pitchfork

Well, this is awkward.

Funny or Die at 10: An Oral History
Brian Raftery, Wired

Funny or Die carved out a unique spot when it comes to online comedy. Wired takes a look at the site’s history that began with a two-minute Will Ferrell sketch.

How Do You Beat the Smartphone Camera?
Rob Walker, Bloomberg

One tactic is enlisting a well-known industrial designer with a proven track record to work on your 16-lens point-and-shoot camera.

Peter Moore Talks Leaving Electronic Arts for Liverpool FC
John Davison, Glixel

Glixel chats with the former head of EA’s esports division who left to take the CEO chair at Liverpool FC about stepping away from games after a 19-year career.

Skylake takes flight on industrial EBX and Mini-ITX boards

Perfectron announced a rugged, Linux-ready EBX SBC with Skylake-H Xeon and Core CPUs, plus an industrial Skylake-S Mini-ITX board. Perfectron, which recently announced a rugged, 3.5-inch OXY5361A SBC with Intel 6th Gen Core Skylake-U CPUs, has unveiled EBX and Mini-ITX boards with 6th Gen Skylake-H and Skylake-S chips, respectively. The rugged, EBX form-factor OXY5739A SBC […]

Solution guide: Migrating your dedicated game servers to Google Cloud Platform

By Joseph Holley, Cloud Solutions Architect, Gaming

One of the greatest challenges for game developers is to accurately predict how many players will attempt to get online at the game’s launch. Over-estimate, and risk overspending on hardware or rental commitments. Under-estimate, and players leave in frustration, never to return. Google Cloud can help you mitigate this risk while giving you access to the latest cloud technologies. Per-minute billing and automatically applied sustained use discounts can take the pain out of up-front capital outlays or trying to play catch-up while your player base shrinks.

The advantages for handling spiky launch-day demand are clear, but Google Cloud Platform’s extensive network of regions also puts servers near customers who would otherwise suffer high latency. Game studios no longer need to do an expensive datacenter buildout to offer a best-in-class game experience: just request Google Compute Engine resources where they’re needed, when they’re needed. With new regions coming online every year, you can add game servers near your players with a couple of clicks.
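For instance, standing up a dedicated server VM in a particular region is a single command. The zone, machine type, and image below are placeholders, not recommendations from the guide:

gcloud compute instances create game-server-01 \
    --zone us-west1-b \
    --machine-type n1-highcpu-8 \
    --image-family debian-9 --image-project debian-cloud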

We recently published our "Dedicated Game Server Migration Guide" that outlines Google Cloud Platform’s (GCP) many advantages and differentiators for gaming workloads, and best practices for running these processes that we’ve learned working with leading studios and publishers. It covers the whole pipeline, from creating projects and getting your builds to the cloud, to distributing them to your VMs and running them, to deleting environments wholesale when they’re no longer needed. Running game servers in Google Cloud has never been easier.

This Porsche-based electric supercar has a 215-mile range and crazy acceleration

Austrian EV tuner Kreisel is making a name for itself in the world of high-performance electric conversions. After creating a purely electric Mercedes G-Class with Arnold Schwarzenegger, the company has now revealed the EVEX 910e, a converted Porsche 910 with a fully electric drivetrain, a top speed north of 185 mph, a 0-to-60 time of under 2.5 seconds and more.

The electric supercar is street legal, and was developed in partnership with EVEX Fahrzeugbau GmbH, a car maker that focuses specifically on 1970s and 80s vintage sports cars. The car can be bought, in limited quantities, provided you have considerable resources: It’s priced at a lofty 1 million Euros.

Kreisel’s conversion gives the EVEX 910e 490 hp via its electric motor while producing zero emissions. It features a unique two-speed transmission created by the Kreisel brothers, which can also be purchased separately. The car’s maximum range of around 215 miles on battery power is described as “realistic” by the company, rather than guided by official measurement systems like the EPA standard. Rapid charging can quickly top up the battery, and the car has been built so that, when paired with a home solar installation, it can act as a battery and store power to feed back into the house.

I would very much like one of these, but I doubt I’ll get much support for a GoFundMe campaign to foot the bill.

Expect prices for PCs and mobile devices to rise this year

If you plan to buy a new PC or mobile device this year, you’ll likely be shelling out more cash than in previous years. Prices are going up, and expensive devices are in demand.

On average, the price of PCs and phones will go up by 2 percent this year, Gartner said in a research report released on Thursday. The calculations are based on U.S. dollars and average market sizes.

Breaking those numbers down, PC prices are expected to go up 1.4 percent this year, while mobile phone prices will go up 4.3 percent.

The prices will go up largely due to the rising prices of components. Also, more users are upgrading to more expensive and feature-rich mobile handsets.

The days of users preferring to buy the cheapest products are gone, said Ranjit Atwal, research director at Gartner.

Buyers are less price sensitive and are instead buying devices “that suit their lifestyles,” Atwal said.

Gartner’s forecast is in line with a projection in February by Lenovo’s chief operating officer, Gianfranco Lanci, who said PC prices would go up this year due to a shortage of DRAM, SSDs, batteries, and LCDs.

The cost of components like NAND flash has doubled since June last year, Gartner said.

The overall cost of purchasing components is going up. Moreover, millennials are willing to spend more on devices.

This year is expected to be big for smartphones. Samsung has launched the Galaxy S8 smartphones, and Apple is expected to launch its 10th-anniversary iPhone later this year. Prices of premium smartphones will go up by roughly 4 percent, Gartner said.

Android phones will suffer the most from the price increases. In emerging markets like China and India, Android phones are popular because of their affordability, but prices are also going up in those countries.

High-end Android smartphones offer more differentiation on features than generic low-end phones, giving a reason for buyers to spend a bit more to upgrade.

A good barometer for mobile phone pricing is the Chinese market. Global pricing of Chinese branded smartphones will go up to RMB 2,000 (US$290) by the end of this year from RMB 1,700 (US$246) at the end of last year, analyst firm Trendforce said last month. That’s partly because NAND flash supply is tightening.

According to Gartner, smartphone shipments worldwide this year will total 1.9 billion units, up from 1.89 billion last year.

The PC market has slowed down, and it is driven now by high-priced gaming PCs and 2-in-1s. Buyers of those PCs are willing to spend more money on their computers.

That trend is changing the types of computers shipped by PC makers, who are focused on selling higher-priced products that can deliver larger profit margins.

Low-end laptops and desktops will remain available, but PC makers like Dell and HP are slimming down those offerings. Low-cost laptops like Chromebooks typically have aging components, little storage, low-resolution webcams, and limited memory.

Gartner estimates 426 million computing devices, including PCs and tablets, will ship this year, dropping from 439 million last year. PC shipments will total 265 million this year, dropping from 270 million last year. Shipment of tablet devices like the iPad will total 161 million, dropping from 169 million last year, the analyst group predicted.


NVIDIA’s Titan Xp is the new king of graphics cards

The content below is taken from the original (NVIDIA’s Titan Xp is the new king of graphics cards), to continue reading please visit the site. Remember to respect the Author & Copyright.

Much to the consternation of last-gen Titan X owners, NVIDIA recently unveiled the GTX 1080 Ti, a GPU that offers more performance for nearly half the price. Luckily, rich gamers can regain bragging rights by dropping another $1,200 on the Titan Xp, NVIDIA’s new top-of-the-line consumer GPU. It’s based on the Pascal GP102 chip, which also powers NVIDIA’s $5,000-plus Quadro P6000, so you can look at the Titan Xp as a relative bargain.

The new card uses 3,840 unlocked CUDA cores, besting the 3,584 in the 1080 Ti. It still packs 12GB of GDDR5X RAM, the same as the last model, but runs it at an effective 11.4Gbps, up from 10Gbps before. That helps it push pixels and voxels at 547.7 GB/s, up from 480GB/s, enough speed to let it handle Crysis as if it were Candy Crush.
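
That bandwidth figure follows directly from the memory specs. A quick sketch in Python, assuming the Titan Xp’s published 384-bit memory bus:

# Memory bandwidth = effective data rate per pin * bus width in bytes.
data_rate_gbps = 11.4   # GDDR5X effective rate per pin
bus_width_bits = 384    # assumed from NVIDIA's published specs
bandwidth_gbs = data_rate_gbps * bus_width_bits / 8
print(f"{bandwidth_gbs:.1f} GB/s")  # 547.2 GB/s, in line with the quoted 547.7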

The new card re-establishes Titan as NVIDIA’s flagship, pushing back that usurper, the GTX 1080 Ti. However, the latter $699 card is still the one you want unless money is simply no object: it’s fairly easy to overclock it to the same performance level as the Titan Xp if you need that to feel better about yourself.

There’s another piece of good news from NVIDIA coming along with the new card. It’s about to launch beta OS X drivers for all 10-series cards, including the Titan Xp, GTX 1080 Ti, GTX 1070 and GTX 1060, finally giving Mac users access to NVIDIA’s latest products. That will be especially helpful for Adobe CC users on Mac, allowing for much faster CUDA GPU rendering. The Titan Xp is now available on NVIDIA’s site for $1,200 (£1,179 in the UK), but is limited to two per customer.

Source: NVIDIA

How to Copy a Virtual Hard Disk in Microsoft Azure

The content below is taken from the original (How to Copy a Virtual Hard Disk in Microsoft Azure), to continue reading please visit the site. Remember to respect the Author & Copyright.

In today’s Ask the Admin, I will show you how to use the AzCopy tool to copy the virtual hard disk (VHD) of an Azure virtual machine (VM).

Azure VMs are automatically provided with a VHD when you provision them using the Azure management portal. For instance, if you choose to deploy Windows Server 2016 Datacenter, then the attached OS disk will contain the appropriate server image. But sometimes, you might want to use a custom disk image and attach it to a VM.

There are several ways to get a custom disk image into Azure storage. You can upload an image from your local PC to Azure, or you can copy the existing VHD of a generalized VM from an Azure storage account. The latter is what I am going to show you how to do in this article.

I will use the AZCopy tool to copy the OS disk of an existing Azure VM to a new container in the same storage account. To follow the instructions below, you will need an Azure subscription and at least one VM already provisioned. If you do not have an Azure subscription, you can get a free 30-day trial here. For more information on provisioning VMs in Azure, see Create a Virtual Machine in the Azure Cloud on the Petri IT Knowledgebase.

Prepare to Copy a VHD

Before copying a VHD in Azure, you will need to download and install the AzCopy tool on your local PC. You can download the tool for free here.

  • Log in to the Azure management portal and make sure that the VHD you want to copy is attached to a VM. The VM should be in the Stopped (deallocated) state. To see a list of VMs, click Virtual machines in the options on the left.
  • To copy a VM’s VHD, click the VM in the list and, in the Virtual machine panel, click Disks under SETTINGS.
  • On the left, you will see a list of disks attached to the VM. Click the disk that you want to copy.
  • At the bottom of the disk panel, the first part of the VHD URI shows the name of the storage account where the VHD is stored. In this example, the storage account is called atastor12. The name of the container is vhds.
  • Make a note of the disk file name. You will need it later.
Find out the storage account and container name of a VHD (Image Credit: Russell Smith)

Get the Storage Account Access Key and Container URL

The AzCopy tool requires the storage account access key and container URL, which you can find in the Azure management portal.

  • Expand the list of options on the far left of the Azure management portal by clicking the hamburger icon in the top left corner.
  • Click More services at the bottom of the list and type storage in the search field.
  • Click Storage accounts in the list of results.
  • On the Storage accounts panel, click the storage account where the VHD you want to copy is located. In this case, atastor12.
  • On the Storage accounts pane, click Access keys under SETTINGS.
  • Copy the first access key by clicking the COPY icon on the left of the key. Store the key temporarily in Notepad.
  • Now click Containers under BLOB SERVICE.
  • On the right, click the container where the VHD you are going to copy is stored. In this example, vhds.
  • In the Container panel, click Properties.
  • In the Container properties panel, click the COPY icon to the right of the URL field. Paste the URL in Notepad. You will need it later. You will also need the storage account access key.
Get the container URL (Image Credit: Russell Smith)

Run AzCopy

Open a command prompt on your local PC in the C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy directory. The easiest way to do this is to locate the AzCopy directory in File Explorer, right-click it while holding SHIFT, and click Open command window here in the context menu.


In the command prompt window, run the AzCopy command as shown below. Replace <source container URL> with the URL you pasted into Notepad, <dest container URL> with the URL for the destination container, <storage account key> with the storage key you copied to Notepad from the Azure management portal, and <disk name> with the file name of the VHD you want to copy from the source container.

You must copy the VHD to a different container or a different storage account. To make things easy, I am copying to a different container in the same storage account, so the storage account key is the same in /SourceKey and /DestKey.

AzCopy /Source:<source container URL> /Dest:<dest container URL> /SourceKey:<storage account key> /DestKey:<storage account key> /Pattern:<disk name>

In my example, the destination container does not exist in the storage account but AzCopy will create it. I just made up a name: vhdscloned. My command line looks something like this:

AzCopy /Source:https://atastor12.blob.core.windows.net/vhds /Dest:https://atastor12.blob.core.windows.net/vhdscloned /SourceKey:QbU90fErU9cCJ7xchQ== /DestKey:QbU90fErU9cCJ7xchQ== /Pattern:osdisk.vhd

Copy an Azure VM virtual hard disk using AzCopy (Image Credit: Russell Smith)
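
If you prefer scripting to shelling out to AzCopy, the same server-side copy can be started with the Azure Storage SDK for Python. Here is a minimal sketch, assuming the legacy azure-storage package (pip install azure-storage; newer azure-storage-blob releases expose a different BlobServiceClient API) and the example account above:

from azure.storage.blob import BlockBlobService

# Storage account name and key, as shown under Access keys in the portal.
svc = BlockBlobService(account_name='atastor12',
                       account_key='<storage account key>')

# AzCopy creates the destination container for you; the SDK needs it explicitly.
svc.create_container('vhdscloned')

# Within a single account no SAS token is needed on the source URL, so we can
# build it directly and start an asynchronous, server-side copy.
src_url = svc.make_blob_url('vhds', 'osdisk.vhd')
copy = svc.copy_blob('vhdscloned', 'osdisk.vhd', src_url)
print(copy.status)  # 'pending' until the service finishes the copy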

When the copy operation has completed, click Refresh above the list of containers on the Storage account panel in the Azure management portal. The new destination container will appear in the list. Click the new container. You will see a copy of the VHD from the source container.
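
If you scripted the copy as sketched above, you can poll the destination blob for completion instead of refreshing the portal:

import time

# The server-side copy is asynchronous; wait until it leaves the 'pending' state.
while True:
    props = svc.get_blob_properties('vhdscloned', 'osdisk.vhd')
    if props.properties.copy.status != 'pending':
        break
    time.sleep(10)
print(props.properties.copy.status)  # 'success' once the VHD has been copied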


In this article, I showed you how to copy a VHD in Azure to a new storage account container.

The post How to Copy a Virtual Hard Disk in Microsoft Azure appeared first on Petri.

IBM Watson offers tech support that never sleeps

The content below is taken from the original (IBM Watson offers tech support that never sleeps), to continue reading please visit the site. Remember to respect the Author & Copyright.

If your company uses IBM’s helpdesk services, don’t be surprised if you find yourself talking to Watson next time you contact the IT department. IBM has added a Watson-powered, concierge-like service to its helpdesk, and it can quickly solve your IT issues around the clock, wherever you are in the world and whatever device you’re using. Unlike automated bots, you can talk to Watson about your issues like you’re talking to another person. It can then customize its responses: for instance, it can use layman’s terms if you’re not that tech-savvy, or jargon if you are.

It’ll solve your problem on the spot if it’s something simple, like adding storage to an email account, resetting a password or ordering a new company phone or computer. If it’s not trained to handle your problem, it’ll hand you over to human IT personnel. However, Watson learns from every interaction and from every piece of feedback it receives, and doesn’t receive. (It notices if you’ve chosen not to answer the survey after each chat.) It draws from everything it learns, so over time, transfers to its human co-workers might become less and less frequent.

Richard Esposito, IBM’s general manager for GTS Mobility Services, says:

"Today, governments and enterprises need to provide an effective set of capabilities to their workforce, so that their employees can deliver a superior interaction and experience for their citizens and consumers. We need a system that can understand and communicate in a natural language conversation, one that solves problems and continues to learn while engaging with employees. Our Workplace Support Services with Watson delivers this value."

Source: IBM

New settings available in Windows 10 v1703 Settings app

The content below is taken from the original (New settings available in Windows 10 v1703 Settings app), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft has recently launched Windows 10 Creators Update v1703. Although the Settings app was introduced in the initial release of Windows 10, the latest version has brought loads of new functionalities in the Settings panel. This article contains every new option & setting available in the new Windows 10 Settings app.

New settings available in Windows 10 Settings

System settings

Some features have been removed from the System section of Settings; the following, however, is new in this latest version.

  • Shared experiences: This function lets you open apps and send messages on other devices signed in to the same Microsoft account. It is also possible to share apps and send messages to any nearby device.

Microsoft has moved Apps & features, Default apps, Offline maps, etc. out of the System section.

Personalization settings

Although most of the options are the same, a couple of new options have been included:

  • Themes: The Windows 7-like theme feature, which Microsoft had removed from previous versions of Windows 10, is back in v1703. You can now download your favorite themes from the Windows Store, and you will also see options such as Background, Color, Sounds, and Mouse cursor.
  • Start: One additional option, called “Show app list in Start menu”, has been added in this section. The other options are the same.

Apps settings

This is a completely new category added in the Settings panel of Windows 10 Creators Update. Here you will see the following options:

  • Apps & Features: Here, you can manage app installation, block third-party apps from being installed, manage installed apps, etc.
  • Default apps: This section of Settings panel will let you manage default apps in Windows 10. You can change or assign your favorite app as your default file opener.
  • Offline Maps: Offline maps has been moved from the “System” to the “Apps” section of the Settings panel.
  • Apps for websites: Another feature moved from the “System” panel to the “Apps” section. Here you can manage which websites can be opened by an app.

Gaming settings

This is another new category in the Settings panel. The following features appear in it:

  • Game bar: Helps users record gameplay, take screenshots, and live-stream games.
  • Game DVR: Lets you control how your machine captures your games.
  • Broadcasting: As the name suggests, this option controls how your game appears when you live-stream it.
  • Game Mode: One of the best additions; it optimizes your computer for a particular game so that you get the best performance while playing.

Ease of Access settings

  • One new feature, called Braille, is included in the Narrator section. It is currently under development and helps users communicate via a braille display. It requires third-party software, available in the Windows Store, and supports more than 40 languages.
  • You can find Audio options under Other options. These let you enable or disable mono audio.

Privacy settings

  • You can now directly turn on speech services and typing suggestions. This option is included under Speech, inking, & typing.
  • Tasks: This is a whole new sub-category in the Settings panel. You can add tasks, manage which apps are using Tasks, and much more.
  • Feedback & diagnostics: In this section you will find a new option called Diagnostic and usage data, which lets you manage what data, and how much of it, is sent to Microsoft to provide tailored experiences.
  • App diagnostics: Lets you control which apps are allowed to access your diagnostic information.

Update & security settings

  • Troubleshoot: This option will help you fix common problems related to internet connections, audio, printers, Windows Update, Blue Screens, Bluetooth, hardware, HomeGroup, keyboards, power, network adapters, audio recording, etc.
  • Find my device: Another new setting in the Windows 10 v1703 Settings app. You can locate a lost device, such as a laptop, and track it; the functionality can also be turned off.

These are the new features included in the Windows 10 v1703 Settings app. Microsoft has also changed some icons, like the Windows Update icon.

Stay tuned as we plan to cover each of these in detail in due course.



Anand Khanse is the Admin of TheWindowsClub.com, a 10-year Microsoft MVP Awardee in Windows (2006-16) & a Windows Insider MVP. Please read the entire post & the comments first, create a System Restore Point before making any changes to your system & be careful about any 3rd-party offers while installing freeware.

Climate change could make future flights a lot rougher

The content below is taken from the original (Climate change could make future flights a lot rougher), to continue reading please visit the site. Remember to respect the Author & Copyright.

Airplane rides could get extra bumpy in the future thanks to climate change. Turbulence could become two to three times more common because of it, according to a new study from the University of Reading.

The study, published in Advances in Atmospheric Sciences, is the first to investigate turbulence strength levels and how they’ll change in the future. Researchers found the average amount of light turbulence in the atmosphere will increase by 59 percent. Moderate turbulence will go up 94 percent, while severe turbulence will rise by 149 percent. The reason? Scientists believe higher carbon dioxide levels will create stronger vertical wind shears at aircraft cruising altitudes, which will make the shear instabilities that create clear-air turbulence more prevalent. CO2 levels are expected to double later this century.
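
Those percentages are what sits behind the headline “two to three times” claim: an increase of N percent is a multiplier of 1 + N/100, which puts moderate and severe turbulence in roughly the doubling-to-tripling range. A trivial check in Python:

# Convert each forecast percentage increase into a frequency multiplier.
for label, pct in [("light", 59), ("moderate", 94), ("severe", 149)]:
    print(f"{label}: x{1 + pct / 100:.2f}")
# light: x1.59, moderate: x1.94, severe: x2.49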

"For most passengers, light turbulence is nothing more than an annoying inconvenience that reduces their comfort levels, but for nervous fliers even light turbulence can be distressing," said the study’s author, Dr. Paul Williams, in a press release. "However, even the most seasoned frequent fliers may be alarmed at the prospect of a 149 percent increase in severe turbulence, which frequently hospitalizes air travelers and flight attendants around the world."

Dr. Williams added that his top priority now is to investigate alternate flight routes. "We also need to investigate the altitude and seasonal dependence of the changes, and to analyze different climate models and warming scenarios to quantify the uncertainties," he said. But, Williams isn’t the only one tackling this problem. IBM, which bought The Weather Company for $2 billion, recently teamed up with Gogo Inc. to give pilots a heads up when turbulence happens, so they can adjust their flight paths accordingly.

Via: EurekAlert

Source: Advances in Atmospheric Sciences

Wisdom of crowds plus a splash of AI give Australia new national analytical map data

The content below is taken from the original (Wisdom of crowds plus a splash of AI give Australia new national analytical map data), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s nice to know where a lake lies. It’s better if you know how fast rain can get into it

Geoscape’s layers will have GIS geeks swooning

Australia’s Public Sector Mapping Agency (PSMA) and US satellite constellation operator DigitalGlobe have joined together to come up with a whole-of-continent, high-resolution analytical data set.

Perhaps in need of a high-value product after its G-NAF (Geocoded National Address File) was published for free at Data.gov.au, the PSMA has decided to “fill in” the white space in national mapping.

As Dan Paull, PSMA’s CEO, told Vulture South, a map that showed address points, property boundaries, road centrelines and rivers left out important analytical information.

“The buildings and the infrastructure – that represents a lot of investment”, he said, but there was “no means to conduct analysis at scale, across the whole country”.

Hence the new “Geoscape”: a GIS-ready database that “describes the whole of the built environment – footprints, height, roof materials, solar panels, swimming pools, information about vegetation in the area, land cover”, and even details like surface imperviousness – whether it’s road or rocks, bare earth or a body of water.

Imperviousness, Paull said, is a good example of how much better analysis can be with the Geoscape data set.

Even open source GIS tools have well-established libraries for flood mapping, but they focus on how a catchment fills “from the bottom”, so to speak.

DigitalGlobe co-founder and CTO Walter Scott explained that “water doesn’t start at the lower level, it starts where it falls”.

In an urban environment, he explained, extreme weather can lead to flood damage from water coming downhill towards the catchment, because even half a metre of water moving at speed can do considerable damage to whatever lies in its path.

Understanding the surfaces on the path between rainfall and the bottom of the catchment therefore helps predict where, because of the volume of water, a property might be at risk.
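
To see why surface data matters for that kind of analysis, consider the textbook rational-method runoff estimate, in which peak flow scales with a surface-dependent runoff coefficient. A toy sketch in Python (illustrative values only, not Geoscape’s actual model):

# Rational method: Q = C * i * A, where C is the runoff coefficient
# (how impervious the surface is), i is rainfall intensity and A is area.
def peak_runoff_m3s(coeff, intensity_mm_hr, area_km2):
    # Convert mm/hr over km^2 into cubic metres per second.
    return coeff * (intensity_mm_hr / 1000 / 3600) * (area_km2 * 1e6)

for surface, c in [("forest", 0.1), ("suburb", 0.5), ("paved CBD", 0.9)]:
    print(surface, round(peak_runoff_m3s(c, 50, 1.0), 1), "m^3/s")
# The same storm sheds roughly nine times more water off paved ground than
# off forest, which is why imperviousness sharpens flood-risk predictions.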

“The major risks – fire risk, flood risk, wind risk – for all of these things, to answer the question about likelihood, you need information about the building, and its surroundings,” Paull said.

With that data filled in, “you can look at one building, or every building in the country, and understand the possibility of a tree falling”.

Scott said creating the Geoscape dataset was a combination of machine learning and crowd-sourcing.

Machine learning helped “identify building footprints, identify the land cover and the delineation of trees.”

Roof materials are machine-classified according to data collected by shortwave infrared satellite sensors, while multiple satellite images of buildings from different angles helped measure building height and roof pitch.

Crowd-sourcing, Scott said, helped identify things like swimming pools and whether there are solar panels on roofs.

The first release of Geoscape covers Sydney, but by 2018, Paull said, the product will cover the whole eight million square kilometres of the Australian continent.

As with other high-value PSMA data sets, Geoscape will be offered through resellers, with end-user licensing depending on what data a customer wants and the services shipped with the data. ®