Remix OS Player is an Android Virtual Machine for Windows

The content below is taken from the original (Remix OS Player is an Android Virtual Machine for Windows), to continue reading please visit the site. Remember to respect the Author & Copyright.

Jide Technology’s Remix OS has been making a fair amount of buzz across the Android market. Remix OS is Jide’s Android-based OS with Android apps,… Read more at VMblog.com.

Free and cheap ways to study for IT certifications

For as long as there have been technology certifications, IT pros have debated their value. Some believe they’re the key to a fatter paycheck, while others contend that they’re often not worth the paper they’re printed on. Others take the middle road and say they can be valuable in the right circumstances, but experience is king.


[Image: graduation cap with diploma stacked on books (Thinkstock)]

The aim of this story is not to add to that debate. This story is for technology professionals who have already decided to pursue a certification, and who are looking for ways to do so without breaking the bank.

Because there’s no denying it: Studying for and taking certification exams can be costly. Instructor-led classes often cost in “the thousands of dollars,” notes Tim Warner, a 15-year IT veteran, author and tech evangelist at Pluralsight, which specializes in online professional technology training. Even computer-based classes, which generally don’t offer direct contact with the instructor, typically cost “in the hundreds of dollars,” he adds.

And once the studying is done you still have to pay for the exams. “On average, exam prices range between $150 and $350 per attempt,” Warner says. “Some IT cert vendors, such as Microsoft, offer 2-for-1 promotions that effectively halve the registration cost. Either way, it’s expensive.”

Fortunately, there are plenty of free and low-cost resources that can help you study for certification exams, and depending on your circumstances, there may be other ways you can cut expenses. You might even be able to save some dough on the exam fees themselves. Later in the story I’ll discuss some inexpensive ways to gain hands-on experience in the subjects you’re studying.

Create an Office 365 dev/test environment in Azure

With the Office 365 dev/test environment in Azure, you can follow step-by-step instructions to configure a simplified intranet in Azure infrastructure services, an Office 365 Enterprise E5 subscription, and directory synchronization for Azure Active Directory (AD). With this new dev/test environment, you can:

  • Perform Office 365 application development and testing in an environment that simulates an enterprise organization.
  • Learn about Office 365 Enterprise E5 features, experiencing them in a consequence-free configuration that is separate from your organization’s infrastructure and Office 365 subscription and from your personal computer.
  • Gain experience setting up directory synchronization between a Windows Server AD forest and the Azure AD tenant of an Office 365 subscription.

Do all of this for free with Office 365 Enterprise E5 and Azure trial subscriptions.

Build out the Office 365 dev/test environment with these steps:

  1. Create a simulated intranet in Azure infrastructure services.
  2. Add an Office 365 Enterprise E5 subscription.
  3. Configure and test directory synchronization between the Windows Server AD forest of your simulated intranet and the Office 365 subscription.
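
The three steps above lend themselves to scripting. As a minimal sketch, the simulated intranet from step 1 can be declared as data before anything is created in Azure; all resource names and address ranges below are hypothetical stand-ins, not values from the official instructions:

```python
# Sketch of step 1: declaring the simulated intranet as data before creating
# it in Azure. Resource names (DC1, APP1, the address range) are hypothetical.
INTRANET = {
    "resource_group": "O365TestLab",
    "location": "westeurope",
    "vnet": {"name": "TestLabVnet", "address_space": "10.0.0.0/24"},
    "vms": [
        {"name": "DC1",  "role": "Windows Server AD domain controller"},
        {"name": "APP1", "role": "member server for app development and testing"},
    ],
}

def creation_order(env: dict) -> list:
    """Create the virtual network first, then the domain controller,
    then any member servers that join the forest."""
    return [env["vnet"]["name"]] + [vm["name"] for vm in env["vms"]]
```

The ordering matters: the domain controller must exist (and directory synchronization must be configured against its forest) before the Office 365 subscription in step 3 can be wired up.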

Here is the progression:

Once complete, you can connect to any of the computers on the simulated intranet with Remote Desktop connections to perform administration, app development, and app installation and testing.

This dev/test environment can also be extended with an Enterprise Mobility Suite (EMS) trial subscription, resulting in the following:

With the Office 365 and EMS dev/test environment, you can test scenarios or develop applications for a simulated enterprise that is using both Office 365 and EMS.

New – Additional Filtering Options for AWS Cost Explorer

by Jeff Barr

AWS Cost Explorer is a powerful tool that helps you to visualize, understand, and manage your AWS spending (read The New Cost Explorer for AWS to learn more). You can view your spend by service or by linked account, with your choice of daily or monthly granularity. You can also create custom filters based on the accounts, time period, services, or tags that are of particular interest to you.

In order to give you even more visibility into your spending, we are introducing some additional filtering options today. You can now filter at a more fine-grained level, zooming in to see costs at the most fundamental, as-metered units. You can also zoom out, categorizing your usage at a high level that is nicely aligned with the primary components of AWS usage and billing.

Zooming In
As you may have noticed, AWS tracks your usage at a very detailed level: each gigabyte-hour of S3 storage, each gigabyte-month of EBS usage, each hour of EC2 usage, each gigabyte of data transfer in or out, and so forth. You can now explore these costs in depth using the Usage Type filtering option. After you enter Cost Explorer and choose Usage Type from the Filtering menu, you can filter on the fundamental, as-billed units. For example, I can take a look at my day-by-day usage of m4.xlarge instances:

Zooming Out
Sometimes you need more detail, and sometimes you need a summary. Maybe you want to know how much you spent on RDS, or on S3 API requests, or on EBS magnetic storage. You can do this by filtering on a Usage Type Group. Here is my overall EC2 usage, day by day:
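
For readers who prefer to script these views, the same two filters can be expressed as Cost Explorer filter documents. The programmatic form below (boto3's `ce` client and its `get_cost_and_usage` call) is an assumption for illustration; the post itself describes the console feature only:

```python
# Sketch: the "zoom in" and "zoom out" filters as Cost Explorer API filter
# documents. The boto3 call at the bottom is commented out because it needs
# AWS credentials; the filter shapes themselves are plain data.

# Zooming in: a single as-metered usage type (e.g. m4.xlarge box usage).
usage_type_filter = {
    "Dimensions": {
        "Key": "USAGE_TYPE",
        "Values": ["BoxUsage:m4.xlarge"],
    }
}

# Zooming out: a high-level usage type group (group name is illustrative).
usage_group_filter = {
    "Dimensions": {
        "Key": "USAGE_TYPE_GROUP",
        "Values": ["EC2: Running Hours"],
    }
}

def daily_cost_request(filter_doc: dict, start: str, end: str) -> dict:
    """Build a get_cost_and_usage request for a day-by-day cost view."""
    return {
        "TimePeriod": {"Start": start, "End": end},
        "Granularity": "DAILY",
        "Metrics": ["UnblendedCost"],
        "Filter": filter_doc,
    }

# With credentials configured you would run, for example:
# import boto3
# ce = boto3.client("ce")
# resp = ce.get_cost_and_usage(
#     **daily_cost_request(usage_type_filter, "2016-09-01", "2016-09-08"))
```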

Here are some of the other usage type groups that you can use for filtering (I had to do some browser tricks to make the menu this tall):

Available Now
These new features are available now and you can start using them today in all AWS Regions.

Jeff;
Explore Microsoft Cloud Platform System – delivering Azure experiences in an integrated system

Are you getting ready for your upcoming Ignite trip? Are you ready to learn how Microsoft Cloud Platform System (CPS) can help you get started with cloud without breaking the integrity of your existing virtualized environments? Join us at session BRK2260, “Explore Microsoft Cloud Platform System – delivering Azure experiences in an integrated system,” to learn all about our hybrid cloud vision, new developments, and new possibilities that enable IT organizations to get the best of both public and private cloud infrastructures. Also learn how you can take advantage of various technologies from Microsoft today to start your cloud journey and plan your investments so that they are aligned with the future. As part of the session, we’d also like to share some real-life customer examples and use cases based on CPS, as well as best practices.

I’m Cheng Wei, a program manager on the Azure Stack team. Together with my colleagues Walter Oliver and John Haskin, I can’t wait to share all these exciting topics at Ignite, and we’d love to hear what’s on your mind and what you would like to discuss with us around this subject.

During the session, you can expect to hear from us on the following areas:

  • Explain Microsoft’s hybrid cloud vision
  • Introduce CPS product family (CPS Premium and CPS Standard)
  • Explain WAP / CPS and Azure Stack co-existing strategy and experience
  • Demo the experiences after connecting WAP to Azure Stack

Please note that not everything we’ll share at this session will be available in the Technical Preview 2 release. So don’t miss this opportunity to come learn and see a demo of how to continue your cloud investment with WAP/CPS today and connect them with Azure Stack next year when it’s released!

Again, if you’re coming to Ignite, we’d love to hear your thoughts on whether there is anything else you’d like to see and hear in this session, or if you have any specific questions that you’d like to start discussing with us. Feel free to follow us @cheng__wei, @walterov, and @AzureStack for more updates on this and other Microsoft Azure Stack session topics.

Thanks, and we look forward to meeting some of you at @MS_Ignite!


Are You Putting Lipstick On A Pig? 5 Signs Your Security Is Outdated

Article written by Sami Laine, Principal Technologist, CloudPassage. The security industry is constantly kicked around for dropping the ball:… Read more at VMblog.com.

Hyper-converged hyper-contender ZeroStack starts connecting clouds

HyperConverged HyperContender ZeroStack has started connecting public clouds to its on-premises kit.

ZeroStack promises the usual “Our beautiful GUI and cunning plumbing mean that if you turn on our boxes and go make a cup of coffee there’ll be VMs running before the cappuccino foam falls” experience. The likes of VMware, VCE, Scale Computing, Nutanix and SimpliVity say that too. And like ZeroStack they all rely on dense 2U servers to make the magic happen.

ZeroStack’s schtick is that it works with KVM – hello, low acquisition cost – and it says it can get you to a complex hybrid cloud without the need to hire any expensive architecture folks. It has validated servers from Dell, HP and Supermicro, and plans more hardware partners real soon now. It also emphasises analytics that warn you in advance when storage is running low or something is awry, before it ruins your weekend.

The company’s been selling this stuff since March 2016, has Series B finance to help it along and claims “double digit” customer numbers but didn’t tell The Register if it’s closer to 10 than 99.

The company says it’s just released some code that moves workloads from ESX into its own environment. It promises dependencies will make the jump, too, without the need for re-plumbing. And it makes the same promise for workloads flowing from its own boxen to Amazon Web Services, then back again.

HyperConverged systems are a busy market with clear and cashed-up leaders, general agreement on the need for tight coupling between hardware and software and a known Big Moment on the way in the form of Microsoft’s Azure Stack arriving. So good luck, ZeroStack! ®

West Midlands Police become first force to target ‘close pass’ drivers

Officers on bikes are targeting drivers who pass too close to cyclists, with proceedings against 38 people already in motion.

Stripped and ready to go: Enterprise Java MicroProfile lands

The project for a lightweight and modular enterprise Java suited to microservices has hit general release.

MicroProfile 1.0 has now hit general availability, just over two months after the project was unveiled by representatives of IBM, Red Hat, Tomitribe, Payara and the London Java Community on June 27.

A formal announcement is expected at Oracle’s annual JavaOne conference in San Francisco next week.

MicroProfile has already seen early initial implementations by Red Hat in its WildFly Swarm microservices runtime and IBM in its Liberty WebSphere Application Server.

The speedy delivery of MicroProfile 1.0 was enabled in part by the fact MicroProfile uses existing elements of the Java EE stack. The group worked on utilizing JAX-RS, CDI and JSON-P.

Long term, the idea is to agree key interfaces and specifications. These will be wrapped into different MicroProfiles, meaning a choice of packages so people aren’t compelled to swallow the full-fat Java EE stack. You’d then certify against a MicroProfile to demonstrate compliance.

Discussions are now underway for MicroProfile 2.0. Talks are spanning the addition of asynchronous reactive event processing, big data and some form of support for the Netflix open source software projects. There’s no date for version 2.0.

Oracle is not a member of the MicroProfile project.

Asked during a Java Community Process (JCP) meeting on August 9 whether Oracle planned to collaborate with the MicroProfile team, Oracle hedged.

Anil Gaur, Oracle group vice president with responsibility for Java EE and WebLogic Server, told the group he’d like to see “the two efforts come together” and had spoken to Red Hat. However, there was no definitive answer at that time.

The size of things Java is an age-old issue – as, too, is concern about the speed of development of the language and various runtimes and their ability to keep current with changing trends in software and among developers.

During the mid 2000s, the concern was that the Java language and JDK were getting fat, and there was a growing appetite to place them on an API diet. ®

New Azure Office 365 Regions Go Live in UK

This post will discuss the effects on administrators of the launch of the new regions for Azure and Office 365 in the United Kingdom (UK).

Announcement

Adoption of cloud services in the European Union (EU, with a population of 508 million people) has been patchy, despite there being two EU-based Azure regions (Dublin and Amsterdam) and four EU-based Office 365 locations (Dublin, Amsterdam, Finland and Austria). This is due to:

  • Microsoft being an American corporation.
  • Aggressive attitudes of the USA government towards non-American customer data in American-owned data centres outside of the USA.

Some markets, such as Germany and the UK, have not reacted well. In Germany, Microsoft is building two Azure regions that will be operated by a German-owned trustee, thereby circumventing US federal laws.

The UK is an interesting case. Maybe Microsoft had amazing foresight, and maybe it got lucky. Microsoft’s announcement that it had just launched new data center regions in the UK made it clear that it wanted to attract Office 365 and Azure business from British government agencies. But recent events made Microsoft’s new regions even more important. The UK is a large market; it has a population of 64 million, London is one of the world’s financial hubs, and it has the fifth largest economy (GDP) in the world. What’s more, the UK is officially on the path towards leaving the European Union (see “Brexit”), and this uncertain process might leave this very large market isolated outside of the European Union, outside US/EU data agreements, and away from the older Microsoft data center regions, depending on how the legal dice land.

What was Launched?

The Microsoft announcements on this aren’t very clear, but I eventually found some clarity after a lot of digging. Three new data center locations were opened:

  • London, in south England. It’s likely near London rather than in London because of property prices and other risks of being in such a large risk zone.
  • Cardiff, in Wales. This might actually be Newport which is close to Cardiff – one of the Azure ExpressRoute POPs is in Newport, which appears to be a hot location for data centers.
  • Durham, in northeast England.

Azure has two additional regions:

  • UK South, in the London facility.
  • UK West in the Cardiff facility.

These regions are paired, meaning that optional replication will be contained within the borders of the UK as it stands now – the after-effects of Brexit might lead to a breakup of the UK into its four countries.
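
As a rough sketch, the pairing can be modeled as a simple lookup table. The lowercase identifiers `uksouth` and `ukwest` follow Azure's usual region-naming convention but are assumptions here:

```python
# Sketch: modeling the UK region pairing described above. Pairings determine
# where optional geo-replication lands; for the UK pair, replicas stay
# inside the UK. The mapping covers only the regions discussed here.
REGION_PAIRS = {
    "uksouth": "ukwest",
    "ukwest": "uksouth",
}

def replication_target(region: str) -> str:
    """Return the paired region that receives optional geo-replication."""
    try:
        return REGION_PAIRS[region]
    except KeyError:
        raise ValueError(f"no pair recorded for region {region!r}")
```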

Office 365 is located in:

  • The London facility
  • The Durham facility.

If you browse the Office 365 interactive data center map, you’ll see that only Exchange Online and SharePoint Online are available in these regions at this time. Other services, such as Skype for Business, Planner, Azure AD, and Dynamics CRM Online, are run out of Ireland and the Netherlands. Sway and Yammer are actually run out of the United States (not in the EU at all!). Dynamics CRM Online is coming to the UK in early 2017.

Note that Microsoft is the first of the major cloud vendors to open up data centers in the UK. Every other facility in the EU has been either located on the continent or in Ireland.

Impact on Azure

The two new Azure regions are generally available and should be usable by customers immediately. I was able to deploy my first virtual machine in UK West with no issues, but there was some talk that not everyone could see the new UK regions in the Azure Portal – if there really was an issue, it was a bug.

If you view the list of available services in the two UK regions, you’ll notice two things:

  1. Older services, such as the D-Series virtual machines (which were succeeded by the Dv2-Series), are not available.
  2. Some “new” services, such as Azure Container Service, HDInsight, and IoT are not available yet.

I don’t expect that older services will be deployed in the new regions – why would Microsoft build Hyper-V hosts from old hardware? If you need the lower prices of older service options, then you’ll need to consider alternative Azure regions.

As for the lack of availability of newer features, this is just a cloud thing. In the world of the cloud, a service provider will get core features operational as quickly as possible, so as to get something into customers’ hands and to generate revenue. Then the service provider starts working on the other elements and brings them online over time. So don’t despair if you want to run HDInsight in the UK regions – it’s probably just a matter of time until Microsoft has built out the hardware and the software.

What about Office 365?

If you are located in the UK then your data will not be auto-magically moved to the new locations near London and in Durham. Microsoft does not move Office 365 customers when new locations are opened:

Existing customers that have their core customer data stored in an already existing datacenter region are not impacted by the launch of a new datacenter region.

You can request to have your data moved. There are some guidelines:

  • It can take up to 24 months to move your data.
  • Microsoft cannot predict when your data move will complete, because every customer is different.
  • A data move should not affect service availability.

Customers that are new to Office 365 will be automatically located in the new regions if they are suitable:

New customers or Office 365 tenants created after the availability of the new datacenter region will have their core customer data stored at rest in the new data center region automatically.

The post New Azure Office 365 Regions Go Live in UK appeared first on Petri.

Cloudways Introduces Industry’s First Managed Cloud Hosting for Containers

Cloudways now provides fast, affordable, and scalable container cloud hosting. The cloud platform provisions Kyup containers that come with… Read more at VMblog.com.

Citrix Releases XenDesktop 7.11 – Adds Microsoft Support, Improved User Experience, Simplified Management and More

This week, Citrix announced the release of XenDesktop 7.11, the follow-up version to XenDesktop 7.9 released in Q2 2016. If you’re wondering if you… Read more at VMblog.com.

Virgin Atlantic turned industrial waste into greener jet fuel

Illinois-based LanzaTech and Virgin Atlantic have been working on an alternative fuel source for Sir Richard Branson’s flagship airline since 2011. This week, the two companies announced a breakthrough that could drastically reduce the airline industry’s carbon emissions. LanzaTech has produced 1,500 US gallons of jet fuel derived from the industrial gases given off by steel mills.

The LanzaTech fuel was created by capturing these gases, which would have otherwise been dispersed into the atmosphere, and converting them to a low-carbon ethanol called "Lanzanol" through a fermentation process. As the New Zealand Herald reports, the Lanzanol was produced in China at the Roundtable on Sustainable Biomaterials-certified demonstration center in Shougang and then converted to jet fuel using a process developed alongside the Pacific Northwest National Lab and the US Department of Energy. While initial tests show the Lanzanol fuel could result in as much as 65 percent lower carbon emissions than conventional jet fuel, it will need to pass a few more tests before it can be used in a commercial setting. Still, Branson believes Virgin Atlantic could make a Lanzanol-powered "proving flight" as early as 2017.

According to LanzaTech, its technology could be implemented at 65 percent of the world’s steel mills, allowing the company to produce 30 billion gallons of Lanzanol annually. That’s enough to create 15 billion gallons of cleaner-burning jet fuel and replace about one-fifth of all the aviation fuel used yearly worldwide.

Source: Virgin, New Zealand Herald

OpenStack Developer Mailing List Digest September 10-16

Nominations for OpenStack PTLs Are Now Open

  • Will remain open until September 18 23:45 UTC
  • Submit a text file to the openstack/election repository [1].
    • File name convention: $cycle_name/$project_name/$ircname.txt
  • In order to be an eligible candidate (and be allowed to vote) you need to have contributed an accepted patch to one of the program projects during the Mitaka-Newton timeframe.
  • Additional information [2].
  • Approved candidates [3]
  • Elections will run from September 19, 2016 00:00 UTC until September 25, 2016 23:45 UTC
  • Full thread
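
The file-name convention above is easy to get wrong, so here it is as a small illustrative helper; the cycle, project, and IRC-nick values in the test are hypothetical:

```python
# Sketch: building a candidacy file name for the openstack/election
# repository, following the convention $cycle_name/$project_name/$ircname.txt.
def candidacy_path(cycle: str, project: str, ircname: str) -> str:
    """Return the relative path for a candidate's text file."""
    return f"{cycle}/{project}/{ircname}.txt"
```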

Ocata Design Summit – Proposed Slot Allocation

  • Proposed slot allocation for project teams at the Ocata design summit in Barcelona [4], based on requests current PTLs have made and adjusted for the limited space available.
  • Kendall Nelson and Thierry will start laying out those sessions over the available rooms and time slots.
  • Communicated constraints (e.g. Manila not wanting to overlap with Cinder) should be communicated to Thierry asap.
  • If you don’t plan to use all of your slots, let Thierry know so they can be given to a team that needs them.
  • Start working with your team on content you’d like to cover at the summit and warm up those etherpads!
  • Full thread

OpenStack Principles

  • A set of OpenStack principles is proposed [5] to accurately capture existing tribal knowledge as a prerequisite for being able to have an open and productive discussion about changing it.
  • The last time the majority of the Technical Committee was together, it was realized that there was a set of unspoken assumptions carried and used to judge things.
    • These are being captured to empower everyone to actually be able to challenge and discuss them.
  • The principles were started by various TC members who have governance history and know these principles, in an attempt to document the history behind commonly asked questions. They are not by any means final, and the community should participate in discussing them.
  • Full thread

API Working Group News

  • Recently merged guidelines
    • URIs [6]
    • Links [7]
    • Version string being parsable [8]
  • Guidelines Under review
    • Add a warning about JSON expectations. [9]
  • Full thread

 

[1] – http://bit.ly/2cjtN3W

[2] – http://bit.ly/2cEEA6v

[3] – http://bit.ly/2cEGtQe

[4] – http://bit.ly/2cEDuYp

[5] – http://bit.ly/2cjrSwE

[6] – http://bit.ly/2cjsBO6

[7] – http://bit.ly/2cEDxmV

[8] – http://bit.ly/2cjspyF

[9] – http://bit.ly/2cEDZ4C

 

Enermax develops case fans that clean themselves

Case fans are good at keeping your computer running at optimal temperatures. They’re also good at sucking up dust and getting themselves nice and dirty. Dust build-up on blades can seriously mess with […]

Tiny $2 IoT module runs FreeRTOS on Realtek Ameba WiFi SoC

Pine64’s $2 “PADI IoT Stamp” module is based on Realtek’s new “RTL8710AF” Cortex-M3 WiFi SoC, a cheaper FreeRTOS-ready competitor to the ESP8266. Realtek’s RTL8710AF WiFi system-on-chip began showing up on tiny “B&T” labeled modules in July in China on AliExpress, as described in this Hackaday post. The Realtek SoC offers an even lower cost, and […]

500px’s new app lets pros edit RAW photos, license their work and find jobs

500px, the photo-sharing service aimed at professional photographers, is today launching a new mobile app designed to serve the needs of this audience by offering editing tools along with a way to license their images through the company’s custom photography service.

The company last year rolled out its global photography on-demand service to select clients, and now the new app, called RAW, will make it more accessible to the company’s wider user base.

The app includes tools that allow photographers to capture and edit files in the RAW format and arrives only days after Adobe announced a similar enhancement in its own Lightroom Mobile for iOS application. The new Adobe app, however, offers the ability to capture and edit raw photos using Adobe’s Digital Negative (DNG) file format.

Both releases point to the advances mobile phones have made in terms of becoming tools capable of being used by pro photographers.

In addition to editing RAW photos, the new 500px app also offers a variety of editing tools, such as hue, saturation, and luminance controls, and the ability to create and use custom filters, including those created by the 500px community.

When the editing process is complete, users can upload and license their photos to the 500px community along with social networks. They can also create, export and attach model releases to their photos.

The company also plans to use the app to help photographers find jobs in the near future, it says. An “Assignments” section will soon begin alerting photographers who opt in about photo jobs nearby. The company is already working with partners including Airbnb, Google and Lonely Planet, it notes, and plans to use assignments to connect its 8 million users to on-demand jobs.

The app is a free download on iTunes.

Twitter rolls out new features for businesses running customer service accounts

Twitter today is rolling out a series of new features designed to help users better connect with businesses offering customer support through their official Twitter accounts. Now, those businesses will be able to clearly display on their profile if their account offers customer service, as well as which times those accounts are active.

The business can now indicate if it offers service via a new Customer Support settings page on the Twitter Dashboard website. Once enabled, the business’s Twitter profile will read that it “Provides Support.” This option will also turn on the account’s ability to receive Direct Messages from anyone. In other words, the business will no longer need to request that customers follow them back so they can send a private message.

This “Provides support” detail will also show when people search for accounts – including when they @mention the company in a tweet, for example, or begin typing a Direct Message.

In addition, the business can also now choose to display the hours of customer service availability on its profile, which will help set expectations in terms of when a reply may be received.

When customers visit these customer service Twitter accounts, they’ll also see a new, more prominent button to start a Direct Message with the business in question.

TechCrunch had previously reported on the larger Direct Messages button’s existence when the company was testing the feature in the wild. Accounts like @AppleSupport, @Uber_Support, @BeatsSupport, @ATVIAssist (Activision Support), and others were among the early testers. T-Mobile, which is highly active on Twitter, has also now adopted the new features.

As we noted at the time, the rollout of the larger Direct Messages button – which takes over the full space where the “Tweet to” and “Message” buttons used to live side-by-side – encourages users to start a private conversation with the business, instead of publicly tweeting at them. This could move some of the more negative comments that frustrated customers make on Twitter to the business’s private channel instead.

The changes also position Twitter to better compete with Facebook, which had rolled out a feature to its Pages users in the past that showed Facebook users how responsive a business is to customer inquiries. (However, the most recent Page redesign seems to have done away with this informational text for the time being.)

The new additions follow on other customer service features Twitter had previously launched, including Direct Message links and Customer Feedback cards.

Six Google Cloud Platform features that can save you time and money

The content below is taken from the original (Six Google Cloud Platform features that can save you time and money), to continue reading please visit the site. Remember to respect the Author & Copyright.

Posted by Greg Wilson, Head of Developer Advocacy, Google Cloud Platform

Google Cloud Platform (GCP) has launched a ton of new products and features lately, but I wanted to call out six specific features that were designed specifically to help save customers money (and time).


VM Rightsizing Recommendations

Rightsizing your VMs is a great way to avoid both overpaying for idle capacity and under-provisioning busy workloads. By monitoring CPU and RAM usage over time, Google Compute Engine’s VM Rightsizing Recommendations feature shows you at a glance whether your machines are the right size for the work they perform. You can then accept the recommendation and resize the VM with a single click.

VM Rightsizing Recommendations documentation
Google Compute Engine VM Rightsizing Recommendations announcement
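To make the idea concrete, here is a minimal sketch of how a rightsizing suggestion could be derived from observed utilization. The machine-type table, the 30 percent headroom factor, and the selection rule are illustrative assumptions for this post, not Compute Engine’s actual algorithm.

```python
# Illustrative rightsizing logic; thresholds and machine-type data are
# assumptions, not how Compute Engine actually computes recommendations.

MACHINE_TYPES = {
    "n1-standard-1": (1, 3.75),   # (vCPUs, memory in GB)
    "n1-standard-2": (2, 7.5),
    "n1-standard-4": (4, 15.0),
}

def recommend(current: str, avg_cpu: float, avg_ram_gb: float) -> str:
    """Suggest the smallest machine type whose capacity covers observed
    usage with an assumed 30% headroom."""
    needed_cpu = avg_cpu * 1.3
    needed_ram = avg_ram_gb * 1.3
    # Walk the catalog from smallest to largest and take the first fit.
    for name, (cpus, ram) in sorted(MACHINE_TYPES.items(), key=lambda kv: kv[1]):
        if cpus >= needed_cpu and ram >= needed_ram:
            return name
    return current  # nothing in the catalog fits; keep the current size

# A VM averaging under one core and 2 GB of RAM doesn't need four cores.
print(recommend("n1-standard-4", avg_cpu=0.9, avg_ram_gb=2.0))  # n1-standard-2
```

The real feature does the monitoring for you; the point of the sketch is simply that a recommendation is a function of observed usage plus headroom.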


Cloud Shell

Google Cloud Shell is a free VM for GCP customers, integrated into the web console, that you can use to manage your GCP resources, test, build, and more. Cloud Shell comes with many common tools pre-installed, including the Google Cloud SDK, Git, Mercurial, Docker, Gradle, Make, Maven, npm, nvm, pip, iPython, the MySQL client, the gRPC compiler, Emacs, Vim, Nano and more. It also has language support for Java, Go, Python, Node.js, PHP and Ruby, and built-in authorization to access GCP Console projects and resources.


Google Cloud Shell overview
Google Cloud Shell documentation
Using Cloud Shell: YouTube demo
Google Cloud Shell GA announcement

Custom Machine Types

Compute Engine offers VMs in lots of different sizes, but when there’s not a perfect fit, you can create a custom machine type with exactly the number of cores and the amount of memory you need. Custom machine types have saved some customers as much as 50 percent compared to a standard-sized instance.

Preemptible VMs 

For batch jobs and fault-tolerant workloads, preemptible VMs can cost up to 70% less than normal VMs. Preemptible VMs fill the spare capacity in our datacenters, but let us reclaim them as needed, helping us optimize our datacenter utilization. This allows the pricing to be highly affordable. 

Preemptible VMs overview
Preemptible VMs docs
Preemptible VMs announcement 
Preemptible VMs price drop
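The “up to 70% less” claim is easy to put in concrete terms. The hourly rate below is a made-up example, not a published GCP price; only the discount figure comes from the text above.

```python
# Back-of-the-envelope comparison of normal vs. preemptible VM cost.
# The $0.10/hour rate is hypothetical; the 70% discount is the "up to"
# figure quoted above, so this is a best-case illustration.

def monthly_cost(hourly_rate: float, hours: float = 730, discount: float = 0.0) -> float:
    """Cost for a month of continuous usage, with an optional discount."""
    return hourly_rate * hours * (1 - discount)

rate = 0.10  # hypothetical $/hour for a standard VM
normal = monthly_cost(rate)
preemptible = monthly_cost(rate, discount=0.70)
print(f"normal: ${normal:.2f}, preemptible: ${preemptible:.2f}")
```

Of course, preemptible instances can be reclaimed at any time, so the arithmetic only pays off for batch and fault-tolerant workloads that can tolerate interruption.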

Cloud SQL automatic storage increases

When this Cloud SQL feature is enabled, the available database storage is checked every 30 seconds, and more is added as needed in 5GB to 25GB increments, depending on the size of the database. Instead of having to provision storage to accommodate future database growth, the storage grows as the database grows. This can reduce the time needed for database maintenance and save on storage costs.

Cloud SQL automatic storage increases documentation
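As a rough mental model of the behavior described above, here is a sketch of storage that grows in bounded increments as the database fills. The exact growth rule (5 percent of current size, clamped to the 5 GB to 25 GB range) is an assumption made for illustration, not Cloud SQL’s documented algorithm.

```python
# Illustrative model of automatic storage growth. The increment rule here
# (5% of current size, clamped to 5-25 GB) is an assumption based on the
# description above, not Cloud SQL's actual sizing logic.

def next_increment_gb(current_size_gb: float) -> float:
    """Grow by ~5% of current size, clamped to the 5-25 GB range."""
    return max(5.0, min(25.0, current_size_gb * 0.05))

def grow_until(current_size_gb: float, needed_gb: float) -> float:
    """Add increments until capacity covers what the database needs."""
    size = current_size_gb
    while size < needed_gb:
        size += next_increment_gb(size)
    return size

# A 100 GB volume that needs 112 GB grows in three small steps.
print(grow_until(100, 112))
```

The appeal is the same as in the real feature: you never have to guess future capacity up front, and you never pay for a large block of storage the database has not grown into yet.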

Online resizing of persistent disks without downtime

When a Google Compute Engine persistent disk is nearing full capacity, you can resize it in place, without causing any downtime.

Google Cloud Persistent Disks announcement
Google Cloud Persistent Disks documentation
Adding Persistent Disks: YouTube demo

As you can see, there are plenty of ways to save money and improve performance with GCP features. Have others? Let us know in the comments.

Chrome OS gets cryptographically verified enterprise device management

The content below is taken from the original (Chrome OS gets cryptographically verified enterprise device management), to continue reading please visit the site. Remember to respect the Author & Copyright.

Companies will now be able to cryptographically validate the identity of Chrome OS devices connecting to their networks and verify that those devices conform to their security policies.

On Thursday, Google announced a new feature and administration API called Verified Access. The API relies on digital certificates stored in the hardware-based Trusted Platform Modules (TPMs) present in every Chrome OS device to certify that the security state of those devices has not been altered.

Many organizations have access controls in place to ensure that only authorized users can access sensitive resources, and that they do so from enterprise-managed devices that conform to their security policies.

Most of these checks are currently performed on devices using heuristic methods, but the results can be faked if the devices’ OSes are compromised. With Verified Access, Google plans to make it impossible to fake those results in Chromebooks.
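The reason hardware-backed attestation is harder to fake comes down to a challenge-response exchange: the verifier sends a fresh challenge, and only a device holding the protected key can produce a valid response. The sketch below is a highly simplified stand-in; a real Chrome OS device signs the challenge with a non-exportable, TPM-protected asymmetric key, whereas this uses a shared HMAC secret purely to keep the example self-contained.

```python
# Simplified challenge-response attestation flow, in the spirit of
# Verified Access. HMAC with a shared secret stands in for the real
# mechanism (TPM-bound asymmetric keys and certificates).
import hashlib
import hmac
import os

DEVICE_KEY = os.urandom(32)  # in reality: non-exportable key inside the TPM

def device_respond(challenge: bytes) -> bytes:
    """Device proves possession of the hardware-bound key."""
    return hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()

def verifier_check(challenge: bytes, response: bytes) -> bool:
    """Network service verifies the response before granting access."""
    expected = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)             # fresh nonce defeats replay attacks
assert verifier_check(challenge, device_respond(challenge))
assert not verifier_check(os.urandom(16), device_respond(challenge))
```

Because a compromised OS cannot extract the key from the TPM, it cannot forge the response, which is what lets the server side trust the reported security state.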

Organizations will be able to integrate their WPA2 EAP-TLS networks, VPN servers, and intranet pages that use mutual TLS-based authentication with the Verified Access API through the cloud-based Google Admin console.

The cryptographic verification mechanism can be used to guarantee the identity of a Chrome OS device and user, but more importantly to ensure that they have the proper verified boot mode device policy or user policy as specified by the domain admin.

“When integrating with an enterprise CA, for instance, hardware-protected device certificates can be distributed only to managed, verified devices,” Saswat Panigrahi, senior product manager for Chrome for Work, said in a blog post.

However, before organizations can use the new feature, they need to install a special extension on their Chrome OS devices and to have network services that understand the Verified Access protocol. That’s why Google is inviting identity, network, and security providers to integrate their products with its new API.

Rackspace Extends Managed Application Services to Microsoft Azure

The content below is taken from the original (Rackspace Extends Managed Application Services to Microsoft Azure), to continue reading please visit the site. Remember to respect the Author & Copyright.

Rackspace today announced the release of Managed Application Services for Microsoft Azure. This new offering introduces Rackspace’s premium Fanatical Read more at VMblog.com.

VMware flings vCenter Server away from Windows, if you want

The content below is taken from the original (VMware flings vCenter Server away from Windows, if you want), to continue reading please visit the site. Remember to respect the Author & Copyright.

VMware has released vSphere 6.0 Update 2m, with the main feature being a vCenter Server for Windows to vCenter Server Appliance migration tool.

vCenter is VMware’s key management application and comes in two flavours. vCenter Server for Windows runs on Microsoft’s famous operating systems and lets you manage VMs. VMware has spent the last couple of years bringing the other version, the vCenter Server Appliance, to feature parity with the Windows version. The appliance is a Linux-based beast that ships as a VM and by many accounts just starts running.

Because the Appliance is an appliance, you don’t need an operating system licence to run it. Nor do you need a database: the Appliance includes a vPostgres database. The Windows version needs either SQL Server or Oracle. And as we know, Oracle has form for being hostile to those who virtualise its products on VMware.

VMware also likes to point out that Oracle, SQL Server and Windows all cost money, so using the Appliance means you could save some dough and precious licences for VMs that do stuff other than manage VMs.

Perhaps the most interesting thing about this situation is that users asked for it. VMware allows the development of “Flings”, useful tools that it releases without initial support. This new converter started life as a Fling, became popular and is now a product.

What does it do? Simple: it takes your vCenter Server for Windows setup and teleports it into a vCenter Server Appliance, apparently in very few clicks and with all data uplifted into the vPostgres database.

The migration tool is the only new inclusion in this update to vSphere. ®


PCI Council wants upgradeable credit card readers … next year

The content below is taken from the original (PCI Council wants upgradeable credit card readers … next year), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Payment Card Industry Security Standards Council (PCI Council) has floated a new standard it hopes will reduce credit card fraud that starts at the point of sale, in part by allowing easier upgrades.

The new version 5.0 of the PCI PIN Transaction Security (PTS) Point-of-Interaction (POI) Modular Security Requirements emerged late last week. The most notable new bits of the proposed standard (PDF) are:

  • A new control that means point-of-sale card readers “… must support firmware updates. The device must cryptographically authenticate the firmware and if the authenticity is not confirmed, the firmware update is rejected and deleted.”
  • Tamper-proofing requirements, so that an attack involving “drills, lasers, chemical solvents, opening covers, splitting the casing (seams), and using ventilation openings” results in devices becoming inoperable and deleting all data.
  • A requirement that devices be verifiably immune to leaking keys if probed using side-channel methods, such as monitoring for electromagnetic emanations.
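The first control, verify-then-apply, can be sketched in a few lines. Real payment terminals authenticate firmware with vendor-signed asymmetric signatures; the HMAC below is only a self-contained stand-in for whatever signature scheme a vendor actually uses.

```python
# Sketch of the "authenticate firmware, else reject and delete" rule from
# the proposed standard. HMAC with a device-provisioned key stands in for
# the asymmetric vendor signatures real terminals use.
import hashlib
import hmac
import os

PROVISIONED_KEY = os.urandom(32)  # stand-in for a key provisioned at manufacture

def sign_firmware(image: bytes) -> bytes:
    """Produce the authentication tag shipped alongside a firmware image."""
    return hmac.new(PROVISIONED_KEY, image, hashlib.sha256).digest()

def apply_update(image: bytes, signature: bytes) -> bool:
    """Install only if the signature checks out; otherwise discard the image."""
    expected = hmac.new(PROVISIONED_KEY, image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        # Per the requirement: the update is rejected and deleted.
        return False
    # ... flash the authenticated image to the device ...
    return True

fw = b"terminal firmware v5.0"
assert apply_update(fw, sign_firmware(fw))                # genuine image installs
assert not apply_update(fw + b"tampered", sign_firmware(fw))  # altered image rejected
```

The point of making this mandatory is that a retailer can patch a fielded terminal quickly, while an attacker who modifies the image in transit still cannot get it installed.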

The changes have been made in response to the prevalence of card-skimming attacks, and in recognition that retailers need the ability to respond quickly as threats emerge. Hard-to-upgrade card-reading kit retards security efforts, as retailers resist expensive upgrades that address obscure attacks. Making card readers upgradeable should mean better point-of-sale security. That the United States is now adopting the chip-and-PIN (EMV) payment technology so prevalent elsewhere is also cited as a reason for the new round of changes.

But even pre-EMV technologies need better wireless security: Samsung’s Magnetic Secure Transmission (MST) technology lets phones talk to magnetic stripe readers. Even those small exchanges of electromagnetic energy are potentially sniffable, with keys a bigger prize than individual cards.

The new standard comes into force in September 2017, when the current version 4.1 will fade away. ®


Deploy OMS Monitoring to Azure Virtual Machines

The content below is taken from the original (Deploy OMS Monitoring to Azure Virtual Machines), to continue reading please visit the site. Remember to respect the Author & Copyright.


In this post I will show you two ways to deploy Azure Log Analytics (OMS) monitoring to Azure virtual machines, and to some of the services running in those machines.

Monitor Virtual Machine Logs

The first method that I am showing you is possible, but not optimal. You can configure Azure virtual machines to write the logs of some services to a storage account. OMS can then gather the following virtual machine logs from that storage account:

  • Linux Syslog: Logs from a Linux guest OS.
  • Windows Event: Classic logs from a Windows guest OS.
  • IIS Log: Logs generated by IIS in a Windows guest OS.
  • Windows ETWEvent: Logging that a developer can enable.

This capability means that instead of troubleshooting an application one machine at a time (a website load balanced across many machines, for example), you have a central repository of log data that you can query or create alerts from.

I will need a storage account to store my log data. You could reuse the storage account that the virtual machines are stored in, but I prefer to create a dedicated storage account in a systems management resource group. I have created a general purpose storage account on standard storage in a resource group called rg-sysmgmt-01. This storage account will store all log data from virtual machines in the same region.

The virtual machines must be configured to write their logs to this storage account. Open the settings of your virtual machines and browse to Diagnostics. Make sure the status is set to On. Click Storage Account and select the storage account that you have created for the purpose of storing diagnostics data. Then select the logs from the guest OS that you want to write to this storage account. The screen shot below shows an example of a Windows Server virtual machine. Save the settings and repeat this process with every other machine that you want to gather logs from.

 

Configure the Azure virtual machines to write logs to the storage account [Image Credit: Aidan Finn]

The next step is to configure Log Analytics (OMS) to gather logs from those storage accounts. Open the settings of your OMS instance and browse to Storage Account Logs. Here you can create an entry for each type of log that you can gather. Note that many elements of Azure can write logs to a storage account, not just virtual machines. This post is focusing on virtual machines, so I am going to gather IIS Logs and Events. Therefore, I will create two entries under Storage Account Logs.


Click Add and select the storage account that your logs are being written to. Under Data Type, select IIS Logs and click OK. Click Add again, select the storage account again, and select Events under Data Type. OMS is now configured to gather those two types of logs from the diagnostics-enabled virtual machines.

Note that Microsoft recommends using the Log Analytics VM extension for deeper insight into Windows and Linux logs. That’s what we’ll look at next.

Monitor Virtual Machines by Extension

You can monitor Azure virtual machines using the Log Analytics VM extension; this is an agent that is deployed to the virtual machine from your OMS instance or workspace.

To deploy the extension, browse to Virtual Machines in the settings of the Log Analytics (OMS) instance. Here you can see each of the virtual machines that your OMS workspace can monitor. You can filter this list if you have a lot of virtual machines.

Select a virtual machine; this opens a new blade where you can click Connect to enable monitoring for this virtual machine. You don’t need to stay on this blade to wait for the connection process to complete. Repeat this for every virtual machine.

Azure virtual machines being monitored by Log Analytics (OMS) [Image Credit: Aidan Finn]


A few minutes later, the virtual machines will switch to a Connected state in your workspace, meaning that the machines are now monitored by OMS.

 

 

The post Deploy OMS Monitoring to Azure Virtual Machines appeared first on Petri.

Cloud management enters phase 2: Decentralization

The content below is taken from the original (Cloud management enters phase 2: Decentralization), to continue reading please visit the site. Remember to respect the Author & Copyright.

Cloud computing is about centralization, the ability to push workloads that run on expensive on-premises systems to cheaper systems on public clouds.

This centralization lets us share resources with other tenants to reduce costs, thus increasing overall efficiency. Centralization provides a single location for data that may have once existed in many enterprise silos. Plus, it lets you centrally manage processes and systems with a single set of tools.

Essentially, we’re moving to cloud-based platforms to take back control of our systems and do a much better job of managing those systems centrally. The cloud’s cost advantage is really a bonus; the cloud would be a better option even if its costs were the same as the traditional on-premises data center.

You can think of this centralization as Phase 1 of the great cloud migration: getting all those apps and data out of their datacenters and client silos into a single location, the cloud.

Despite all the advantages of centralization, the cloud will likely shift to a distributed approach over the next decade. It will move the data processing back to the ultimate consumers of those processes and data. That’s Phase 2 of the great cloud migration: re-creating the notion of client/server computing, with the cloud being the server and the workloads (but not their applications or data) moving back to the clients.

This is not a repudiation of, or a shift away from, centralization. Yes, workloads will move closer to those who use them. But the workloads will remain centrally managed and controlled, even if they execute across a widely distributed architecture. They remain logically centralized whether or not their bits and processing are physically distributed.

The cloud platform automatically distributes the workloads on private cloud instances, which are tightly coupled with the centralized public clouds. To those who manage and use the systems, the workloads are in the public cloud, but deployed in a hybrid-like distributed architecture.

Current technology can already do this. Ironically, once we move to a centralized cloud-based architecture, all possibilities are open — including putting the workloads back on-premises, minus the silos. IT is a funny world.