Virtual Machine Serial Console access

The content below is taken from the original ( Virtual Machine Serial Console access), to continue reading please visit the site. Remember to respect the Author & Copyright.

Ever since I started working on the Virtual Machine (VM) platform in Azure, there has been one feature request that I consistently hear customers asking for us to build. I don’t think words can describe how excited I am to announce that today we are launching the public preview of Serial Console access for both Linux and Windows VMs.

Managing and running virtual machines can be hard. We offer extensive tools to help you manage and secure your VMs, including patching management, configuration management, agent-based scripting, automation, SSH/RDP connectivity, and support for DevOps tooling like Ansible, Chef, and Puppet. However, we have learned from many of you that sometimes this isn’t enough to diagnose and fix issues. Maybe a change you made resulted in an fstab error on Linux and you cannot connect to fix it. Maybe a bcdedit change you made pushed Windows into a weird boot state. Now, you can debug both with direct serial-based access and fix these issues with the tiniest of effort. It’s like having a keyboard plugged into the server in our datacenter but in the comfort of your office or home.

Serial Console for Virtual Machines is available in all global regions starting today! You can access it by going to the Azure portal and visiting the Support + Troubleshooting section. See below for a quick video on how to access Serial Console.

[Video: Serial Console access on a Linux VM]

Support for Serial Console comes naturally to Linux VMs. This capability requires no changes to existing images and will just start working. However, Windows VMs require a few additional steps to enable. For all platform images starting in March, we have already taken the required steps to enable the Special Administration Console (SAC), which is exposed via the Serial Console. You can also easily configure this on your own Windows VMs and images, as outlined in our Serial Console documentation. From the SAC, you can easily get to a command shell and interact with the system via the serial console as shown here:

[Image: the Special Administration Console (SAC) reached through Serial Console on a Windows VM]
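If you want to enable SAC on your own Windows images, a hedged sketch of the commonly documented commands follows (confirm against the Serial Console documentation for your image before relying on them):

```powershell
# Run inside the Windows VM (elevated) to enable the Special Administration
# Console over the serial port that Azure Serial Console uses.
bcdedit /ems "{current}" on
bcdedit /emssettings EMSPORT:1 EMSBAUDRATE:115200

# Once connected to the SAC prompt, typing "cmd" creates a command-prompt
# channel and "ch -si 1" switches into it.
```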

Serial Console access requires you to have VM Contributor or higher privileges on the virtual machine. This ensures that connection to the console is kept at the highest level of privileges to protect your system. Make sure you are using role-based access control to limit access to only those administrators who should have it. All data sent back and forth is encrypted in transit.

I am thrilled to be offering this service on Azure VMs. Please try this out today and let us know what you think! You can learn more in this episode of Azure Friday, this Monday’s special episode of Tuesdays with Corey on Serial Console, or in our Serial Console documentation.

 

Thanks,

Corey

Europe dumps 300,000 UK-owned .EU domains into the Brexit bin

The content below is taken from the original ( Europe dumps 300,000 UK-owned .EU domains into the Brexit bin), to continue reading please visit the site. Remember to respect the Author & Copyright.

Bureaucrats break internet norms by vowing to ban Blighty-based bods from Euro TLD

Brexit has hit the internet, and not in a good way.…

The SMB’s Essential Disaster Recovery Checklist

The content below is taken from the original ( The SMB’s Essential Disaster Recovery Checklist), to continue reading please visit the site. Remember to respect the Author & Copyright.

While almost every business would agree that it’s essential to have a disaster recovery (DR) plan, the sad fact is that not all businesses do. Most of the businesses that don’t have a DR plan tend to be the small and medium-sized businesses (SMBs) that actually need one the most. Many of these SMBs could potentially go out of business if they were hit by a disaster and were not able to recover effectively.

Today’s businesses need more availability and uptime than at any point in the past. Plus, they need to be able to deal with potential threats like ransomware as well as disasters and site outages. Creating an essential DR checklist is an important starting point for enacting your DR strategy. Let’s take a closer look at the main points that should be on your essential DR checklist.

  • Identify your critical business processes – Not all processes are created equal. You first need to identify your most important business applications. These are typically the essential applications that your business needs to operate on a daily basis. Ideally, IT should meet with business management and/or application owners to identify and prioritize these applications.
  • Backup and optionally replicate your critical servers – Backup is the foundation for all DR strategies. Backup enables you to restore your servers to a known-good point in time so that you can recover your essential IT operations. In addition, while it’s no fun, your backups need to be periodically tested. Various backup products make this easier by providing the ability to automatically test backup and restore validity. While backup provides basic protection, there is the problem of data loss between the time of the backup and the disaster event. Replication is another important technology that can be a key part of your DR plan. Replication enables you to vastly improve your recovery times and reduce data loss by providing one or more replicas of your protected servers. Replication reduces data loss by providing much more frequent replication intervals than backups. Some products provide near real-time replication capabilities. In addition, restore time is typically far faster as a replica can be quickly brought online without a lengthy restore process.
  • Make sure you have an offsite backup – Having at least one copy of your backups offsite is necessary in order to recover from a site disaster. Site disasters like fires, floods, or hurricanes can render an entire location, along with all of its computing resources, unusable. Keeping backup copies offsite protects them from local events, ensuring that you have at least one good backup copy to use during your recovery.
  • Have a restore target (most likely in the cloud) – It’s great to have a backup, but it’s even better to have somewhere to restore it to. While many larger businesses have their own private DR sites, this type of technology and its accompanying expense are far beyond the reach of most SMBs. However, using the cloud as a DR site is possible for most businesses. The cloud can house replica VMs that can be quickly started up in the event of a disaster, or you can use it to restore your backups to cloud-based VMs.
  • Assign DR roles – Having backups and DR targets that you can restore them to lays the foundation for a successful DR plan, but no plan runs itself. It’s people who put your plans into action. You need to assign well-defined roles and responsibilities to the IT staff and other business personnel who are needed to help recover from a disaster.

An effective DR plan is essential and can be the difference between your business surviving a disaster and being put out of business entirely. If you don’t have a DR plan in place, following this DR checklist is a good way to get started.

The post The SMB’s Essential Disaster Recovery Checklist appeared first on Petri.

connectwise (17.1.1.0)

The content below is taken from the original ( connectwise (17.1.1.0)), to continue reading please visit the site. Remember to respect the Author & Copyright.

Connectwise installer for internal use

Choosing the Best Mobile Office 365 Email Client

The content below is taken from the original ( Choosing the Best Mobile Office 365 Email Client), to continue reading please visit the site. Remember to respect the Author & Copyright.

Moving Mobile Email to Office 365

I am frequently asked to recommend the best mobile client to use with Office 365. Usually, the question is what email client to use because it is in the context of a company moving from on-premises Exchange to Exchange Online. Mail is often the first workload a company moves to the cloud, so it is unsurprising that this issue arises, especially as Exchange has included native support for mobile clients since the advent of the ActiveSync server in Exchange 2003 SP1 (the real action started with Exchange 2003 SP2).

Mobile Office 365 Clients

Of course, a wide range of other mobile clients are available for other Office 365 applications, as you can see from those installed on my iPhone (Figure 1).


Figure 1: iPhone Pro for Office 365 (image credit: Tony Redmond)

The apps receive regular updates and are generally of a high quality. iOS tends to be a little ahead of Android when it comes to functionality, but that varies from app to app. My biggest complaint at present is that the Teams mobile app still does not support switching between tenants. That feature is “coming,” just like Christmas.

The Success of Exchange ActiveSync

Originally designed to evangelize connectivity between the nascent Windows smartphones and Exchange to compete with RIM BlackBerry, Microsoft’s focus soon shifted to licensing Exchange ActiveSync (EAS) to as many mobile device vendors as possible.

Since 2006, Microsoft has done a great job of licensing EAS to all and sundry. Today, EAS is the common connectivity protocol for mobile devices for both Exchange on-premises and Exchange Online. Even Microsoft’s most ardent competitors, Google and Apple, license EAS.

The Problem with Exchange ActiveSync

Good as EAS is at connecting to Exchange, it is now an old protocol. Although Microsoft refreshed EAS (to version 16.1) last year, the functionality available through EAS is much the same as it ever was – synchronizing folders, sending and receiving email, updating the calendar, and maintaining contacts. If this is what you need, then EAS is the right protocol. And because EAS works so well, mobile device vendors can easily integrate EAS into their email clients to make them work with Exchange.

Except of course when new versions of an email app appear. Apple has a notable history of problems between the iOS mail app and Exchange, ranging from longstanding problems with calendar hijacking to issues with HTTP/2 when iOS 11 appeared. To be fair to both Apple and Microsoft, the two companies work together to resolve problems more effectively now than they did in the past, but the problems illustrate some of the difficulties that can creep in when mobile device vendors implement EAS.

A New Mobile Strategy

Up to late 2014, Microsoft’s strategy for mobile devices centered around EAS. Recognizing the limitations of the protocol, they also had “OWA for Devices,” essentially putting a wrapper around a browser instance running OWA on mobile devices. OWA for Devices never went anywhere fast, even if it was the only way to get certain functions on mobile devices like support for encrypted email or access to shared mailboxes.

Then Microsoft bought Acompli for $200 million to transform their mobile strategy and get them out of the hole they were heading into with OWA for Devices. The Acompli apps for iOS and Android had built up a loyal fan base because the clients worked well with Exchange, Gmail, and other servers, and included some unique functionality like the Focused Inbox, which is now available throughout the Outlook family.

Microsoft rebranded the Acompli apps as Outlook for iOS and Android in January 2015. After weathering an initial storm caused by some misleading assertions by security experts, two problems remained. First, the Outlook apps used EAS, but only to retrieve information from Exchange mailboxes and store the data on Amazon Web Services (AWS). Second, the clients used their own protocol to interact with the AWS store.

In 2016, Microsoft began to move the Outlook data from AWS to a new architecture based on Office 365 and Azure. Soon, the clients will use the same architecture to deliver the same functionality for Exchange on-premises.

The Outlook mobile clients still use their own protocol to communicate with Exchange. Why? The EAS protocol does not support all the functionality that the Outlook clients deliver, including the Focused Inbox, full mailbox search, and (most recently) protected email.

The way that Outlook deals with protected email is important. If you choose to protect email with Azure Information Protection, messages accessed through the Outlook mobile clients are more secure. By comparison, “unenlightened” clients like the Apple iOS mail app must remove that protection to store and display email. Coupled with Office 365 Multi-Factor Authentication (and the Microsoft Authenticator app), Outlook is a good choice for those who need the highest level of mobile email security available in Office 365.

Although the Outlook clients sometimes work differently to the way I would like, they are the best mobile clients for Exchange Online.

In summary, EAS is now the lowest common denominator for Exchange connectivity while all the new features and functionality appear in the Outlook clients.

Why Microsoft Will Not Upgrade EAS

Those who like using native email clients like the iOS mail app probably wonder why Microsoft doesn’t upgrade EAS to support the functionality needed by Outlook.

In a nutshell, Microsoft could upgrade EAS, but the engineering effort to do so cannot be justified. First, their own clients would then have to be retrofitted to use the “new EAS.” Second, no guarantee exists that mobile device vendors would upgrade their mail apps to exploit the features exposed by an upgraded API. Microsoft could ask the likes of Samsung, Apple, and Google to support new features, but it is likely that they would not.

The upshot is a lot of expense for Microsoft with no prospect of any positive outcome.

Sponsored

Into the Future

My answer to people who ask about mobile apps for Office 365 is that if users are happy with the native mail apps, then continue with that course. The users don’t realize what they are missing. On the other hand, if you want users to have the best functionality, you need to use the Outlook clients. That is where Microsoft’s focus is today, and it is where new features will appear in the future. Hopefully, Microsoft will deliver some long-awaited functionality, like support for shared mailboxes, soon.

And don’t forget the other mobile apps for Office 365. With such a selection available today, I don’t know how we ever managed to do any work on the road in the past…

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros”, the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle.

The post Choosing the Best Mobile Office 365 Email Client appeared first on Petri.

How to TAG files in Windows 10 & use it to make File Search efficient

The content below is taken from the original ( How to TAG files in Windows 10 & use it to make File Search efficient), to continue reading please visit the site. Remember to respect the Author & Copyright.

Windows 10 has powerful search built into the system, especially with Cortana, which allows you to search smartly using filters like music, images, PDF, and so on. One of the most underrated but efficient ways to search files […]

This post How to TAG files in Windows 10 & use it to make File Search efficient is from TheWindowsClub.com.

Boring Company to start selling LEGO-like interlocking bricks made from tunneling rock

The content below is taken from the original ( Boring Company to start selling LEGO-like interlocking bricks made from tunneling rock), to continue reading please visit the site. Remember to respect the Author & Copyright.

Elon Musk announced that the Boring Company will sell LEGO-like interlocking bricks made from rock that his tunneling machines excavate from the earth. Musk stated these bricks will be sold in “kits” and will be rated to withstand California’s earthquakes. 

The date of availability and cost of this latest product are unknown. Because his company has only started digging shorter tunnels, there is not enough upturned rock to begin making these bricks yet.

Celebrate World Backup Day on March 31, 2018 – Are You Ready?

The content below is taken from the original ( Celebrate World Backup Day on March 31, 2018 – Are You Ready?), to continue reading please visit the site. Remember to respect the Author & Copyright.

There’s a “DAY” for almost anything these days, but here’s one that should be on your calendar – World Backup Day, March 31, 2018. Whether it’s… Read more at VMblog.com.

Why PowerShell is a Core Skill for Office 365 Administrators

The content below is taken from the original ( Why PowerShell is a Core Skill for Office 365 Administrators), to continue reading please visit the site. Remember to respect the Author & Copyright.


Office 365 Pros Know PowerShell

Because I come from the Exchange side of the Office 365 house, PowerShell is a natural tool for me to turn to whenever I need to do something with Office 365 that Microsoft hasn’t included in the admin tools. The PowerShell coverage for Exchange is deep and extensive, even in the cloud. By comparison, PowerShell is not well covered in other Office 365 applications. Skype for Business Online has some administration functions while SharePoint Online offers mediocre support. Planner has no support, and the first version of the Teams PowerShell module could be so much better.

Given the spotty coverage in other parts of the service, I guess it should come as no surprise that Office 365 administrators who do not have a background in Exchange might consider PowerShell to be an odd but sometimes useful command-line interface. But that’s not the case. Simply put, PowerShell is a core skill for Office 365 administrators.

PowerShell Quirks

It’s true that PowerShell has its quirks. Like any scripting language, PowerShell syntax can be baffling and obscure, so using an IDE is the best approach for someone starting out. Writing raw PowerShell in the console is for masochists.

PowerShell has significant scalability limitations too, especially inside Office 365 where throttling controls clamp down on anyone who tries to consume resources with abandon. PowerShell will not process tens of thousands of objects rapidly, but that’s not its purpose.

If you think you need to process large numbers of Office 365 objects, listen to the recording of the seminar by MVPs Alan Byrne and Vasil Michev. The techniques they explain will help you get the job done, but it won’t be quick.

Why Admins Need PowerShell

The reasons why Office 365 administrators need to achieve a basic level of competency with PowerShell are varied. Here are my top picks.

The Office 365 Admin Tools are Not Perfect

Beauty is in the eye of the beholder and Microsoft probably thinks that its admin tools are just fine, but some of the more interesting jobs you might want to do need you to plunge into PowerShell. A recent example is the provision of cmdlets to recover deleted items for users without the need to log into their accounts.

Another is the support article cited in my article on GDPR data spillage. The list of steps needed to discover and report all the holds in place for a mailbox that must be temporarily lifted to remove items is long and prone to error. Scripting the retrieval and release of holds for a mailbox would automate the process and make it easier to stand over in court, should the need arise to justify the removal of held information. Finally, I point to the need to enable mailbox auditing for new mailboxes to ensure audit data flows into the Office 365 Audit Log. This problem has been around for years and it’s surprising that Exchange Online does not enable auditing by default. But you can, with PowerShell.
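As a minimal sketch (assuming a connected Exchange Online PowerShell session), enabling auditing for existing user mailboxes is a one-liner along these lines; the 180-day retention value is my own placeholder:

```powershell
# Turn on mailbox auditing for every user mailbox and keep audit entries
# for 180 days (adjust the retention to suit your own policy).
Get-Mailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox |
    Set-Mailbox -AuditEnabled $true -AuditLogAgeLimit 180
```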

Microsoft Cannot Anticipate Every Possible Admin Task

Try to write down all the tasks that you think an Office 365 admin will perform in a year. Once you get past the easy stuff like creating accounts, monitoring usage reports, and so on, it becomes increasingly difficult to anticipate just what admins will be called upon to do. The Office 365 Admin Center and the other associated consoles represent a lot of functionality, but there’s always the possibility that you might have to do something that isn’t available as a menu choice in a GUI.

Two recent examples are how to archive inactive Office 365 Groups (and Teams) and how to identify when Groups and Teams are not being used. Microsoft offers the Azure Active Directory expiration policy for Groups, but this is based on time (that is, a group expires after a set period) instead of activity, which creates the possibility that Office 365 could expire and remove your most important teams or groups even though they are in active use daily. You can easily recover the expired groups (within 30 days), but that’s not the point. It’s better to understand what groups and teams are active and act on that basis.
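As a sketch of the kind of activity check involved (assuming an Exchange Online PowerShell session; the 90-day window is an arbitrary choice of mine), you can look at the newest conversation item in each group mailbox:

```powershell
# Flag Office 365 Groups whose group mailbox has received no new
# conversation items in the last 90 days.
$Cutoff = (Get-Date).AddDays(-90)
Get-UnifiedGroup -ResultSize Unlimited | ForEach-Object {
    $Stats = Get-MailboxFolderStatistics -Identity $_.Alias -FolderScope Inbox -IncludeOldestAndNewestItems
    if ($Stats.NewestItemReceivedDate -lt $Cutoff) {
        [PSCustomObject]@{
            Group            = $_.DisplayName
            LastConversation = $Stats.NewestItemReceivedDate
        }
    }
}
```

A fuller check would also look at SharePoint and Teams activity, but even this rough cut tells you more than an expiration date does.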

Some Office 365 Features need PowerShell

The group expiration policy has a GUI (in the Azure portal) to work with its settings, but many Office 365 features need admins to run some PowerShell commands to set things up. The Office 365 Groups policy is a good example. If you want to set up a naming policy or restrict group creation to a defined set of users, you need PowerShell.
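For example, here is a hedged sketch using the AzureAD Preview module; the prefix and the “Group Creators” group name are placeholders of mine, and if a Group.Unified settings object already exists in the tenant you would update it with Set-AzureADDirectorySetting instead of creating a new one:

```powershell
# Create the tenant-wide Office 365 Groups policy: apply a naming prefix and
# restrict group creation to members of a designated security group.
$Template = Get-AzureADDirectorySettingTemplate | Where-Object DisplayName -eq "Group.Unified"
$Settings = $Template.CreateDirectorySetting()
$Settings["PrefixSuffixNamingRequirement"] = "GRP-[GroupName]"
$Settings["EnableGroupCreation"]           = "False"
$Settings["GroupCreationAllowedGroupId"]   = (Get-AzureADGroup -SearchString "Group Creators").ObjectId
New-AzureADDirectorySetting -DirectorySetting $Settings
```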

PowerShell Helps You Understand Office 365 Better

Understanding how a technology works is a great way to master it. For instance, running the Get-MailboxStatistics cmdlet against a group mailbox reveals its contents. You might or might not be interested in this information, but it is surprising how often detail like this has proven invaluable.
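A quick, hedged example (the group name is a placeholder; assumes an Exchange Online session):

```powershell
# Peek at what a group mailbox holds: item count, size, and last logon.
Get-Mailbox -GroupMailbox -Identity "Project Falcon" |
    Get-MailboxStatistics |
    Format-List DisplayName, ItemCount, TotalItemSize, LastLogonTime
```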

PowerShell Is Not Hard

I am not a programmer now. I used to be, with VAX COBOL and VAX BASIC, in the last millennium, but I can cheerfully hack away with PowerShell and get stuff done. Anyone can too. It’s not hard and a ton of useful examples and advice exists on the web (here’s a good start). Of course, you should never download and run a script in your production environment without carefully examining (and understanding) the code first, but that does not take away from the point that you are not alone.

Sponsored

PowerShell is Fun

Perhaps oddly, PowerShell can be fun too. A sense of achievement comes when a recalcitrant script finally works to make Office 365 give up some secrets or some piece of data becomes more understandable. Although Microsoft might create a perfect nirvana of administration within Office 365, tenant admins need some competence with PowerShell for the foreseeable future. The sooner you start, the better you’ll be.

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros”, the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle.

The post Why PowerShell is a Core Skill for Office 365 Administrators appeared first on Petri.

Polish bank begins using a blockchain-based document management system

The content below is taken from the original ( Polish bank begins using a blockchain-based document management system), to continue reading please visit the site. Remember to respect the Author & Copyright.

A blockchain company called Coinfirm has announced a partnership with PKO BP, a major Polish bank, to provide blockchain-based document verification using a tool called Trudatum. The project is an actual implementation of one of the primary benefits of blockchain-based tools, namely the ability to permanently and immutably store data. This announcement brings blockchain implementations out of the realm of proof-of-concept and into the real world.

“Every document recorded in the blockchain (e.g. proof of a transaction, or bank’s terms and conditions for a given product) will be issued in the form of irreversible abbreviation or hash signed with the bank’s private key. This will allow a client to verify remotely if the files he received from a business partner or from the bank are true, or if a modification of the document was attempted,” wrote the Coinfirm team.

Coinfirm founders Paweł Kuskowski, Pawel Aleksander, and Maciej Ziółkowski have experience in cryptocurrency and banking, and they bootstrapped the company over the past two years. They also run a blockchain-based AML/KYC platform for investments that is reaching the break-even point. They entered the world of blockchain after becoming frustrated with banking, but the industry sucked them back in.

“Together with Pawel Aleksander we decided to leave the banking world as we saw that the AML process in the financial industry is broken – it’s very arbitrary, takes thousands of people, and has a very low efficiency,” said Kuskowski. “Our early observation of the digital currency space and its challenges showed a huge need for AML solutions. Also because of the nature of the ledgers we could create a data driven machine-learning based software as opposed to the people-based process prone to human error and subjectivity that is the standard for the banking industry. Once we understood the blockchain technology better we continued to launch new products that are using it to solve compliance challenges – starting with the Coinfirm AML/KYC Platform, and then Trudatum.”

The Trudatum tool essentially allows PKO BP to create “durable media” – “a digital solution for storing all agreements with clients that is now required by the law.”

“Every document recorded in the blockchain (e.g. proof of a transaction or bank’s terms and conditions for a given product) will be issued in the form of irreversible abbreviation („hash”) signed with the bank’s private key. This will allow a client to verify remotely if the files he received from a business partner or from the bank are true or if a modification of the document was attempted,” said Kuskowski.

For their part, PKO BP is pleased with the pilot project, making it one of the first European banks to publicly admit that they’re using a blockchain tool for document management.

“Coinfirm is one of the startups that we discovered thanks to the ‘Let’s Fintech with PKO Bank Polski’ acceleration process,” said Adam Marciniak, a Vice President at PKO BP. “It already has considerable experience in blockchain technology acquired in several countries. Last year we started tests of the Trudatum platform developed by Coinfirm. As tests in the banking environment were highly satisfying, we decided to cooperate more closely. We believe that together we will be able to carry out a pioneering operation of implementing blockchain technology into the Polish banking sector.”

This electric 1959 Mini Cooper is everything that’s right in the world

The content below is taken from the original ( This electric 1959 Mini Cooper is everything that’s right in the world), to continue reading please visit the site. Remember to respect the Author & Copyright.

Take a break from the dumpster fire that is 2018. This electric Mini will make you smile.

Built as a show piece, the car features an electric powertrain in a restored 1959 Mini Cooper. Of course it’s red with a white stripe, and, of course, there are rally lights across the grille. This is how a Mini should look, and an electric powertrain should make it feel the part, too.

Minis are supposed to be oversized go-karts that go like mad with near-instant acceleration. And that’s the best part of electric vehicles: instant torque that produces insane acceleration.

Mini hasn’t revealed the range or capabilities of this show car. Its purpose is mostly to draw attention to Mini’s other electric vehicles and concepts. Mini has been producing electric vehicles since 2008 when it created a limited run of Mini E, which was used to make the BMW i3. More recently Mini announced the Mini Electric Concept and intends to put it on the market by 2019.

But forget about that new car that’s sure to be overloaded with screens, LEDs and silly things like airbags. None of that stuff will make people smile as much as a classic Mini Cooper.

How to customize Notifications and Action Center on Windows 10

The content below is taken from the original ( How to customize Notifications and Action Center on Windows 10), to continue reading please visit the site. Remember to respect the Author & Copyright.

We all use our PC for work, and getting distracted for any reason does break the concentration. Just like your phone, Windows 10 apps and the system send out notifications. They are there for a reason, but if they are […]

This post How to customize Notifications and Action Center on Windows 10 is from TheWindowsClub.com.

Build your own PC inside the PC you built with PC Building Simulator

The content below is taken from the original ( Build your own PC inside the PC you built with PC Building Simulator), to continue reading please visit the site. Remember to respect the Author & Copyright.

Considering we’ve got simulators for everything from driving a junker (x2) to moving into a neighborhood with a bunch of hot dads in it, I suppose it was only a matter of time until someone made a game where you assemble your own PC. It’s called PC Building Simulator, as you might guess, and it looks fabulous.

I’ve built all my PCs over the years, including my current one, which I really should have waited on, since the early Skylake mobos were apparently trash. I’m sure we can line up the screw holes better than that, MSI!

What was I talking about? Oh yes, the simulator. This is no joke game: it uses real, licensed parts from major manufacturers, which are (or will be) simulated down to their power draws, pins, draw counts and so on. So if you pick a power supply without enough molex connectors to handle your SLI rig and PCIe solid state system drive (or whatever), it won’t start. Or if you try to close the ultra-slim case with an 8-inch-tall heatsink on your overclocked CPU, it’ll just clank. (Some of these features are still in development.)

Add LEDs inside the case, replace the side panel with acrylic (no!), try out a few cooling solutions… the possibilities are endless. Especially since manufacturers like Corsair, AMD, and so on seem hot to add perfectly modeled virtual versions of their components to the selection.

There’s even a “game” aspect where you can start your own PC repair business — someone sends you a machine that won’t boot, or shuts down randomly, and you get to figure out why that is. Run a virus scan, reseat the RAM, all that. Damn, this sounds just like my actual life.

Seriously though, this is great — it might help more people get over the idea that building a PC is difficult. I mean, it is, but at least here you can go through the motions so it isn’t a total mystery when you give it a shot.

The best part is that this game is made by a teenager who put together the original as a lark (it’s free on itch.io) and attracted so much attention that it’s been blown up into a full-blown game. Well, an Early Access title, anyway.

SUSE bakes a Raspberry Pi-powered GNU/Linux Enterprise Server

The content below is taken from the original ( SUSE bakes a Raspberry Pi-powered GNU/Linux Enterprise Server), to continue reading please visit the site. Remember to respect the Author & Copyright.

Industry can have a slice of steaming supported stability … if it can afford to pay

SUSE Linux Enterprise Server 12 SP3 (SLES) has been released for the diminutive Raspberry Pi computer.…

Yes, You Can Use Your On-Premises Data with Office 365

The content below is taken from the original ( Yes, You Can Use Your On-Premises Data with Office 365), to continue reading please visit the site. Remember to respect the Author & Copyright.

Say what? What! <edited out the back and forth dialog in my head between Samuel L. Jackson and some poor kid in an inappropriate-for-work movie that is playing in my head>

 

 

For a lot of people, hybrid just means an Active Directory that works in both the cloud and on-premises. Then all of the other IT functionality is purely online or purely on-premises. Exchange is hosted in O365. They have SharePoint in both, but neither talks to the other. There are a whole host of other solutions, and calling them hybrid can be a stretch. I think of it more like they have two data centers that just happen to have the same username and password.

Well, it doesn’t have to be that way. One of the smartest pieces of technology Microsoft has created in years flies under the radar but it can get you out of this “here or there” mentality. The software is called the On-Premises Data Gateway.

The On-Premises Data Gateway

The Data Gateway (that is what we are going to call it to cut down on wordiness) allows you to connect your on-premises data to several tools in the Microsoft Cloud. PowerApps, Power BI, Microsoft Flow, Azure Logic Apps, Azure Analysis Services, and probably a few others all natively support the Data Gateway. This allows you to truly bridge the gap and build hybrid environments. It is also the answer to the question of how you deal with the fact that amazing tools like PowerApps, Power BI, and Flow will never come on-premises. No problem, you can bring on-premises to them.

Now you might be thinking, why is it you haven’t heard of this product before? Why can’t Shane tell us exactly what platforms it works with and where to get more info? Turns out, in my opinion, this is the only flaw with the Data Gateway. Its greatest feature is also its biggest weakness.

The Best Feature

You only need one Data Gateway installed on-premises and it works with all of the tools I listed above. What simple form of genius is this? I am shocked that this single tool was built and got all of these other teams to play nice with it. Better news? Installing it takes about 5 minutes and there is almost nothing to configure. So why is this amazing news also a weakness?

The reason you haven’t heard of the tool, and that the details are hard to come by, is that the Data Gateway doesn’t have a standalone website. All of the documentation for the product is hidden on the various websites of the products it works with. This can be good if you only care about it from one point of view, but I am a harlot when it comes to this tech. I want all-up documentation. Sad face.

If you want more info or to install the product, then you need to go to the various product sites to do so. The nice thing is this gives you product-specific context. Here are the sites that I know of:

Are you overwhelmed by that? I was. So many gateways, but remember: you only need one. So after you set up the Gateway for PowerApps, you are done. If next week you decide you want to also use on-premises data with Power BI, no problem. You do NOT have to install another gateway.

Speaking of installs. Because I love you, I have also gotten into this chaos. I have made two different videos. One is from the PowerApps point of view and one is from the Power BI angle. I found it makes a lot more sense to install, configure, and build something using the gateway if you do it product specific. Enjoy.

Licensing

There is no license required to install the Data Gateway. Instead, the various products are where the licensing, if any, is handled. For example, with PowerApps the Data Gateway is available to almost everyone. The only exceptions are users licensed as Office Business, Office Enterprise E1, and Office 365 Enterprise F1. Once again, the downside is that I can’t point you to one place where that is called out. So make sure that as you make plans to use this great tool, you dig into the licensing. This is especially true if you are in a mixed licensing environment. You will most likely be fine, but I want you to look before you leap. Random fact? Power BI even has a personal Gateway mode. So cool.

Security

I have bad news for your security and firewall teams. We don’t need them. Hooray. The very non-nerdy explanation for the way this works is the Data Gateway calls out to Azure Service Bus looking for work. Now if you have super outbound security and proxy servers (like we did in the late 90’s), then you may have to make some changes. But if I were you, I would not over think it, just install the Data Gateway and give it a try. If it cannot find the Azure Service Bus (the internet), it will let you know. Then you can try to configure around it.
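If you do want to sanity-check things from the gateway server first, here is a rough sketch; the Service Bus host name below is a placeholder regional endpoint, and the service name assumes a standard gateway install:

```powershell
# Confirm the gateway service is running, then test the outbound ports the
# gateway relies on (TCP 443 plus the Azure Service Bus ports 5671-5672 and
# 9350-9354, per Microsoft's documentation).
Get-Service -Name "PBIEgwService"
Test-NetConnection -ComputerName "login.microsoftonline.com" -Port 443
Test-NetConnection -ComputerName "westus.servicebus.windows.net" -Port 9350
```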

Sponsored

There Is So Much Fun to be Had Here. What Are You Waiting For?

Imagine a Power BI dashboard showing on-premise and online data in the same report. Imagine workflows that take SharePoint content from on-premises and publish it to the cloud. The sky is the limit. You just need to deploy one instance of the Data Gateway and then you can go crazy. Leave me comments below and tell me what doors this opens for you. I love success stories.

 

Shane

@ShanesCows

The post Yes, You Can Use Your On-Premises Data with Office 365 appeared first on Petri.

DJI will let developers fully customize its drones

The content below is taken from the original ( DJI will let developers fully customize its drones), to continue reading please visit the site. Remember to respect the Author & Copyright.

Drone company DJI is expanding its efforts in the commercial sector with a new thermal imaging camera and a payload software development kit (SDK) that will allow startups and developers to integrate custom gear onto DJI drones.

Now, you can automatically document your API with Cloud Endpoints

The content below is taken from the original ( Now, you can automatically document your API with Cloud Endpoints), to continue reading please visit the site. Remember to respect the Author & Copyright.

With Cloud Endpoints, our service for building, deploying and managing APIs on Google Cloud Platform (GCP), you get to focus on your API’s logic and design, and our team handles everything else. Today, we’re expanding “everything else” and announcing new developer portals where developers can learn how to interact with your API.

Developer portals are the first thing your users see when they try to use your API, and are an opportunity to answer many of their questions: How do I evaluate the API? How do I get working code that calls the API? And for you, the API developer, how do you keep this documentation up-to-date as your API develops and changes over time?

Much like with auth, rate-limiting and monitoring, we know you prefer to focus on your API rather than on documentation. We think it should be easy to stand up a developer portal that’s customized with your branding and content, and that requires minimal effort to keep its contents fresh.

Here’s an example of a developer portal for the Swagger Petstore (YAML):

The portal includes, from left to right, the list of methods and resources, any custom pages that the API developer has added, details of the individual API method and an interactive tool to try out the API live!

If you’re already using Cloud Endpoints, you can start creating developer portals immediately by signing up for this alpha. The portal will always be up-to-date; any specification you push with gcloud also gets pushed to the developer portal. From the portal, you can browse the documentation, try the APIs interactively alongside the docs, and share the portal with your team. You can point your custom domain at it, for which we provision an SSL certificate, and add your own pages for content such as tutorials and guides. And perhaps the nicest thing is that this portal works out of the box for both gRPC and OpenAPI—so your docs are always up-to-date, regardless of which flavor of APIs you use.
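For an OpenAPI spec, that push is the usual Endpoints deployment command (the file name below is a placeholder), so keeping the portal current costs nothing extra:

```powershell
# Deploying the spec updates the Endpoints service config and, per the
# announcement, the developer portal content as well.
gcloud endpoints services deploy openapi.yaml
```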

Please reach out to our team if you’re interested in testing out Cloud Endpoints developer portals. Your feedback will help us shape the product and prioritize new features over the coming months.

Google Home’s multi-room audio now works with Bluetooth speakers

The content below is taken from the original ( Google Home’s multi-room audio now works with Bluetooth speakers), to continue reading please visit the site. Remember to respect the Author & Copyright.

Google Home is getting a long-awaited feature: Bluetooth. Previously, only Google Cast-enabled speakers could be looped in to a network of Home-commanded devices. Now users can pair their speaker of choice with the dedicated Home app and voice comman…

Take charge of your sensitive data with the Cloud Data Loss Prevention (DLP) API

The content below is taken from the original ( Take charge of your sensitive data with the Cloud Data Loss Prevention (DLP) API), to continue reading please visit the site. Remember to respect the Author & Copyright.

This week, we announced the general availability of the Cloud Data Loss Prevention (DLP) API, a Google Cloud security service that helps you discover, classify and redact sensitive data at rest and in real-time.

When it comes to properly handling sensitive data, the first step is knowing where it exists in your data workloads. This not only helps enterprises more tightly secure their data, it’s a fundamental component of reducing risk in today’s regulatory environment, where the mismanagement of sensitive information can come with real costs.

The DLP API is a flexible and robust tool that helps identify sensitive data like credit card numbers, social security numbers, names and other forms of personally identifiable information (PII). Once you know where this data lives, the service gives you the option to de-identify that data using techniques like redaction, masking and tokenization. These features help protect sensitive data while allowing you to still use it for important business functions like running analytics and customer support operations. On top of that, the DLP API is designed to plug into virtually any workload—whether in the cloud or on-prem—so that you can easily stream in data and take advantage of our inspection and de-identification capabilities.

In light of data privacy regulations like GDPR, it’s important to have tools that can help you uncover and secure personal data. The DLP API is also built to work with your sensitive workloads and is supported by Google Cloud’s security and compliance standards. For example, it’s a covered product under our Cloud HIPAA Business Associate Agreement (BAA), which means you can use it alongside our healthcare solutions to help secure PII.

To illustrate how easy it is to plug DLP into your workloads, we’re introducing a new tutorial that uses the DLP API and Cloud Functions to help you automate the classification of data that’s uploaded to Cloud Storage. This function uses DLP findings to determine what action to take on sensitive files, such as moving them to a restricted bucket to help prevent accidental exposure.

In short, the DLP API is a useful tool for managing sensitive data—and you can take it for a spin today for up to 1 GB at no charge. Now, let’s take a deeper look at its capabilities and features.

Identify sensitive data with flexible predefined and custom detectors

Backed by a variety of techniques including machine learning, pattern matching, mathematical checksums and context analysis, the DLP API provides over 70 predefined detectors (or “infotypes”) for sensitive data like PII and GCP service account credentials.

You can also define your own custom types using:

  • Dictionaries — find new types or augment the predefined infotypes 
  • Regex patterns — find your own patterns and define a default likelihood score 
  • Detection rules — enhance your custom dictionaries and regex patterns with rules that can boost or reduce the likelihood score based on nearby context or indicator hotwords like “banking,” “taxpayer,” and “passport.”

Stream data from virtually anywhere

Are you building a customer support chat app and want to make sure you don’t inadvertently collect sensitive data? Do you manage data that’s on-prem or stored on another cloud provider? The DLP API “content” mode allows you to stream data from virtually anywhere. This is a useful feature for working with large batches to classify or dynamically de-identify data in real-time. With content mode, you can scan data before it’s stored or displayed, and control what data is streamed to where.
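As a hedged sketch of content mode, here is a direct call to the v2 REST endpoint from PowerShell; the project ID, sample text, and custom EMPLOYEE_ID detector are all placeholders of mine:

```powershell
# Stream a snippet of text to the DLP API and ask it to flag email addresses
# plus a custom regex-based "employee ID" pattern.
$Project = "my-project-id"
$Token   = gcloud auth application-default print-access-token

$Body = @{
    item          = @{ value = "Contact jane.doe@example.com, employee ID E-123456" }
    inspectConfig = @{
        infoTypes       = @(@{ name = "EMAIL_ADDRESS" })
        customInfoTypes = @(@{
            infoType   = @{ name = "EMPLOYEE_ID" }
            regex      = @{ pattern = "E-\d{6}" }
            likelihood = "LIKELY"
        })
        includeQuote    = $true
    }
} | ConvertTo-Json -Depth 10

Invoke-RestMethod -Method Post `
    -Uri "https://dlp.googleapis.com/v2/projects/$Project/content:inspect" `
    -Headers @{ Authorization = "Bearer $Token" } `
    -ContentType "application/json" `
    -Body $Body
```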

Native discovery for Google Cloud storage products

The DLP API has native support for data classification in Cloud Storage, Cloud Datastore, and BigQuery. Just point the API at your Cloud Storage bucket or BigQuery table, and we handle the rest. The API supports:

  • Periodic scans — trigger a scan job to run daily or weekly 
  • Notifications — launch jobs and receive Cloud Pub/Sub notifications when they finish; this is great for serverless workloads using Cloud Functions
  • Integration with Cloud Security Command Center (Alpha)
  • SQL data analysis — write the results of your DLP scan into the BigQuery dataset of your choice, then use the power of SQL to analyze your findings. You can build custom reports in Google Data Studio or export the data to your preferred data visualization or analysis system.

A summary report of DLP findings on recent scans
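A hedged sketch of creating such a daily scan trigger against the v2 REST API follows; all names are placeholders, and the exact field layout should be checked against the jobTriggers reference:

```powershell
# Create a daily job trigger that scans a Cloud Storage bucket for credit
# card numbers and writes the findings to a BigQuery table.
$Project = "my-project-id"
$Token   = gcloud auth application-default print-access-token

$Trigger = @{
    jobTrigger = @{
        displayName = "daily-gcs-scan"
        status      = "HEALTHY"
        triggers    = @(@{ schedule = @{ recurrencePeriodDuration = "86400s" } })
        inspectJob  = @{
            storageConfig = @{ cloudStorageOptions = @{ fileSet = @{ url = "gs://my-bucket/**" } } }
            inspectConfig = @{ infoTypes = @(@{ name = "CREDIT_CARD_NUMBER" }) }
            actions       = @(@{
                saveFindings = @{
                    outputConfig = @{
                        table = @{ projectId = $Project; datasetId = "dlp_results"; tableId = "findings" }
                    }
                }
            })
        }
    }
} | ConvertTo-Json -Depth 12

Invoke-RestMethod -Method Post `
    -Uri "https://dlp.googleapis.com/v2/projects/$Project/jobTriggers" `
    -Headers @{ Authorization = "Bearer $Token" } `
    -ContentType "application/json" `
    -Body $Trigger
```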

Redact data from free text and structured data at the same time

With the DLP API, you can stream unstructured free text, use our powerful classification engine to find different sensitive elements and then redact them according to your needs. You can also stream in tabular text and redact it based on the record types or column names. Or do both at the same time, while keeping integrity and consistency across your data. For example, you can take a social security number that’s classified in a comment field as well as in a structured column, and it generates the same token or hash.

Extend beyond redaction with a full suite of de-identification tools

From simple redaction to more advanced format-preserving tokenization, the DLP API offers a variety of techniques to help you redact sensitive elements from your data while preserving its utility.

Below are a few supported techniques:

  • Replacement — Replaces each input value with the infoType name or a user-customized value
  • Redaction — Redacts a value by removing it
  • Mask or partial mask — Masks a string either fully or partially by replacing a given number of characters with a specified fixed character
  • Pseudonymization with cryptographic hash — Replaces input values with a string generated using a given data encryption key
  • Pseudonymization with format-preserving token — Replaces an input value with a “token,” or surrogate value, of the same length using format-preserving encryption (FPE) with the FFX mode of operation
  • Bucket values — Masks input values by replacing them with “buckets,” or ranges within which the input value falls
  • Extract time data — Extracts or preserves a portion of dates or timestamps
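Putting one of these transformations to work is a small step up from the inspect sketch above; here is a hedged example that masks every character of any US Social Security number found in a string (project ID and token retrieval are placeholders again):

```powershell
# De-identify a snippet of text by replacing SSN characters with "#".
$Project = "my-project-id"
$Token   = gcloud auth application-default print-access-token

$Body = @{
    item             = @{ value = "SSN 372-29-4410 is on file for this customer" }
    inspectConfig    = @{ infoTypes = @(@{ name = "US_SOCIAL_SECURITY_NUMBER" }) }
    deidentifyConfig = @{
        infoTypeTransformations = @{
            transformations = @(@{
                infoTypes               = @(@{ name = "US_SOCIAL_SECURITY_NUMBER" })
                primitiveTransformation = @{ characterMaskConfig = @{ maskingCharacter = "#" } }
            })
        }
    }
} | ConvertTo-Json -Depth 12

Invoke-RestMethod -Method Post `
    -Uri "https://dlp.googleapis.com/v2/projects/$Project/content:deidentify" `
    -Headers @{ Authorization = "Bearer $Token" } `
    -ContentType "application/json" `
    -Body $Body
```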

The Cloud DLP API can also handle standard bitmap images such as JPEGs and PNGs. Using optical character recognition (OCR) technology, the DLP API analyzes the text in images to return findings or generate a new image with the sensitive findings blocked out.

Measure re-identification risk with k-anonymity and l-diversity

Not all sensitive data is immediately obvious like a social security number or credit card number. Sometimes you have data where only certain values or combinations of values identify an individual, for example, a field containing information about an employee’s job title doesn’t identify most employees. However, it does single out individuals with unique job titles like “CEO” where there’s only one employee with this title. Combined with other fields such as company, age or zip code, you may arrive at a single, identifiable individual.

To help you better understand these kinds of quasi-identifiers, the DLP API provides a set of statistical risk analysis metrics. For example, risk metrics such as k-anonymity can help identify these outlier groups and give you valuable insights into how you might want to further de-identify your data, perhaps by removing rows and bucketing fields.

Use k-anonymity to help find identifiable individuals in your datasets

Integrate the DLP API into your workloads across the cloud ecosystem

The DLP API is built to be flexible and scalable, and includes several features to help you integrate it into your workloads, wherever they may be.

  • DLP templates — Templates allow you to configure and persist how you inspect your data and define how you want to transform it. You can then simply reference the template in your API calls and workloads, allowing you to easily update templates without having to redeploy new API calls or code.
  • Triggers — Triggers allow you to set up jobs to scan your data on a periodic basis, for example, daily, weekly or monthly. 
  • Actions — When a large scan job is done, you can configure the DLP API to send a notification with Cloud Pub/Sub. This is a great way to build a robust system that plays well within a serverless, event-driven ecosystem.

The DLP API can also integrate with our new Cloud Security Command Center (Alpha), a security and data risk platform for Google Cloud Platform that helps enterprises gather data, identify threats, and act on them before they result in business damage or loss. Using the DLP API, you can find out which storage buckets contain sensitive and regulated data, help prevent unintended exposure, and ensure access is based on need-to-know. Click here to sign up for the Cloud Security Command Center (Alpha).

The DLP API integrates with Cloud Security Command Center to surface risks associated with sensitive data in GCP

Sensitive data is everywhere, but the DLP API can help make sure it doesn’t go anywhere it’s not supposed to. Watch this space for future blog posts that show you how to use the DLP API for specific use cases.

IBM, HPE tout new A.I.-oriented servers

The content below is taken from the original ( IBM, HPE tout new A.I.-oriented servers), to continue reading please visit the site. Remember to respect the Author & Copyright.

IBM and Hewlett Packard Enterprise this week introduced new servers optimized for artificial intelligence, and the two had one thing in common: Nvidia technology.

HPE this week announced Gen10 of its HPE Apollo 6500 platform, running Intel Skylake processors and up to eight Pascal or Volta Nvidia GPUs connected by NVLink, Nvidia’s high-speed interconnect.

A server fully loaded with V100s will get you 66 peak double-precision teraflops of performance, which HPE says is three times the performance of the previous generation.

The Apollo 6500 Gen10 platform is aimed at deep-learning workloads and traditional HPC use cases. The NVLink technology is up to 10 times faster than PCI Express Gen 3 interconnects.


battoexeconverter (3.0.10.0)

The content below is taken from the original ( battoexeconverter (3.0.10.0)), to continue reading please visit the site. Remember to respect the Author & Copyright.

Bat To Exe Converter can convert BAT (.bat) script files to the EXE (.exe) format.

This glass cabin in Iceland lets you sleep under the northern lights

The content below is taken from the original ( This glass cabin in Iceland lets you sleep under the northern lights), to continue reading please visit the site. Remember to respect the Author & Copyright.

Panorama Glass Lodge is a luxury vacation cabin in Hvalfjörðu, Iceland. Situated directly by the sea, it gives visitors stunning views of the Aurora Borealis above, reflected off the water below. The structure features an all-glass bedroom, allowing travelers to experience sleeping under one of the world’s most spectacular light shows.

Panorama Glass Lodge located in Hvalfjörðu, Iceland. Image: Panorama Glass Lodge.


This secluded cabin is the perfect viewing destination thanks to its distance from any light pollution.

Panorama Glass Lodge located in Hvalfjörðu, Iceland. Image: Panorama Glass Lodge.

If that wasn’t enough to satisfy, the cabin also includes a hot tub to view the spectacle above. 

Panorama Glass Lodge located in Hvalfjörðu, Iceland. Image: Panorama Glass Lodge.

Plan group trips in Skype with help from TripAdvisor and StubHub

The content below is taken from the original ( Plan group trips in Skype with help from TripAdvisor and StubHub), to continue reading please visit the site. Remember to respect the Author & Copyright.

Bringing TripAdvisor into a group chat is pretty easy — just tap the Add to Chat button and select TripAdvisor from the list of available plug-ins. You can choose a destination, then search for restaurants, hotels and activities in the area. Sharing…

Windows 10 on ARM: Everything you need to know about it

The content below is taken from the original ( Windows 10 on ARM: Everything you need to know about it), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft earlier brought the Windows operating system to ARM-based devices with Windows RT for 32-bit ARM processors. Windows RT was first announced at CES 2011 and was released as a mobile operating system along with Windows 8 in October 2012. It […]

This post Windows 10 on ARM: Everything you need to know about it is from TheWindowsClub.com.

[Sponsored] Overcoming Remote Desktop Challenges with Remote Desktop Manager

The content below is taken from the original ( [Sponsored] Overcoming Remote Desktop Challenges with Remote Desktop Manager), to continue reading please visit the site. Remember to respect the Author & Copyright.

In today’s corporate environment, IT administrators typically need to manage many different remote systems. These systems can be physical machines or VMs, and they often reside locally as well as in remote locations and in the cloud. For Windows IT administrators, Remote Desktop is the primary tool that the vast majority use every day for these remote management tasks. Remote Desktop enables you to start an interactive session with a remote system that has been configured to allow Remote Desktop access. It opens a window on your local system that contains the desktop of the remote system you connect to. Your mouse and keyboard actions are sent to the remote system, and the interactive session allows you to operate and troubleshoot it very much as if you were sitting at a local display. This kind of control and interactive display is essential when you’re trying to troubleshoot problems or configure systems remotely.
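To make that concrete, here is a small illustrative sketch (the host and user names are placeholders): a connection saved as an .rdp file and launched with mstsc.exe, which is exactly the kind of per-connection artifact that becomes hard to manage at scale.

```powershell
# Save a Remote Desktop connection definition and launch it full screen.
@"
full address:s:server01.contoso.com
username:s:CONTOSO\jsmith
screen mode id:i:2
"@ | Set-Content -Path "$env:USERPROFILE\Desktop\server01.rdp"

mstsc.exe "$env:USERPROFILE\Desktop\server01.rdp"
```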

In this post, you’ll learn about some of the challenges of using Remote Desktop to manage your enterprise servers and then see some of the best ways that you can address these issues. Many companies use Microsoft’s Remote Desktop Connection Manager for their remote Windows management requirements. However, Remote Desktop Connection Manager has several critical limitations in an enterprise desktop environment. You’ll see how you can address these limitations as well as how Devolutions Remote Desktop Manager provides an enterprise-ready feature set to address your remote management requirements.

Remote Desktop Management Challenges

While managing remote desktops is an essential daily task for most IT administrators, it also presents some difficult challenges. Let’s take a closer look at the three main remote desktop management challenges.

  • Managing multiple connections – One of the biggest challenges with Remote Desktop is managing and organizing multiple remote connections. Most administrators in medium and larger companies need to connect to dozens if not hundreds of remote systems, which can be very difficult to manage. Using RDP files enables you to save your connection settings and optionally your authentication information. This works great for a few systems, but it quickly gets very messy and potentially confusing when the number of remote connections grows into dozens or more. Attempting to manually manage connections can result in a lack of standardization, confusion, and potential errors.
  • Securing your remote connections – The next biggest remote desktop management challenge is properly securing the remote connections. Just like in a traditional desktop environment, passwords are your first line of defense in securing your corporate infrastructure. All accounts with access to Remote Desktop connections need to require strong passwords. To simplify management, some companies attempt to use the same passwords for multiple accounts – or worse, resort to yellow sticky notes – which can create a huge security exposure. You need to ensure that your remote management network connections, passwords, and credentials are all secure. In addition, when you’re dealing with multiple remote systems and access by many different IT personnel, you need a way of logging access to those systems for auditing and troubleshooting.
  • Connecting to Linux and other heterogeneous hosts – Another challenge with remote desktop management is connecting to heterogeneous host systems. Today, very few companies have only Windows systems to manage. Instead, most businesses run a mix of Windows, Linux, Mac, and other non-Windows systems. For most Windows administrators this means using multiple remote management tools. Remote Desktop is limited to the RDP protocol, which for the most part restricts its use to Windows systems. While some Linux distributions can be managed with RDP, most cannot. This often forces administrators to incorporate multiple tools like VNC, PuTTY, and Apple Remote Desktop alongside Windows Remote Desktop.
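As a quick illustration of how the .rdp file sprawl builds up, here is a minimal sketch that generates one basic .rdp file per server from a list. The server names and output folder are placeholders; the three settings written are standard .rdp file properties. Multiply this by dozens of hosts, several administrators, and regularly changing credentials, and the management problem described above becomes obvious.

    # Generate a basic .rdp file for each server in a list (names and paths are placeholders)
    $servers = 'SRV01', 'SRV02', 'SRV03'
    $outDir  = 'C:\RDPFiles'
    New-Item -ItemType Directory -Path $outDir -Force | Out-Null

    foreach ($server in $servers) {
        @(
            "full address:s:$server"      # target host
            'screen mode id:i:2'          # 2 = full screen
            'authentication level:i:2'    # warn on server authentication errors
        ) | Set-Content -Path (Join-Path $outDir "$server.rdp")
    }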

There are several different paths you can take to clear these remote desktop management hurdles. You can try to manually organize multiple .rdp files into separate folders with different permissions, but this becomes extremely cumbersome for large numbers of connections. Instead, many businesses opt to use Microsoft's Remote Desktop Connection Manager or a third-party remote desktop manager like Devolutions Remote Desktop Manager to manage their remote desktop connections more effectively. Let's take a closer look at using Microsoft's Remote Desktop Connection Manager and Devolutions Remote Desktop Manager to handle your remote connection requirements.

Microsoft Remote Desktop Connection Manager

One tool that many IT administrators use to help meet their remote desktop management needs is Microsoft's Remote Desktop Connection Manager (RDCMan). RDCMan is a free download that helps you manage multiple remote desktop connections by centralizing them under a single management console. RDCMan is supported on Windows 7, Windows 8, Windows 8.1, Windows 10, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, and Windows Server 2016. RDCMan is a basic tool whose main purpose is to help you organize remote connections under a single console. You can see an example of Microsoft's Remote Desktop Connection Manager in Figure 1.

Figure 1 – Microsoft Remote Desktop Connection Manager

As you can see in Figure 1, RDCMan allows you to create groups of the different remote systems you connect to. Each group is stored in a separate .rdg file, which can be exported and shared with other users. It's important to realize that these are all separate files, which can make implementing batch changes and sharing them with multiple users very cumbersome. RDCMan also offers the ability to encrypt your stored credentials using certificates. Remote connections can inherit settings from the group they are part of, or you can customize each remote session. By default, the remote display is rendered in the main frame of the RDCMan console, but you also have the option of undocking the remote session. RDCMan is primarily a Windows management tool: it can use RDP to connect to remote Windows sessions, and it can also connect to Hyper-V VM console sessions using VMConnect.
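Because every group lives in its own .rdg file, even producing a simple inventory of the servers you manage means parsing several XML files. The sketch below assumes the .rdg files follow the RDCMan 2.7 layout, where each server appears as a <server><properties><name> element; adjust the XPath if your schema version differs, and treat the folder path as a placeholder.

    # Inventory server names across multiple RDCMan .rdg files
    Get-ChildItem -Path 'C:\RDCMan' -Filter '*.rdg' | ForEach-Object {
        $file = $_.Name
        [xml]$rdg = Get-Content -Path $_.FullName -Raw
        # Assumes the 2.7 schema: servers stored as <server><properties><name> elements
        $rdg.SelectNodes('//server/properties/name') | ForEach-Object {
            [pscustomobject]@{ File = $file; Server = $_.InnerText }
        }
    }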

RDCMan is adequate for managing a small number of systems. Unfortunately, it has a number of significant limitations when used in medium-sized business, large business, and enterprise scenarios. Some of the main limitations of RDCMan include:

  • No support for Linux and Mac desktops – RDCMan is designed as a Windows-only management tool and doesn't support the full range of heterogeneous servers found in most businesses today.
  • It is not officially supported by Microsoft – One important thing that many administrators don't realize is that RDCMan is not an official Microsoft product; it is not supported by Microsoft, nor is it kept current. The last update for RDCMan was in 2014.
  • Manual credential entry – RDCMan requires you to manually enter the credential data for your remote sessions, which can be time-consuming and error-prone.
  • Remote Desktop only – RDCMan does not provide any additional networking tools or capabilities.

Devolutions Remote Desktop Manager

Most businesses have remote desktop management needs that go beyond the basic capabilities of Microsoft's free offering. Devolutions Remote Desktop Manager (RDM) provides the ability to manage multiple remote desktop connections, and it offers a far more extensive set of tools and enterprise-level management capabilities than Microsoft's RDCMan. Let's take a closer look at some of the main remote management capabilities and tools offered by RDM.

First, RDM is supported on almost all of today's popular Windows desktop and server platforms, including Windows Vista SP2, Windows 7 SP1, Windows 8, Windows 8.1, Windows 10, Windows Server 2008 SP2, Windows Server 2008 R2 SP1, Windows Server 2012, Windows Server 2012 R2, and Windows Server 2016. The latest version also requires the Microsoft .NET Framework 4.6. Both 32-bit and 64-bit versions of RDM are available.
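If you want to verify the .NET Framework prerequisite before installing RDM, the registry check below is one way to do it. The threshold of 393295 for the Release value corresponds to .NET Framework 4.6 according to Microsoft's published version table; treat it as an assumption to confirm for your particular Windows build.

    # Check whether .NET Framework 4.6 or later is installed (Release value >= 393295)
    $key = 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full'
    $release = (Get-ItemProperty -Path $key -Name Release -ErrorAction SilentlyContinue).Release
    if ($release -ge 393295) {
        "OK: .NET Framework 4.6 or later is present (Release $release)."
    } else {
        "Missing: install .NET Framework 4.6 before installing Remote Desktop Manager."
    }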

RDM provides a modern interface with a ribbon menu and a tab for each open remote session. You can see Devolutions Remote Desktop Manager in Figure 2.

Figure 2 – Devolutions Remote Desktop Manager

RDM is divided into a Navigation pane, which you can see on the left side of Figure 2, and the Content Area, which you can see on the right. The Navigation pane contains entries, which can be remote desktop sessions as shown in Figure 2, or a number of other entry types such as Credentials, Contacts, Documents, and Macro/Script/Tools. You can quickly create multiple entries by right-clicking an entry and selecting Duplicate Entry from the context menu. You can optionally change the default view in the Navigation pane from the tree view shown in Figure 2 to a tiled view or a details view. The Content Area is used to display the different remote desktop sessions as well as the output of the various embedded commands and tools that are part of RDM.

In Figure 2 you can see that the remote desktop session entries are grouped under a data source. Data sources define where entries are stored, and they can be shared between different users. You can organize all of your different remote session types under different data sources; for instance, you might have a separate data source for each remote location or business unit that you manage. RDM provides granular control over each connection, each group of connections, and each data source. By default, all open sessions appear in their own tabs in the Content Area. You can work with each data source and session by right-clicking it in the Navigation pane, and if you are using the tabbed display you can quickly switch between sessions by clicking the desired tab. RDM gives you the option of displaying your connections in the tabbed interface or undocking them like a standard Remote Desktop connection. For multiple-monitor support, RDM lets you create a container window that is separate from the main window and drag and drop open tabs onto it.

Group Management

Almost all organizations, from medium-sized businesses up through the enterprise, have multiple people using remote desktop connections, and these users are often spread across different locations or application management teams. To facilitate team access, RDM's session connection information can be stored in a number of different types of shared data sources. These data sources and their sessions can be shared by multiple team members. RDM supports the following shared data sources:

  • Amazon S3 – Can be shared in read-only mode. Basic support.
  • Devolutions Online Database – Basic support for micro teams (up to 3 users); the Professional and Enterprise editions support larger teams.
  • Devolutions Server – Shared. Supports all features, such as attachments, connection log, Offline Mode, and Security Management.
  • Dropbox – Can be shared in read-only mode.
  • FTP – Uses an XML file that can be shared in read-only mode.
  • Google Drive – Shared. Supports all features, such as attachments, connection log, Offline Mode, and Security Management.
  • MariaDB – Supports all features, such as attachments, connection log, Offline Mode, and Security Management.
  • Microsoft Access – Shared, but not recommended because Microsoft doesn't support it in the newest versions of Windows.
  • Microsoft SQL Azure – Shared. Supports all features, such as attachments, connection log, Offline Mode, and Security Management.
  • Microsoft SQL Server – The recommended data source for multiple users. Shared. Supports all features, such as attachments, connection log, Offline Mode, and Security Management.
  • MySQL – Shared. Supports all features, such as attachments, connection log, Offline Mode, and Security Management.
  • SFTP – Can be shared in read-only mode. Basic support.
  • SQLite – Shared. Supports all features, such as attachments, connection log, Offline Mode, and Security Management.
  • WebDav – Can be shared in read-only mode. Basic support.

Data sources stored in the cloud are typically backed up automatically by the cloud provider. To protect sensitive data in your data sources, you can lock the data source configuration before you deploy it. Offline mode allows you to connect to a local copy of the data source when the live database is unavailable; it can be used when a user is working from a disconnected network or when there is any kind of connectivity issue reaching the data source. RDM's batch edit feature enables you to change the settings of multiple sessions in one operation. For instance, many companies have a 90-day password change cycle, which can become a problem when you need to regularly change the passwords for multiple connections. RDM also allows you to associate keywords/tags with your entries, making it easier to search for related entries.
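To give a feel for how a batch edit can be scripted rather than clicked through, here is a rough sketch that uses RDM's PowerShell module (covered later in this article) to stamp every session in one folder after a password rotation. The cmdlet names Get-RDMSession and Set-RDMSession, the Group and Description property names, and the folder path are all assumptions based on recent versions of the module; confirm them with Get-Command and Get-Member against your installation.

    # Assumes the RDM PowerShell module described later in this article has been imported
    Get-RDMSession | Where-Object { $_.Group -eq 'Production\Web Servers' } | ForEach-Object {
        $_.Description = 'Credentials rotated on ' + (Get-Date -Format 'yyyy-MM-dd')
        Set-RDMSession $_    # save the modified entry back to the shared data source
    }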

Multiple Remote Host and Connection Types

Today's IT infrastructures are typically anything but homogeneous. In addition to managing Windows servers, most businesses also need to manage Linux servers and sometimes Mac systems. Plus, administrators often need to connect directly to Hyper-V or VMware VMs as well as use other services like FTP or VPNs. Microsoft's RDCMan is essentially limited to RDP and cannot connect to a good portion of the systems that today's IT administrators need to manage. The remote connectivity capabilities provided by Devolutions RDM address the full range of connectivity required by today's businesses. In addition to Windows and RDP, RDM supports multiple remote protocols, including VNC for Linux connectivity, Apple Remote Desktop, and Citrix ICA, as well as Hyper-V and VMware VM consoles and other remote control/management products like HP Integrated Lights-Out (iLO) and LogMeIn. RDM enables you to consolidate your remote management, using a single tool to connect to Windows, Linux, and other heterogeneous remote systems. You can see the variety of RDM's supported remote connections in Figure 3.

Figure 3 – Remote Desktop Manager's supported remote connections

To create a new remote session, select the desired session type and RDM will prompt you for that session's specific configuration properties. As you can see in Figure 3, RDM's wide array of supported remote sessions enables you to address the full range of remote management needs of the enterprise. Some of the most commonly needed remote session types that RDM supports include:

  • Microsoft Remote Desktop (RDP) – For connections to Windows systems
  • VNC – For connections to Linux systems
  • Apple Remote Desktop – For connections to Apple systems
  • Telnet – For connections to various Windows and Linux Telnet hosts
  • FTP, SFTP, SCP & WinSCP – For connections to FTP hosts

Enterprise-level Security

Properly securing your remote desktop connections is essential because of the far-reaching access and administrative capabilities that they provide. RDM provides a number of enterprise-level security features that enable you to secure access to your remote sessions. Passwords are the first level of any security strategy, and RDM provides a number of capabilities to help you manage remote session passwords. RDM provides centralized remote password management as well as password generation and enforcement of password policies. Centralizing all passwords and enterprise data in one secure location both helps administrators quickly access the information they need and keeps that information protected. RDM is able to enforce all of the essential password policies for remote sessions, including:

  • Password history – Determines when an old password can be reused
  • Password age – Determines when a user must change their password
  • Minimum password length – Determines the minimum number of characters required for a password
  • Complexity requirements – Ensures that the password can’t contain the user name and that it must use at least three of the four possible character types: lowercase letters, uppercase letters, numbers, and symbols.

Another important security feature that RDM provides is the built-in password analyzer. When you supply passwords for your remote sessions, RDM's password analyzer automatically evaluates them and tells you whether they are strong or weak. RDM is also able to automatically generate strong, secure passwords, and enabling the Password Audit policy allows you to track all password changes.
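To make the complexity rules above concrete, here is a small, generic sketch of the kind of test a password analyzer performs. It is not RDM's actual algorithm, just an illustration of a minimum length, a user-name check, and the three-of-four character-class rule.

    # Generic password policy check (illustrative only, not RDM's analyzer)
    function Test-PasswordPolicy {
        param(
            [string]$Password,
            [string]$UserName,
            [int]$MinimumLength = 12
        )
        if ($Password.Length -lt $MinimumLength) { return $false }
        if ($UserName -and $Password -match [regex]::Escape($UserName)) { return $false }
        $classes = 0
        if ($Password -cmatch '[a-z]') { $classes++ }   # lowercase letters
        if ($Password -cmatch '[A-Z]') { $classes++ }   # uppercase letters
        if ($Password -match '\d')     { $classes++ }   # numbers
        if ($Password -match '[^\w]')  { $classes++ }   # symbols
        return ($classes -ge 3)                         # at least three of the four classes
    }

    Test-PasswordPolicy -Password 'Example#Passw0rd' -UserName 'jsmith'   # returns True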

To handle remote access security for users with different job responsibilities and remote access requirements, RDM provides a role-based security system that enables flexible, granular protection. For instance, you might want to create different roles and security settings for your administrators, help desk personnel, or consultants. RDM's role-based security supports inheritance: child items and folders are automatically covered by the parent folder's security settings, and the permissions for a given subfolder or item can be overridden to replace what would otherwise be inherited from the parent.
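Conceptually, resolving an entry's effective permission is just a walk up the folder tree until an explicit setting is found. The sketch below illustrates that resolution order with a plain hash table; it is an illustration of the inheritance model, not RDM's implementation, and the folder names are made up.

    # Conceptual illustration of permission inheritance (not RDM's implementation)
    $permissions = @{
        'Production'             = 'Administrators'
        'Production\Web Servers' = 'WebTeam'        # overrides the parent folder's setting
    }

    function Resolve-EffectivePermission {
        param([string]$Path)
        $current = $Path
        while ($current) {
            if ($permissions.ContainsKey($current)) { return $permissions[$current] }
            $current = Split-Path -Path $current -Parent   # walk up one folder level
        }
        return 'Inherited from the data source defaults'
    }

    Resolve-EffectivePermission 'Production\Web Servers\SRV01'   # WebTeam
    Resolve-EffectivePermission 'Production\SQL01'               # Administrators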

RDM also provides several other important remote access security features. First, it has a check-in and check-out feature that enables an administrator to lock down access to a remote session. For instance, if you were performing a long-running maintenance routine and didn't want to allow any other access to the system, you could check out the session and other users couldn't access it until it was checked back in. You can also restrict access to remote sessions based on time; for instance, you might only allow access to some remote sessions during business hours. RDM also supports two-factor authentication, which provides unambiguous identification. This feature is only available for the following data sources: SQLite, Devolutions Online Database, Devolutions Server, MariaDB, Microsoft Access, SQL Azure, SQL Server, and MySQL.

Logging is another important security feature that RDM provides. RDM logs usage for all of your remote sessions and actions. The logs record when sessions are opened and closed, along with the duration of each session. They also record when entries are viewed or changed, as well as who performed the action.

Remote Management Tools

Effective remote management requires more than just an interactive login to the remote system. In many cases you need to troubleshoot the network connectivity, check the configuration of a remote server or perform a variety of other management and troubleshooting tasks. In addition to remote desktop management, RDM provides a number of handy network management tools that you can use to manage your remote systems. You can see the collection of remote system management tools provided by RDM in Figure 4.

Figure 4- Remote Desktop Manager’s remote management toolset

As you can see in Figure 4, the tools provided by RDM's Tools Dashboard include the ability to open Computer Management against the remote system, collect inventory information, send Wake-on-LAN packets, run ping, continuous ping, trace route, and netstat, as well as list the open sessions on the remote system. Running each of these tools displays the results in a new tab in RDM's Content Area.
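Most of these diagnostics map to commands Windows administrators already know; RDM simply runs them against the selected entry and shows the output in a tab. The rough equivalents below use built-in Windows commands and cmdlets, with a placeholder host name.

    # Built-in equivalents of the diagnostics RDM runs against a selected host
    $target = 'SRV01.contoso.local'                        # placeholder host name
    Test-Connection -ComputerName $target -Count 4         # ping
    ping -t $target                                        # continuous ping (Ctrl+C to stop)
    tracert $target                                        # trace route
    netstat -an                                            # open connections (run on the host itself)
    Test-NetConnection -ComputerName $target -Port 3389    # confirm the RDP port is reachable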

PowerShell Scripting

RDM also supports Windows PowerShell scripting, which enables administrators to automate RDM management. RDM supplies a PowerShell module called RemoteDesktopManager.PowerShellModule.dll, which is located in the Remote Desktop Manager installation directory. You can use the Import-Module cmdlet to load the module into your PowerShell session. The RDM PowerShell module can be used to automate a wide variety of tasks, including the following (a short, hedged example follows the list):

  • Connecting to data sources
  • Creating databases
  • Loading configurations files
  • Assigning credentials to entries
  • Retrieving session properties
  • Changing group folder and session properties
  • Setting custom roles
  • Importing and exporting CSVs
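A minimal sketch of loading the module and working with sessions is shown below. The installation path shown is a typical default, and the Get-RDMSession, New-RDMSession, and Set-RDMSession cmdlet names are assumptions based on recent versions of the module (the article itself only names the DLL), so list the actual cmdlets with Get-Command after importing.

    # Load the RDM PowerShell module from the installation directory (path is a typical default)
    Import-Module 'C:\Program Files (x86)\Devolutions\Remote Desktop Manager\RemoteDesktopManager.PowerShellModule.dll'

    # See which cmdlets your version of the module actually exposes
    Get-Command -Module RemoteDesktopManager.PowerShellModule

    # Example usage, assuming these cmdlets exist in your version
    Get-RDMSession | Select-Object Name, Group, ConnectionType

    $session = New-RDMSession -Name 'SRV02' -Type 'RDPConfigured' -Host 'SRV02.contoso.local'
    Set-RDMSession $session    # save the new entry to the current data source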

Enabling Enterprise-Level Remote Desktop Management

Remote desktop management is one of the most important responsibilities of today's IT administrators. RDM from Devolutions goes far beyond the basic Windows connectivity offered by Microsoft's RDCMan, bringing enterprise-grade features such as connectivity to all the popular server platforms, group management, security, and scripting to remote management. RDM lets you centralize all of your remote connections, credentials, and tools in a single remote management platform that can be securely shared by your administrators and other remote desktop users.

 

The post [Sponsored] Overcoming Remote Desktop Challenges with Remote Desktop Manager appeared first on Petri.