Never-Before-Seen Street Fighter II Combos Discovered After 26 years

The content below is taken from the original (Never-Before-Seen Street Fighter II Combos Discovered After 26 years), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft Shares Interesting Secure Azure Network Design

The content below is taken from the original (Microsoft Shares Interesting Secure Azure Network Design), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft recently shared a detailed design for a secure network (or DMZ) deployment in Azure, based on the United Kingdom’s cloud security principles. The focus of Microsoft’s article was on the UK, but we can use this design as a basis for other deployments.

 

 

UK Cloud Security Principles

The guidelines that steered the design Microsoft has shared come from the UK government; these principles are intended to help potential cloud customers find and deploy a secure service in the cloud.

Microsoft’s design is intended to meet the requirements of the Cloud Security Principles, but anyone who wants a secure network in Azure should consider it, no matter which country they are in. Even better, Microsoft has shared the JSON templates for deploying this design in your own Azure subscription!

The Microsoft Azure network design for UK Cloud Security Principles [Image Credit: Microsoft]

Understanding the Design

The design is quite comprehensive and brings together elements that Microsoft has previously shared across many disparate documents; this is why I like this “UK” network design.

Let’s start with management; a “jump box” is used for remote management. A virtual machine is deployed into a separate management virtual network (VNet), which is peered with the operational VNet where the service runs. Remote Desktop does not have to be opened up via NAT to each of the application/service virtual machines. Instead, when an administrator needs to log into one of the service virtual machines, they remote into the jump box and then remote, via the peered connection, to the required virtual machine in the operational VNet.

The service running in the operational VNet is made available via two possible means (either or both can be deployed). A gateway is deployed in the operational VNet in a dedicated gateway subnet; this provides the Azure customer with private access to the service via either a VPN connection or an ExpressRoute (WAN) connection.

A web application gateway provides layer 7 load-balanced access to the web servers, with optional (still in preview) web application firewall functionality. An option that one might consider, to either supplement or replace the web application gateway, is to do the following:

  1. Create another subnet in the operational VNet
  2. Deploy load-balanced (via the Azure load balancer) third-party network virtual appliances for load balancing and/or security at layer 4 and layer 7 (a sketch of the first step follows this list)
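As a rough illustration of step 1, the extra subnet can be added to an existing VNet with the AzureRM PowerShell module. The VNet, resource group, and address prefix below are hypothetical placeholders, so treat this as a sketch rather than part of Microsoft’s published templates:

# Illustrative names and address space only; adjust to your own deployment
$vnet = Get-AzureRmVirtualNetwork -Name "OperationalVNet" -ResourceGroupName "Secure-UK-RG"
Add-AzureRmVirtualNetworkSubnetConfig -Name "NvaSubnet" -VirtualNetwork $vnet -AddressPrefix "10.0.5.0/24"
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet

The third-party appliances would then be deployed into this subnet behind an internal Azure load balancer.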

Every tier of the service is placed into its own subnet. An availability set ensures anti-affinity for the virtual machines, and a network security group enforces layer 4 security rules. Each network security group, one per subnet, allows only the minimum required protocols and ports. For example, the Web subnet allows TCP 80/443 from outside the VNet and RDP from the jump box subnet, and nothing more. The Data subnet allows database traffic from the Biz subnet and RDP from the jump box subnet, and nothing more. The ADDS subnet allows traffic in from subnets containing domain members and RDP from the jump box subnet, and nothing more.
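To make the rule model concrete, here is a hedged sketch of how the Web subnet’s inbound rules might be expressed with the AzureRM PowerShell cmdlets. The rule names, priorities, and address prefixes are illustrative assumptions, not values taken from Microsoft’s templates:

# Allow HTTPS from outside the VNet (repeat with port 80 if needed) and RDP from the jump box subnet only
$https = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-HTTPS-In" -Direction Inbound -Priority 100 `
    -Access Allow -Protocol Tcp -SourceAddressPrefix Internet -SourcePortRange * `
    -DestinationAddressPrefix "10.0.1.0/24" -DestinationPortRange 443
$rdp = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-RDP-From-JumpBox" -Direction Inbound -Priority 110 `
    -Access Allow -Protocol Tcp -SourceAddressPrefix "10.0.0.0/27" -SourcePortRange * `
    -DestinationAddressPrefix "10.0.1.0/24" -DestinationPortRange 3389
# Create the NSG; it still needs to be associated with the Web subnet afterwards
New-AzureRmNetworkSecurityGroup -Name "Web-NSG" -ResourceGroupName "Secure-UK-RG" -Location "UK South" -SecurityRules $https,$rdp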

A Web subnet contains two or more load-balanced web servers. A Biz subnet contains application servers, and a Data subnet contains database servers. Note that this separation of service tiers into different subnets uses Microsoft’s best practices for designing subnets and network security groups; each domain of security should have its own subnet, and each subnet should have its very own network security group.

Many Microsoft-based applications require an Active Directory domain. In some scenarios, where single sign-on is just the requirement, using Azure AD Connect or ADFS in conjunction with Azure AD Domain Services might suffice. But if you need to integrate on-premises and in-Azure machines and services, then you need to extend your Active Directory into Azure by using Azure virtual machines as domain controllers. In the above design, domain controllers are deployed into another subnet in the Operations VNet.

If you like this design, no matter where you are, then you can deploy it easily enough. Microsoft has shared it on GitHub as three JSON templates, which you can download and customize or deploy straight into your Azure subscription.
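If you would rather script the deployment than click through the portal, a template of this kind can typically be deployed with the AzureRM cmdlets. The resource group name, location, and file names below are placeholders for whichever of the three templates you download:

[PS] C:\> Login-AzureRmAccount
[PS] C:\> New-AzureRmResourceGroup -Name "Secure-UK-RG" -Location "UK South"
[PS] C:\> New-AzureRmResourceGroupDeployment -ResourceGroupName "Secure-UK-RG" -TemplateFile .\azuredeploy.json -TemplateParameterFile .\azuredeploy.parameters.json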


More Azure Architectures

If you like the idea of what Microsoft has done with the UK Cloud Principles, then you should check out the Azure Reference Architectures. Here you will find a number of virtual machine and VNet designs, accompanied by the JSON templates that you can deploy.

The post Microsoft Shares Interesting Secure Azure Network Design appeared first on Petri.

Microsoft Matter Center – Office 365 Legal Industry Case Management Add-in

The content below is taken from the original (Microsoft Matter Center – Office 365 Legal Industry Case Management Add-in), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft Matter Center is a free add-in for Office 365 that supports the legal industry in case and content management by leveraging the core capabilities of Office 365 and Microsoft Azure. Microsoft developed this solution back in 2015 and has been updating it to meet the ongoing case management needs of law firms, building on core functionality for secure and protected content storage, search, and collaboration.

Key functionality in Matter Center:

  • Creation of Matters: From a case management perspective, users of Office 365 can create cases that include a case description, case conflict verification, inclusion of the legal team (internal and external), and the upload and management of documents and emails relative to the case (Office 365 email, OneDrive, search, and SharePoint Online)
  • Ongoing Information Tracking: Matter Center provides a centralized information tracking mechanism (OneNote) for all individuals working on the case to record conversations and share meeting and conversation notes with all members of the team
  • Shared Calendars and Conversations: Leveraging group calendars and Groups, Matter Center provides a centralized way to view and manage important filing dates, response timelines, individual and group meetings, deposition schedules, and tracked conversations
  • Security and Encryption: Content is protected using role-based security to allow and block access between cases; matter materials are actively prevented from being accessed by individuals flagged with a case conflict; content is encrypted both in transit and at rest; and content modifications and changes are logged and tracked for ongoing review
  • Direct Integration with Outlook: Matter Center uses Microsoft’s Outlook add-in model to integrate directly into Outlook, so that participants can access matter content (documents, past conversation documentation) and save content directly into specific matter repositories without having to “toggle” between multiple applications

Video Demonstration of Matter Center

The best way to experience Matter Center is to actually see it, so I’ve uploaded a video of Matter Center showing a “day in the life” usage scenario.

As you can see in the video, the simplicity of Matter Center is that it leverages native features of Office 365; organizations with the Office 365 E3 license already own all of the core functionality (Outlook email, SharePoint document libraries, SharePoint team site landing pages, Rights Management content encryption, and integrated search).

Key Benefit of Matter Center

The key value of Matter Center is its integration of all of these components, with embedded access right within Outlook, so legal teams can access and save messages and documents directly from their email interface without having to constantly toggle between applications. Members of the legal team can also go to a landing page from any browser and search for cases and documents while in the office or remote, with selected (or all) content encrypted and access governed by per-user rights protection.

Installation and Implementation of Matter Center

Matter Center is provided at no charge to organizations that already have and use Office 365 and Microsoft Azure. The installation code can be downloaded from GitHub, with a step-by-step implementation guide available. However, I’d highly recommend that before you install Matter Center you are fairly familiar with Office 365, SharePoint administration, PowerShell, Visual Studio application build and publishing, and both the classic and new Microsoft Azure portals.

The documentation assumes familiarity with all of these tools; if you lack it, you can quickly become overwhelmed by the implementation process. The GitHub “issues” tab is well moderated, with the most common questions answered, but consider leveraging the support of someone who has already implemented Matter Center to smooth out the implementation experience. Also, it is highly recommended that your first implementation of Matter Center be installed in an Office 365 test/trial tenant against an Azure test/trial subscription. That will allow you to fumble through the implementation without polluting a real production environment on your first go-around with the solution.

If you have experience implementing Matter Center, or are working with someone who has that expertise, then by all means implement it in your live production environment; the best way to experience the solution is to have it integrated into the real email, file sharing, and collaboration settings used on a regular basis.

For those implementing Matter Center, here’s a video where I walk through some of the various components you’ll be working with as part of the implementation and integration of Matter Center in Office 365 and in Azure (note: this is not a step-by-step implementation video, just a “fly by” of the tools and components that are part of the Matter Center environment).

And as an extension to the documentation that Microsoft provides for implementation, I’ve compiled a download that notes some of the “additional things” I would supplement their document with: problems and issues you’ll likely encounter during the implementation and need to overcome to get the solution fully operational.

Summary

Matter Center is a good example of how a simple interface can consolidate several tools built into Office 365 and Microsoft Azure to solve a specific business use case, matter management, for an industry like the legal industry. Users can go to one place to store and access documents that are securely encrypted in organized content libraries, and they don’t even need to “go somewhere else,” as Matter Center’s Outlook add-in brings the content straight into the Outlook screen.


What Is Azure AD Privileged Identity Management?

The content below is taken from the original (What Is Azure AD Privileged Identity Management?), to continue reading please visit the site. Remember to respect the Author & Copyright.

In today’s Ask the Admin, I’ll look at Azure Active Directory (AAD) Privileged Identity Management (PIM) and how it can help protect user identities in the cloud.

 

 

Privileged Identity Management is available to AAD Premium P2 subscribers and allows organizations to better control what users are doing with privileged accounts. Just like in an on-premises Active Directory (AD) environment, the use of privileged domain accounts, such as Domain Admins and Enterprise Admins, should be kept to a minimum. To help facilitate that, Windows Server 2016 includes a new feature called Just-In-Time (JIT) administration, which allows users to be granted privileges on a temporary, time-limited basis.

In AAD, Just-In-Time administration allows administrative privileges to be granted ‘on-demand’ to the directory and to online services, such as Office 365 and Intune. Much of what Microsoft added to Windows Server 2016 resulted from features that first appeared in Azure, so it should come as no surprise that JIT administration is also part of AAD. PIM also allows administrators to:

  • See which AAD users are tenant administrators (a quick PowerShell check follows this list).
  • Run reports detailing changes and access attempts made by administrators.
  • Set up alerts for access to privileged roles.
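Outside of the PIM blade itself, a quick way to enumerate who currently holds the Global Administrator role is the MSOnline PowerShell module. This is not a PIM feature, just a hedged, minimal check using cmdlets from that module:

[PS] C:\> Connect-MsolService
[PS] C:\> $Role = Get-MsolRole -RoleName "Company Administrator"   # internal name for Global Administrator
[PS] C:\> Get-MsolRoleMember -RoleObjectId $Role.ObjectId | Select-Object DisplayName, EmailAddress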

Eligibility and Activation

When PIM is enabled for a tenant, users that occasionally need privileged access can be assigned the role of Eligible admin; only when they complete ‘activation’ are their accounts granted elevated privileges, and then only for a set period. Users activate a role by logging in to the AAD management portal and starting the activation process on the Privileged Identity Management panel.

Each role supported by PIM accepts users as either Permanent or Eligible members of the role. For instance, in the image below, you can see that some users assigned the Global Administrator role have Permanent assignments, while others are Eligible.

Eligible and Permanent Azure Global Administrators (Image Credit: Microsoft)

Roles have activation settings that determine the maximum amount of time for activation, how admins are notified of the activation, whether users requesting activation should provide any additional information such as a request ticket ID, and whether multifactor authentication is required.


Role Activity

Administrators can view activity on the Audit history panel, where changes in privileged role assignment and activation history are recorded. Alternatively, access reviews can be configured, which requires an assigned person to review historical access data and determine who still requires privileged access.

Azure AD Audit history (Image Credit: Russell Smith)

In this article, I explained what AAD Privileged Identity Management is and how it can be used to improve the security of your AAD tenant.

The post What Is Azure AD Privileged Identity Management? appeared first on Petri.

Creating multi-tenant applications in Microsoft Azure

The content below is taken from the original (Creating multi-tenant applications in Microsoft Azure), to continue reading please visit the site. Remember to respect the Author & Copyright.

Multi-tenancy is one of the founding principles of cloud computing. To reach an economy of scale that allows every cloud user to scale as needed without paying for, or suffering from, overprovisioned resources, cloud infrastructure must be oversized for a single user and sized for a pool of potential users that share the same group of resources during a certain period of time. The cloud allows you to reserve resource instances for a tenant and to deploy a group of tenants on the same resources. This is a new way of handling app deployment. In this post, we will show you how to develop multi-tenant applications in Microsoft Azure.

What is a tenant?

A tenant is a private space for a user or a group of users inside an application. A typical way to identify a tenant is by domain name. If multiple users share a domain name, we say that those users live inside the same tenant; if a group uses a reserved domain name that is different from other users’, they live in a separate tenant. In short, different names identify different tenants. Different domain names can imply different app instances, but that tells us nothing about the deployed resources.

Multi-tenancy is a pattern. Legacy on-premises applications tend to be single-tenant apps shared between users, because without specific DevOps automation, provisioning an app for every user can be a costly operation.

Cloud environments allow you to reserve a single tenant for each user (or group of users) to enforce better security policies and to customize a tenant for a specific purpose, as all DevOps tasks can be automated via management APIs.

Creating multi-tenant applications in Microsoft Azure: Scenario

In our scenario, CloudMaker.xyz, a cloud-based development company, has decided to develop a personal accounting web application for individuals and small companies. In this case, the single customer represents the tenant; different companies use different tenants.

Each tenant needs its own private data to enforce data security, so we will reserve a dedicated database for each tenant. Access to a single database is not an intensive workload, as invoice registration will generally happen once a day. Each tenant will also have its own domain name to reinforce the identity of each entity.

A new tenant can be created from the company portal application, where new customers register themselves by specifying the tenant name. For the purpose of this example, we will use default ASP.NET MVC templates to style and build up apps and focus on tenant topics.

Creating the Tenant App

The tenant app is an invoice recording application. To brand the tenant, we record the tenant name in the app settings inside the web.config file:



For simplicity, we “brand” the application by showing the tenant name in the main layout file, where the application name is displayed:



Application content is represented by an invoices page where we record data with a CRUD process. The entry for the invoices is in the navigation bar:



First, we need to define a model for the application in the Models folder. Because we need to store data in an Azure SQL database, we can use Entity Framework to create the model with an empty Code First approach:



As we can see, data will be accessed from a SQL database referenced by a connection string in the web.config file:



The model class is just for demo purposes:



After compiling the project to verify that we have not made any mistakes, we can scaffold this model into Model-View-Controller (MVC) components to get a simple but working app skeleton:

Creating the Portal App

Now, we need to create the portal app, starting from the default MVC template. Its registration workflow is useful for our tenant registration; in particular, we will use user registration as tenant registration. We need to acquire the tenant name and trigger tenant deployment, which requires two changes to the UI.

First, in the RegisterViewModel class, defined in the AccountViewModels.cs file under the Models folder, we add a TenantName property:



In the Register.cshtml view page, under the Views\Account folder, we add an input box:



The portal application is also a good place for the tenant owner to manage its own tenant, configuring it or handling subscription-related tasks with the supplier company.

Deploying Portal Application

Before tenant deployment, we need to deploy the portal itself.

MyAccountant is a complex solution made up of multiple Azure services that must be deployed together. First, we will need to create an Azure Resource Group to collect all the services:

All of the data from different tenants, including the portal’s own data, needs to be contained in distinct Azure SQL databases; every user will have their own database. For a personal service that is used infrequently, assigning a reserved quantity of Database Transaction Units (DTUs) to each database can be a waste of money, so we instead invest in a pool of DTUs shared among all the SQL database instances.

We can start by creating an SQL server service from the portal:

We need to create a pool of database resources (DTU) shared among databases:

We need to configure the pricing tier that defines the maximum resources allocated per database:

The first database that we need to manually deploy is the portal database, where a user will register the tenant. From the MyAccountantPool blade, we can create a new database that will be immediately associated to the pool:
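The same server, elastic pool, and portal database setup can also be scripted. Here is a hedged AzureRM PowerShell sketch; the resource group, server, pool, and database names (and the pool size) are assumptions for illustration, not the exact values used above:

# Illustrative values only; adjust names, location, and DTUs to your own subscription
New-AzureRmResourceGroup -Name "MyAccountant" -Location "West Europe"
New-AzureRmSqlServer -ResourceGroupName "MyAccountant" -ServerName "myaccountant-sql" `
    -Location "West Europe" -SqlAdministratorCredentials (Get-Credential)
New-AzureRmSqlElasticPool -ResourceGroupName "MyAccountant" -ServerName "myaccountant-sql" `
    -ElasticPoolName "MyAccountantPool" -Edition "Standard" -Dtu 50
New-AzureRmSqlDatabase -ResourceGroupName "MyAccountant" -ServerName "myaccountant-sql" `
    -DatabaseName "PortalDb" -ElasticPoolName "MyAccountantPool"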

From the database blade, we can see the connection:

We will use that connection string to configure the portal app in web.config:



We also need to create the shared compute resource for the web. In this case, that means creating an App Service Plan that will host the portal and tenant apps. The initial size is not a problem: we can decide to scale the solution up or out at any time (out only when the application is able to scale out; we don’t handle that scenario here).

Next, we need to create the portal web app, associated with the App Service Plan we have just created:
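Both of these steps can likewise be scripted with the AzureRM module; a minimal sketch with placeholder names:

# Placeholder names; the plan tier shown is only an example
New-AzureRmAppServicePlan -ResourceGroupName "MyAccountant" -Name "MyAccountantPlan" `
    -Location "West Europe" -Tier "Standard"
New-AzureRmWebApp -ResourceGroupName "MyAccountant" -Name "myaccountant-portal" `
    -Location "West Europe" -AppServicePlan "MyAccountantPlan"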

The portal can be deployed from Visual Studio to the Azure subscription by right-clicking on the project root in the Solution Explorer and selecting Publish | Microsoft Azure Web App:

After deployment, the portal is up and running:

Deploy the Tenant App

After tenant registration from the portal, we need to deploy the tenant itself, which is made up of:

  • The app itself, which is the artifact to be deployed
  • A Web App that runs the app, hosted on the already defined App Service Plan
  • The Azure SQL database that contains the tenant’s data, inside the Elastic Pool
  • The connection string in the web.config file that connects the Web App to the database

This is a complex activity, as it involves many different resources and different kinds of tasks, from deployment to configuration. For this reason, Visual Studio offers the Azure Resource Group project type, where we can configure Web App deployment and configuration via Azure Resource Manager templates. The project will be called Tenant.Deploy, and we will choose a Blank template:

In the azuredeploy.json file, we can create a template like this:



The template is quite complex. Remember that for the SQL connection strings, the username and password should be provided inside the template.

We need to reference the Tenant.Web project from the deployment project, as we need to deploy the tenant artifacts (the project bits):

To support deployment, we go back to the Azure portal and create an Azure storage account:

To understand how it works, we can manually run a deployment directly from Visual Studio by right-clicking the deployment project from the Solution Explorer and selecting Deploy. An initial dialog will appear for deploying a “Sample” tenant:

Here, we connect to the Azure subscription, select an existing Resource Group or create a new one, and choose the template that describes the deployment composition. The template requires some parameters, supplied from the Edit Parameters window:

  • The tenant name
  • The artifact location and SAS token, which are added automatically once the Azure storage account is selected in the previous dialog

Now, via the included Deploy-AzureResourceGroup.ps1 PowerShell script, the Azure resources are deployed: the artifact is copied with the AzCopy.exe command to Azure storage, into the Tenant.Web container as a package.zip file, and Resource Manager can start allocating resources.
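Under the hood, the heart of Deploy-AzureResourceGroup.ps1 is a resource group deployment. A simplified, hedged sketch of the key call (the resource group name is a placeholder, and template parameters such as the tenant name surface as dynamic cmdlet parameters):

# Simplified sketch of what the Visual Studio deployment script ultimately runs
New-AzureRmResourceGroupDeployment -ResourceGroupName "Tenant-Sample" `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterFile .\azuredeploy.parameters.json `
    -Verbose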

We can see that the tenant is deployed:

Automate the tenant deployment process

To complete our solution, we need to invoke this deployment process from the portal application during the registration process call in ASP.NET MVC. For the purposes of this post, we will just invoke the execution without defining a production-quality deployment process.

We can make a checklist before proceeding:

  • We already have an Azure Resource Manager template that deploys the “customized” tenant for the user
  • Deployment is made with a PowerShell script in the Visual Studio deployment project
  • A new registered user for our application does not have an Azure account: As service publisher, we must offer a dedicated Azure account, with our credentials, to deploy the new tenants

Azure offers many different ways to interact with an Azure subscription:

For our requirements, this leads to some considerations for integrating our application:

    • We need to reuse the same ARM template we have already defined
    • We can reuse the PowerShell experience, but we could also use our experience as .NET, REST, or other platform developers
    • Authentication is the real discriminator in our solution: the user is not an Azure subscription user, and we don’t want to make this a constraint

Interacting with the Azure REST API, the API on which every other option depends, requires that all invocations be authenticated against the Azure Active Directory of the subscription tenant. As we have already mentioned, the user is not a subscription-authenticated user.

So, to perform the tenant deployment, we need unattended authentication to our Azure subscription using a dedicated user, encapsulated in a component that the ASP.NET MVC application executes in a secure manner.

The only environment that offers an out-of-the-box solution for our needs (that allows us to write less code) is the Azure Automation service.

Before proceeding, we create a dedicated user for this purpose so that, for security reasons, we can disable that specific user at any time. Please take note:

      • Never use the credentials you used to register the Azure subscription in a production environment!
      • For automation implementation, we need an Azure AD tenant user, so we cannot use Microsoft accounts (Live or Hotmail).

To create the user, we need to go to the classic portal, as Azure Active Directory has no equivalent management UI in the new portal. We need to select the tenant directory, which is the one visible in the upper right-hand corner of the new portal:

From the classic portal, go to the Azure Active Directory and select the tenant:

Press Add User and type a new username:

Next, we go to administrator management in the Settings tab of the portal, as we need to define the user as a co-administrator of the subscription that we will use for deployment.

Using the temporary password, we need to manually log in at http://bit.ly/25kOG2j (open the browser in private mode) with these credentials, because the generated password is marked as “expired” and must be changed.

We are now ready to proceed. Back in the new portal, we create a new Azure Automation account:

The first thing we need to do inside the account is create a credential asset to store the newly created AAD credentials, which the PowerShell scripts will use to log in to Azure:

We can now create a Runbook, an automation task that can be expressed in several different ways:

We will choose the second option:

Because we can edit it directly in the portal, we can write a PowerShell script for our purposes. It is an adaptation of the one we used in the standard way in the deployment project inside Visual Studio. The difference is that it runs inside a Runbook in Azure, and it uses artifacts that are already in the Azure storage account we created earlier.

Before proceeding, we need two IDs from our subscription:

      • The subscriptionId
      • The tenantId

Both values can be discovered with PowerShell: perform a Login-AzureRmAccount and copy them from the output:
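For example (depending on the AzureRM module version, the property names may be SubscriptionId/TenantId or Id/TenantId):

[PS] C:\> Login-AzureRmAccount
[PS] C:\> Get-AzureRmSubscription | Select-Object SubscriptionName, SubscriptionId, TenantId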

The code is not production quality (it needs some optimization), but it works for demo purposes:
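The full script is shown as an image in the original post. As a rough idea of the shape such a Runbook takes, here is a minimal sketch; the credential asset name, resource group naming, template location, and parameter handling are all assumptions:

# Minimal sketch of a webhook-triggered Runbook (illustrative names and locations)
param (
    [object] $WebhookData
)

# The webhook delivers the POSTed JSON (containing TenantName) in the request body
$body = ConvertFrom-Json -InputObject $WebhookData.RequestBody
$tenantName = $body.TenantName

# Log in with the dedicated automation credential asset created earlier
$cred = Get-AutomationPSCredential -Name "TenantDeployer"
Login-AzureRmAccount -Credential $cred -SubscriptionId "<subscriptionId>" -TenantId "<tenantId>"

# Deploy the tenant from the template and artifacts already uploaded to the storage account
New-AzureRmResourceGroup -Name "Tenant-$tenantName" -Location "West Europe" -Force
New-AzureRmResourceGroupDeployment -ResourceGroupName "Tenant-$tenantName" `
    -TemplateUri "https://<storageaccount>.blob.core.windows.net/tenant-web/azuredeploy.json" `
    -Verbose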



The script is executable in the Test pane, but for production purposes, it needs to be deployed with the Publish button.

Now, we need to execute this Runbook from the ASP.NET MVC portal that we have already created. We can use WebHooks for this purpose. WebHooks are user-defined HTTP callbacks that are usually triggered by some event; in our case, new tenant registration. Since they use HTTP, they can be integrated into web services without adding new infrastructure.

Runbooks can be directly exposed as a WebHook that provides HTTP endpoints natively without the need to provide one ourselves.

Here are a few things to remember at this stage:

      • WebHooks are public, with a shared secret in the URL, so they are “secure” only if we don’t share it
      • The shared secret expires, so we need to handle WebHook renewal in the service lifecycle
      • Because the URL is the only way to recognize who invoked it, if more users are needed, more WebHooks are needed (again, don’t share WebHooks)
      • Copy the URL at this stage, as it cannot be recovered later; otherwise you will need to delete the WebHook and generate a new one
      • Store it directly in the portal’s web.config app settings


We could set some default parameters if needed, and then we can create it.

To invoke the WebHook, we will use System.Net.Http.HttpClient to create a POST request, putting in the body a JSON object containing the TenantName:
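The C# snippet is shown as an image in the original post. As a quick way to test the same call outside the application, an equivalent request can be made from PowerShell (this is a test sketch against the webhook URL copied earlier, not the portal’s production code):

[PS] C:\> $Body = @{ TenantName = "Sample" } | ConvertTo-Json
[PS] C:\> Invoke-RestMethod -Method Post -Uri "<webhook URL>" -Body $Body -ContentType "application/json"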



This code is used to customize the registration process in the AccountController:



The response message is again a JSON object, containing a JobId that we can use to programmatically access the executed job. We can check the output of the execution from the portal:

Conclusion

Azure can change the way we write our solutions by giving us a set of new patterns and powerful services to use for development. In particular, we have learned how to:

      • Create multi-tenant apps to ensure confidentiality for users
      • Deploy ASP.NET Web Apps in App Services
      • Provision computing resources with App Service Plans
      • Deploy SQL databases in Azure SQL Database
      • Provision database resources with an Elastic Pool
      • Declare a deployment script with Azure Resource Manager and Azure Resource Template with Visual Studio cloud deployment projects
      • Automate ARM PowerShell script execution with Azure Automation and Runbooks

There are a lot of things that we can do with what we have learned:

      • Write better .NET code for multi-tenant apps
      • Authenticate users with Azure Active Directory service
      • Leverage deployment tasks with  Azure Service Bus messaging
      • Create more interaction and feedback during tenant deployment
      • Learn how to customize ARM templates to deploy other Azure storage services like DocumentDb, Azure Storage, and Azure Search
      • Handle more PowerShell for Azure Management tasks

Code can be found on GitHub: http://bit.ly/2naol9a

For a full kick start on Microsoft Azure, why not start with our Introduction to Microsoft Azure course?

Marco Parenzan is a Research Lead for Microsoft Azure at Cloud Academy. He has been named a Microsoft MVP for Microsoft Azure three times. He speaks at major community events in Italy about Azure and .NET development, and he is a community lead for 1nn0va, an official Microsoft community in Pordenone, Italy. He wrote a book on Azure in 2016. He loves IoT and retrogaming.


Microsoft’s Flawed Plan to Auto-Generate Office 365 Groups for Managers

The content below is taken from the original (Microsoft’s Flawed Plan to Auto-Generate Office 365 Groups for Managers), to continue reading please visit the site. Remember to respect the Author & Copyright.

Auto-creation of Groups

Coming Soon: Office 365 Groups for Every Manager

Microsoft’s March 16 announcement that it will auto-generate Office 365 Groups for managers came as a surprise to many, mostly because we all missed the roadmap item describing Microsoft’s intention. Perhaps it is unsurprising that we missed the news, because the announcement did not appear until very recently. RSS feeds like the Office 365 Roadmap Watch did not pick it up. Even the redoubtable Christophe Fiessinger, on point for Microsoft at the recent Ignite Australia event (February 2017), did not mention auto-generated groups in the “What’s Next” part of his “Get the Latest on Office 365 Groups” session.

The notification in the Office 365 Message Center (MC96611) says that these groups will help managers collaborate more effectively with their employees. There’s no sign that managers have asked for better collaboration, but that is another story. Cynics will say that this is yet another example where Microsoft is banging the drum to convince customers to make more use of Office 365 Groups. I like Office 365 Groups, but I am unsure of the wisdom behind this move.

What’s Happening

Beginning on April 13, 2017, Microsoft will automatically generate a private Office 365 Group for every manager who has between 2 and 20 direct reports. You have until then to make sure that your tenant is ready to create these groups or to take steps to prevent automatic creation happening.

Before Office 365 creates a group, the following conditions must be true:

  • The manager must be able to create new Office 365 Groups.
  • An Office 365 group for the manager’s direct reports must not already exist (Microsoft has an algorithm “to try and identify existing direct reports groups”). If an email distribution group exists for the direct reports, it is ignored.
  • The tenant does not disable automatic group creation.

According to the admin guidelines, the automatically-created groups are named “<Manager Name>’s direct reports”. For example, “Tony Redmond’s direct reports”. Of course, if you use a “Last Name, First Name” convention for accounts, you will end up with “Redmond, Tony’s direct reports”, which does not look quite so good. The format for group naming differs from language to language. Managers can rename the group after creation because they are the group owner. Their direct reports are the group members, and they are auto-subscribed to the group to make it behave like an email distribution group.

On the upside, this seems like a sensible way to provide managers with a convenient platform for their employees to share information, including using Microsoft Teams. However, although the idea is fine in concept, some problems exist that make me believe that many tenants will opt-out of this feature.

The Directory is The Problem

Microsoft makes a big assumption that the reporting relationships recorded in Azure Active Directory are correct and can be depended upon to create the automatic groups. Relatively few organizations pay enough attention to the “ManagedBy” property of user accounts to be sure that this data is dependable enough to generate an org chart.

To prove this point, I asked Cogmotive, an ISV that manages millions of Office 365 mailboxes to generate reports for tenants, whether they knew how many of the mailboxes in their dataset had a blank ManagedBy property. Some number crunching revealed the answer: only 45% of all mailboxes have a defined reporting relationship. Cogmotive’s data is only one sample of the Office 365 user base, but it does not seem like a solid foundation for automatic group generation.

The problem is that Azure Active Directory is not usually the authoritative source for personnel information. My experience is that most large enterprises depend on HR systems to track managers and employees along with other information that a company never exposes to Office 365, like salary levels.

Inside large organizations, it is widespread practice to have account provisioning processes that take employee data and feed it to applications to ensure that users have access as needed. For instance, a feed to Azure Active Directory might create an Office 365 account and assign a license to a user. That feed could populate properties like ManagedBy along with the user’s phone number, address, and other HR data that you want to see in the GAL.

However, I know of few enterprises that depend on Azure Active Directory to track who works for whom, especially when heterogeneous IT systems are in use. Office 365 tenants often treat Azure Active Directory as a source for user authentication and email address validation.

The Missing Links

PowerShell makes it easy to check how many mailboxes are missing the ManagedBy property in your tenant. Running this snippet might generate some surprise when you discover quite how many mailboxes do not have this information (like the one shown in Figure 1):

[PS] C:\> $Mbx = (Get-Mailbox -RecipientTypeDetails UserMailbox | ? {$_.ManagedBy -eq $Null} )
[PS] C:\> $Mbx.Count

Figure 1: An Azure AD account has a missing reporting relationship (image credit: Tony Redmond)

Many reasons exist why the ManagedBy property is null or inaccurate for a mailbox, including:

  • The company does not populate the property as part of its normal account creation and ongoing management processes.
  • The property existed at one time, but the manager has left the company and the link to the manager’s account disappeared when an administrator removed the manager’s account from Azure Active Directory
  • The user moved jobs, but no one updated ManagedBy to point to the new manager.
  • The manager moved jobs, but no one updated their employees to reflect this fact.
  • No synchronization exists between Azure Active Directory and the company’s HR system.

For whatever reason, if inaccurate data exists in Azure Active Directory, you cannot depend on the membership calculated for the automatically created groups. The net result is that you could end up with a mass of inaccurate and potentially misleading groups cluttering up the tenant. At worst, you could have a situation where the use of these groups leads to sensitive documents ending up with the wrong people.

When You Populate Azure Active Directory

Office 365 places a heavy emphasis on a fully-populated directory. If you populate Azure Active Directory as completely as you can, including reporting relationships, Office 365 can expose that information to users in multiple ways, including people cards, Delve profiles, the Outlook GAL, and Teams (Figure 2). There is real goodness in making this kind of organizational insight available to users, but it does take effort.

Office 365 Teams in Directory

Figure 2: How Teams exposes directory information stored in Azure Active Directory for a user (image credit: Tony Redmond)

Ongoing Care and Feeding Needed

Populating the directory is one matter. Keeping all those reporting relationships and other details updated is quite another. You might have an up-to-date directory now, but will the same state be true in a year’s time? And what happens to the membership of groups as managers change positions, new employees join, or people leave the organization?

Other questions are likely to occur as Microsoft rolls out these groups. For example, what happens if you rename an automatically-created group – will Office 365 then go ahead and create another group for the manager? (Apparently not.) Microsoft has clarified that if Office 365 creates an automatic group and the owner removes it, the group will not come back from the dead. In other words, no group zombies here!

Or what happens if a distribution group with the same name exists (cue GAL confusion)? How quickly does Office 365 create a group when someone gains two direct reports?

Microsoft says that the only automatic processing is when Office 365 creates the groups. After that, it is up to managers (the group owners) to keep group membership updated. This is a sensible approach. Any attempt to automatically update group membership based on changing reporting relationships recorded in Azure Active Directory would probably become a nightmare. In addition, managers often want to add extra people to their distribution lists, including those who are dotted-line reports. Indeed, some direct reports might not even have Office 365 accounts.

Although they manage group memberships through Azure Active Directory queries, dynamic Office 365 Groups are not an answer to the maintenance issue. Apart from being dependent on directory data, these groups come with a cost: Microsoft’s licensing requirements mean that any user who comes within the scope of a query used for a dynamic group must have a premium license for Azure Active Directory.

Stopping Automatic Group Creation

Because most tenants cannot depend on the reporting information held in Azure Active Directory, I think that many tenant administrators will take a hard look at the prospect of many new groups appearing in their GAL and decide that automatic creation is a step too far. Which leads us to the question of how to stop automatic group generation.

You can control the automatic generation of these groups through the DirectReportsGroupAutoCreationEnabled setting in the tenant configuration for Exchange Online. By default, the setting is True, meaning that automatic group generation can proceed. To block creation, change the setting to False.

[PS] C:\> Set-OrganizationConfig -DirectReportsGroupAutoCreationEnabled $False

Microsoft is updating tenant configurations to prepare for this change. You can check your tenant configuration with the Get-OrganizationConfig cmdlet. If the DirectReportsGroupAutoCreationEnabled setting is present, you can stop automatic groups being created on April 13 by running the command shown above.
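For example, a quick check of the current value:

[PS] C:\> Get-OrganizationConfig | Format-List DirectReportsGroupAutoCreationEnabled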

You can also implement an AAD policy for Groups and restrict the ability to create groups to a specific set of users. If managers are not included in the allowed list, Office 365 will not create the automatic groups. The downside of this approach is that it stops managers from creating new groups, teams, and plans that you might, in fact, want to be created.

Finally, you can decide that Azure Active Directory is just a source of authentication and email addresses and do not populate the ManagedBy property for mailboxes. If this data does not exist, Office 365 will not auto-generate groups.

Cleaning Up Unwanted Groups

Once you clamp down on automatic group creation, you might want to clean up the groups that were created. Fortunately, there is an easy check because these groups are stamped with a specific property. We can find the groups as follows:

[PS] C:\> Get-UnifiedGroup | ? {$_.GroupPersonification -eq "Groupsona:AutoDirectReports" } | Format-Table DisplayName, ManagedBy

To remove the offending groups, you can change the command to add the Remove-UnifiedGroup cmdlet:

[PS] C:\> Get-UnifiedGroup | ? {$_.GroupPersonification -eq "Groupsona:AutoDirectReports" } | Remove-UnifiedGroup -Confirm:$False

Another way to deal with the problem is to leave the auto-created groups in place and then use techniques like those described in this article to remove the groups that no one uses.


The Good and Bad in Auto-Generated Office 365 Groups

Some goodness exists in how Microsoft wants to generate Office 365 Groups for managers, especially for small tenants that might not even realize that they can use groups as the basis for collaboration. Any organization that uses Azure Active Directory as an authoritative directory for employee-manager reporting relationships will also find value in the feature. And if you think automatic group generation is sensible, you now have fair warning to review and update Azure Active Directory to prepare for Office 365 to create the groups.

However, I can imagine problems for enterprise tenants where HR systems are the source for email distribution groups used for organizational communication. With this plan in place, you could end up with two sets of groups. One (from HR) uses correct membership information because HR knows who works for whom. The other (generated by Microsoft) is based on an aspiration that administrators populate Azure Active Directory with links between managers and employees. In my heart, I know full and accurate population of reporting relationships is an unlikely scenario for many companies.

The folks who sell Azure Active Directory monitoring and reporting tools like Hyperfish (who offer a free analysis of your directory) and Cogmotive will welcome this new feature with open arms. I am not sure that those charged with the administration of large tenants will be quite so happy. This should have been an opt-in feature instead of something forced by Microsoft on tenants.

So much for Microsoft’s statement in the Office 365 Trust Center that “you are the owner of your customer data.” If this were really the case, don’t you think Microsoft would ask before they use tenant data to create objects in tenant directories that tenants have not asked for?

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros”, the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle

The post Microsoft’s Flawed Plan to Auto-Generate Office 365 Groups for Managers appeared first on Petri.

Devon police will establish the UK’s first 24/7 drone squad

The content below is taken from the original (Devon police will establish the UK’s first 24/7 drone squad), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Devon and Cornwall Police force is to become the first in the UK with a permanent, 24-hour drone assistance unit. The flying fuzz will be on hand to search for missing persons, seek out suspects and generally provide an eye in the sky whenever needed, gathering intel at crime scenes and responding to road accidents. The dedicated unit, which will also help out police in neighbouring Dorset, is set to launch this summer after a new "drone manager" is hired to oversee the nine sites the coppercopters will operate out of.

Drones are being trialled by police forces across the UK, and London rozzers have previously said they could make high-speed pursuits safer, particularly where motorbikes are involved. Devon and Cornwall Police is the first constabulary to commit to a permanent unit after testing DJI Inspire 1 drones in the field for the past two years.

Via: Gizmodo, The Mirror

Source: Devon and Cornwall Police

Neverware’s Chrome OS for old computers now includes Office 365

The content below is taken from the original (Neverware’s Chrome OS for old computers now includes Office 365), to continue reading please visit the site. Remember to respect the Author & Copyright.

Neverware has made a name for itself with its CloudReady software, which essentially transforms any old PC or Mac into a Chromebook. But while that’s a nice way to breathe new life into aging computers, it’s naturally reliant on Google’s online services. Now, the company is offering a new version of CloudReady for schools that integrates Microsoft’s Office 365 online suite instead. It might seem blasphemous, but it could be useful for schools and other organizations that are already deeply integrated with Microsoft’s software.

While it’s still basically just Chrome OS, the new version of CloudReady will sport integration with OneDrive instead of Google Drive. And similarly, it’ll point you to the online versions of Word, Excel, PowerPoint and other Microsoft software. There’s nothing stopping you from using the online Office 365 apps with the original version of CloudReady, but the deeper integration could make it a bit easier to use for students, teachers and administrators.

Another plus? Neverware’s Office 365 version of CloudReady will cost just $1 per student every year (or $15 per device annually). That’ll make it very useful for cash-strapped school districts. Neverware worked together with Microsoft to develop the new version of its OS, which should allay IT department fears about relying on a young software company.

Source: Neverware

The world’s leading privacy pros talk GDPR with El Reg

The content below is taken from the original (The world’s leading privacy pros talk GDPR with El Reg), to continue reading please visit the site. Remember to respect the Author & Copyright.

Interview You know, we know, everyone knows… the EU’s General Data Protection Regulation goes into effect May of next year for every member of the European Union, and that will include the United Kingdom.

Of course, the UK will eventually leave the EU and what happens then will be a very interesting question according to Trevor Hughes, the president and CEO of the International Association of Privacy Professionals, and Omer Tene, IAPP’s veep of research.

Speaking to The Register at IAPP’s Europe Data Protection Intensive 2017 in London, Hughes acknowledged that “the expectation from all involved, and what has been said by the Information Commissioner at this point, is that after [Brexit] the UK is going to need a GDPR mirror bill.”

As a regulation [PDF] rather than a directive, come May 2018, GDPR will become the law of the land in the UK, but once Blighty departs from the EU’s jurisdiction, we will need “a piece of legislation that mirrors GDPR carefully, so as to leverage the fact that GDPR was already put in place, and also allows for the greatest amount of harmonisation between European data trading partners and the UK.”

Making sure the UK meets the EU’s adequacy agreements is common sense, said Hughes, as the market will have already had to adapt to GDPR and will have made investments in doing so. “It would be jarring to then try to install another policy framework after that.”

The dominating feature of coverage of GDPR has been its provisions for sanctions – allowing the data police to issue fines of up to 4 per cent of global turnover. Tene said: “Just having those sanctions and the toolbox is a game changer. Actually, they could just keep the data protection directive, add the sanctions, and it would have significant impact.”

Yet, he noted, “GDPR is a very detailed document and it adds a lot of language besides the sanctions. The ones that are going to be very interesting to follow are the new rights, the right to be forgotten, the right to erasure, and data portability. The research we’ve done at the IAPP – and are actually releasing a report today about UK implementation of Brexit – but you can also see there the challenges are presenting as the top two are not consent or data protection impact assessments, because that has already been part of the framework under the directive, it’s those two new rights. I’m intrigued to see how they play out, and certainly the big sanctions will be front and centre.”

That recent survey by the IAPP on how privacy professionals in the UK were preparing for GDPR considering Brexit [PDF] found almost half (47 per cent) were investing in new technology to help them manage the data they were processing. The biggest compliance issues for UK privacy professionals were GDPR’s compliance requirements on the right to be forgotten, data portability, understanding research allowances and gathering explicit consent.

The survey concluded that British “privacy professionals are clearly betting that GDPR compliance will meet almost any new standard the UK may adopt post-Brexit”, and Hughes explained why those standards were so important.

Data Data Revolución!

“We are at the nascent moment of the digital economy, the digital revolution,” said Hughes.

Privacy and data protection are the largest societal issues that we have found. We can ill afford to have an experience like the industrial revolution, where environmental concerns were not identified, not addressed, and really not even handled from a legislative public policy and operational perspective until about 150 years after the industrial revolution had begun.

The environmental movement in the middle of the twentieth century … we can not afford the digital revolution, the information revolution, to wait that long, and so investment needs to occur now, we need to pay attention to these issues now and that’s not because we want to try and build a compliance industry, rather it’s because we want to extract the massive amount of value that the information economy presents to us.

We want to gain the most that we possibly can from that economy and in order to extract that value doing it in a way that is data protection and privacy sensitive, that actually allows that market to move faster, to move in a way that’s safer and more scalable.

A lack of foresight has threatened much interruption to the industry. For many years and despite much criticism, the European Commission stood by its claim that the US legal principles complied with those of its own Data Protection Directive, even doing so after a US National Security Agency (NSA) whistleblower provided documentary evidence to the contrary.

Thus, when rogue US sysadmin Edward Snowden made the activities of the NSA’s PRISM programme (Planning tool for Resource Integration, Synchronization, and Management) known, it actually fell to Austrian lawyer Max Schrems to make a legal complaint about Facebook facilitating these extralegal abuses (at least under the EU’s definitions of legality).

The European Court of Justice ultimately conceded that Safe Harbor was indeed invalid, and suddenly there was no legal basis for American megacorps to continue quaffing Europeans’ data. Not that those companies cared, or agreed even. Facebook, Microsoft, and Salesforce have continued to shuttle Zuckabytes back home through “model clauses” contracts, a measure which is again being challenged by Schrems.

Even if this workaround is shot down during the ongoing court case in Dublin, however, the EU and US share much in terms of cultural values regarding privacy, suggested Hughes.

Privacy is a cultural value and will necessarily differ between jurisdictions, said Hughes. “That conflict of laws … has existed for as long as laws have existed,” he added. “What’s challenging now is that the global information economy ignores those jurisdictions and there really is very little recognition of national boundaries as data flies around the world so incredibly quickly.”

“This significant friction that we see between Europe and the United States … with regards to data transfers, that is indicative of a dynamic that will exist in many jurisdictions between many jurisdictions, in many ways, forever,” Hughes said.

I think that we see some significant concerns in the years ahead with regards to European adequacy in data transfers to the United States. The case currently going through in Dublin, certainly portends trouble ahead and the first Schrems case that went through the Safe Harbor case, if that’s any indication I think that we will continue to see challenges to those data transfer mechanisms.

We will continue to see criticism of US data practices, particularly around intelligence community gathering of data in the private and public sectors, we’ll continue to see those things. At the same time however, the massive value and utility of those data flows between Europe and the United States at some point needs to become part of that policy consideration. At some point those jurisdictions are going to step back and say, “We’re part of the information economy now, and the data transfers between Europe and the United States are so incredibly important we simply cannot abide by not allowing these data transfers to occur.”

Safe Harbor and Privacy Shield were addressed by the private sector, suggested Omer Tene, IAPP’s veep of research, and were not intended to address the government surveillance issue. He noted the “biggest economies in the world, the US and China, don’t have an adequacy ruling, and yet this isn’t going to stop – and I don’t think even significantly impact – data flows to these jurisdictions.”

“There needs to be a clear and transparent assessment of how we want intelligence agencies to act with regards to data, data of citizens of that country, data of non-citizens of that country, data within that country, data without country,” said Hughes.

“Intelligence agencies in the US and Europe need to be transparently assessed on how their data practices are occurring,” he continued. “The battleground for that has been the media with Snowden, and also things like the Schrems cases. I’m not sure if a single court assessing a challenge to model contract provisions is the right place for us to have that full policy argument.” ®

The future of networking: It’s in a white box

The content below is taken from the original (The future of networking: It’s in a white box), to continue reading please visit the site. Remember to respect the Author & Copyright.

Whether it’s food, beverages or automobiles, I want options and don’t want to be told what to do. I feel the same way about networking equipment. 

I’ve resented the fact that select vendors have had too much control in dictating choices over the years. I don’t think users should be told what, when and how they should buy, deploy and upgrade their network equipment. Luckily, those days are numbered thanks in part to the good work of the Open Compute Project, whose mission is to design and enable the delivery of the most efficient server, storage and data center hardware designs for scalable computing. 

The future of networking is fast approaching—and it’s in a white box. White box switching isn’t new, but until recently, it came with a heavy cost: You had to have an army (or at least a platoon) of techies, especially those who love Linux, to administer and manage the gear. That’s why much of the first wave of backers came from the ranks of Amazon, Facebook and Google. Simply put, they have deep Linux bench strength and proven experience. 

A couple of years ago, the common thinking was that while you could save money by opting for white box’s generic hardware over OEMs’, the savings were blown on operational costs. This is not so true anymore as hands-on Linux expertise becomes more prevalent. Equally encouraging is seeing big names such as Dell and HP entering the mix. Their boxes even come with ready-to-run network operating systems from companies such as Cumulus Networks and Pica8, removing a huge concern.

White boxes can hold their own

As for what’s inside, the guts of most white boxes come from the same places Cisco, et al, get their components. What makes the OEM boxes cost more is their software and the fancy logo on the box. White boxes have shown they can hold their own in software-defined networking (SDN) deployments and in supporting industry standards.

OEMs defend their shaky ground by saying white boxes lack functionality, but they need to face the facts: Many of the newest and fanciest bells and whistles do more to justify inflated price tags than drive major performance improvements. Typically, all the latest and greatest features aren’t used that much. So, perhaps the biggest sacrifice is understanding your white box is more a Prius than a Tesla. 

Technical support for white boxes also is improving every day. The players are big pushers of open computing standards, so users are given choices and control over their hardware, as well as the software that will run on it. White boxes promise high levels of freedom for networking. 

Topping my list of other things I like about white boxes is how adoption helps avoid vendor lock-in and OEM shipping delays. Ever been hung out long after your expected delivery date for your new Cisco switch? White boxes are plentiful and come from a growing variety of sources, thus reducing potential delivery delays. This also makes a difference when you suddenly need another box or want to take advantage of white boxes as an excellent self-sparing option. With OCP-compliant white boxes, you’re no longer at the mercy of OEMs’ sometimes-slow or incomplete fixes to software bugs. 

There’s more good news. During the OCP’s recent U.S. Summit, the organization debuted an online directory of open source-design products. The aim: to help small users find products the monoliths already know about and use in order to speed overall market acceptance. 

To start your own white box initiative, you don’t have to go all-in, like back in the day with your Cisco network. Instead, it’s advisable to play the field a little bit to determine how white box deployments can be phased in to optimize early successes. Remember, you have choices and control. Don’t make a huge commitment until both are in your favor.

And even if you are not ready this instant, don’t let your OEM know. Instead, whisper (or perhaps shout) “white box” in your next conversation. Perhaps you’ll be rewarded by getting a better discount on your next branded box.

Are You Being Held Hostage by Your Legacy Email Archives? The Top Three Threats and How to Escape Them.

The content below is taken from the original (Are You Being Held Hostage by Your Legacy Email Archives? The Top Three Threats and How to Escape Them.), to continue reading please visit the site. Remember to respect the Author & Copyright.




Interview 

Today, countless IT organizations are being faced with challenges regarding the management of legacy email archives.  Of course, some of these archives simply grew like weeds over time, while some unfortunate IT professionals inherited the legacy when they joined their current employer.  However they arrived in their current situation, many are now feeling trapped and looking for escape – knowing that they not only face management and cost issues, but other threats as well.  We are exploring this topic today with Archive360‘s Bill Tolson.

VMblog:  To begin, how do you/your customers define a legacy email archive?

Bill Tolson:  By legacy, Archive360 and our customers mean email archives that were designed and deployed in the early 2000s – most of the time they are hosted on-site, but sometimes we also find cases of hosted email archive solutions, the earliest examples of “cloud.”  Unfortunately, as many have learned the hard way, these legacy email archives present very real threats.

VMblog:  Which threat do you feel is most prevalent?

Tolson:  Topping my list of threats is: support.  It is a fact of business that vendors and their IP, or products, are bought and sold.  Most legacy email archive products have passed through several owners.  Each owner may have kept the product “as is” or added a bit of their own secret sauce to the recipe.  Of course, there can be benefits to new ownership, but there can also be many disadvantages. 

To start, what was the reason for the ownership change?  Was the acquirer looking specifically for an archive product, or was the archive product simply an element of a larger purchase?  Was the purchase in an effort to gain market share, and/or eliminate a competitor?  As is often the case, the new owner may have very little (if any) interest in the archive product and therefore will not/does not make any investment in the product’s continued development. 

With the archive product under new ownership – assuming the product isn’t end-of-lifed (EOL), meaning all support will shortly be eliminated – the next question is: How good will the support be from the new owner?  If you are responsible for your organization’s archiving, you know that the first change likely to happen is a change in support structure, accompanied by an increase in support fees.

Bottom line: today almost all legacy email archive systems either have zero support available and have not seen a bug fix in years (and aren’t likely to work with much of today’s technology), or the support that remains is increasingly poor and increasingly expensive.

VMblog:  What do you view as the second most common threat?

Tolson:  The second issue we commonly come across is that of: security.  Legacy email archives were designed and deployed on legacy hardware and software.  Two very common examples are Microsoft Windows Server 2003 and Windows SQL Server.  At one time, both products were the mainstay of virtually every email archiving solution.  Legacy email archives that are still running on these EOL products present very real security risks to your business.  Microsoft is aware and has proactively communicated to the community regarding these risks.  Here are links to a few examples of those communications:

In reality, finding archives still running on legacy hardware/software is extremely common.  In fact, Archive360 just completed a legacy email archive migration for a customer whose archive was running on SQL Server 2000.  SQL Server 2000 has been end-of-life since April 2013.

VMblog:  And, last but not least – what do you view as the third most common threat?

Tolson:  These are litigious times, so the third threat on our list is: legal.  In today’s business climate, lawsuits – frivolous or otherwise – have become routine.  In particular, lawsuits from former employees are on the rise.  The number one source of evidence for such lawsuits is email.  Legacy email archives can potentially contain years of old email.

VMblog:  So, what advice would you give?

Tolson:  From a legal and regulations compliance standpoint, my advice to clients is to take the time to perform a search and take inventory of your legacy email archive.  How many years of email does it contain?  How is it being protected?  How accessible is it?  Then approach executive management, your General Counsel and/or Regulations Officer, inform them, and ask what the company’s policies and responsibilities are for email archive management.  Email retention/management may be governed by an industry regulation, and it is the responsibility of your General Counsel and/or Regulations Officer to decipher that and advise.  For instance, there may be specific guidelines around how long emails must be retained.  And, while regulations/laws must always be followed, email that is kept past its useful and/or necessary timeline could create an unnecessary legal risk for your organization.  Likewise, it can be prudent to keep other email as a historical record, to protect trademarks and/or IP, etc.  Again, working closely with corporate management, legal and your regulations teams is critical.  Once guidance is provided, enforce it immediately.  Many times, the existing technology is such that adherence to internal governance and/or external regulations is impossible.  In these cases, migrating to modern technology is smart from a management, cost, legal and regulations compliance standpoint.

VMblog:  Any last thoughts you care to share with readers?

Tolson:  If you are sitting on a legacy email archive and putting off the decision to make a change – I hope that this discussion has gotten your attention.  Updating your email archive can be a completely painless process if you partner with the right organization(s).  And, in doing so, you will improve management, security, availability, scalability and protection, and lower costs – while eliminating the aforementioned threats.  You shouldn’t feel as if you are being held hostage by your legacy email archive.  Thousands of organizations have successfully moved to new email archive platforms, so there really is a light at the end of the tunnel.

##

About Bill Tolson, Vice President of Marketing, Archive360

Bill Tolson has more than 25 years of experience with multinational corporations and technology start-ups, including 15-plus years in the archiving, information governance, regulations compliance and legal eDiscovery markets. Prior to joining Archive360, Bill held leadership positions at Actiance, Recommind, Hewlett Packard, Iron Mountain, Mimosa Systems, and StorageTek.  Bill is a much sought-after and frequent speaker at legal, regulatory compliance and information governance industry events and has authored numerous articles and blogs. Bill is the author of two eBooks: “The Know IT All’s Guide to eDiscovery” and “The Bartenders Guide to eDiscovery.” He is also the author of the book “Cloud Archiving for Dummies” and co-author of the book “Email Archiving for Dummies.” Bill holds a Bachelor of Science degree in Business Management from California State University Dominguez Hills.

Great collection of PDF scans of old electronic/audio technical books and manuals from 30s/40s/50s/60s. [x/post r/diysound]

The content below is taken from the original (Great collection of PDF scans of old electronic/audio technical books and manuals from 30s/40s/50s/60s. [x/post r/diysound]), to continue reading please visit the site. Remember to respect the Author & Copyright.

http://bit.ly/2mH9Jul

ccenhancer (4.4.2)

The content below is taken from the original (ccenhancer (4.4.2)), to continue reading please visit the site. Remember to respect the Author & Copyright.

A small tool which adds support for over 1000 new programs into the popular program CCleaner.

Micro Wind Turbine For Hikers

The content below is taken from the original (Micro Wind Turbine For Hikers), to continue reading please visit the site. Remember to respect the Author & Copyright.

[Nils Ferber] is a product designer from Germany. His portfolio includes everything from kitchen appliances to backpacks. One project, though, has generated a bit of attention. It’s a micro wind turbine aimed at long distance hikers.

Even on the trail, electronics have become a necessity, from GPS units to satellite phones to ebook readers. Carrying extra batteries means more pack weight, so many hikers utilize solar panels. The problem is that when the sun is up, hikers are on the move – not very conducive to deploying a solar array. The wind, however, blows all through the night.

[Nils] used carbon fiber tube, ripstop nylon, and techniques more often found in kite building to create his device. The turbine starts as a small cylindrical pack. Deploying it takes only a few minutes of opening panels and rigging guy wires. Once deployed, the turbine is ready to go.

While this is just a prototype, [Nils] claims it generates 5 Watts at a wind speed of 18 km/h, which can be used to charge internal batteries, or sent directly to any USB device. That seems a bit low for such a stiff wind, but again, this is just a prototype. Could you do better? Tell us in the comments! If you’re looking for a DIY wind generator on a slightly larger scale, you could just build one from bike parts.
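
A quick back-of-the-envelope check suggests the figure is actually plausible for a pack-sized rotor. The Python sketch below assumes a roughly 0.5 m rotor and about 30 percent combined rotor/generator efficiency (both guesses, since [Nils] hasn’t published those specs) and lands close to the claimed 5 W:

    import math

    # Rough plausibility check of the claimed 5 W at 18 km/h.
    # Rotor diameter and efficiency are assumptions, not published specs.
    air_density = 1.225                  # kg/m^3, sea-level air
    wind_speed = 18 / 3.6                # 18 km/h expressed in m/s (= 5 m/s)
    rotor_diameter = 0.5                 # m, assumed
    swept_area = math.pi * (rotor_diameter / 2) ** 2                   # ~0.20 m^2

    power_in_wind = 0.5 * air_density * swept_area * wind_speed ** 3   # ~15 W
    overall_efficiency = 0.30            # assumed rotor + generator efficiency
    electrical_output = power_in_wind * overall_efficiency             # ~4.5 W

    print(f"Power available in the wind: {power_in_wind:.1f} W")
    print(f"Estimated electrical output: {electrical_output:.1f} W")

On those assumptions, 5 W at 18 km/h looks about right for something that packs down this small; squeezing out much more would require a noticeably larger swept area.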

OpenStack Developer Mailing List Digest March 11-17

The content below is taken from the original (OpenStack Developer Mailing List Digest March 11-17), to continue reading please visit the site. Remember to respect the Author & Copyright.

SuccessBot Says

  • Dims [1]: Nova now has a python35 based CI job in check queue running Tempest tests (everything running on py35)
  • jaypipes [2]: Finally got a good functional test created that stresses the Ironic and Nova integration and migration from Newton to Ocata.
  • Lbragstad [3]: the OpenStack-Ansible project has a test environment that automates rolling upgrade performance testing
  • annegentle [4]: Craig Sterrett and the App Dev Enablement WG: New links to more content for the appdev docs [5]
  • jlvillal [6]: Ironic team completed the multi-node grenade CI job
  • Tell us yours via OpenStack IRC channels with message “#success <message>”
  • All: [7]

Pike Release Management Communication

  • The release liaison is responsible for:
    • Coordinating with the release management team.
    • Validating your team’s release requests.
    • Ensuring release cycle deadlines are met.
    • It’s encouraged to nominate a release liaison; otherwise this task falls back to the PTL.
  • Ensure the release liaison has the time and ability to handle the necessary communication.
    • Failing to follow through on a needed process step may block you from meeting deadlines or releasing as our milestones are date-based, not feature-based.
  • Three primary communication tools:
    • Email for announcements and asynchronous communication
      • “[release]” topic tag on the openstack-dev mailing list.
      • This includes the weekly release countdown emails with details on focus, tasks, and upcoming dates.
    • IRC for time sensitive interactions
      • With more than 50 teams, the release team relies on your presence in the freenode #openstack-release channel.
    • Written documentation for relatively stable information
      • The release team has published the schedule for the Pike cycle [8]
      • You can add the schedule to your own calendar [9]
  • Things to do right now:
    • Update your release liaisons [10].
    • Make sure your IRC and email address listed in projects.yaml [11].
  • Update your mail filters to look for “[release]” in the subject line.
  • Full thread [12]

OpenStack Summit Boston Schedule Now Live!

  • Main conference schedule [13]
  • Register now [14]
  • Hotel discount rates for attendees [15]
  • Stackcity party [16]
  • Take the certified OpenStack Administrator exam [17]
  • City guide of restaurants and must see sites [18]
  • Full thread [19]

Some Information About the Forum at the Summit in Boston

  • “Forum” proper
    • 3 medium sized fishbowl rooms for cross-community discussions.
    • Selected and scheduled by a committee formed of TC and UC members, facilitated by the Foundation staff members.
    • Brainstorming for topics [20]
  • “On-boarding” rooms
    • Two rooms set up classroom-style for project teams and workgroups who want to on-board new team members.
    • Examples include providing an introduction to your codebase for prospective new contributors.
    • These should not be traditional “project intro” talks.
  • Free hacking/meetup spaces
    • Four to five rooms populated with roundtables for ad-hoc discussions and hacking.
  • Full thread [21]

 

The Future of the App Catalog

  • Created in early 2015 as a marketplace of pre-packaged applications [22] that you can deploy using Murano.
  • This has grown to 45 Glance images, 13 Heat templates and 6 Tosca templates, but otherwise it did not pick up a lot of steam.
  • ~30% are just thin wrappers around Docker containers.
  • Traffic stats show 100 visits per week, 75% of which only read the index page.
  • In parallel, Docker developed a pretty successful containerized application marketplace (Docker Hub) with hundreds or thousands of regularly updated apps.
    • Keeping the catalog around makes us look like we are unsuccessfully trying to compete with that ecosystem, while OpenStack is in fact complementary.
  • In the past, we have retired projects that were dead upstream.
    • The app catalog, however, has an active maintenance team.
    • If we retire the app catalog, it would not be a reflection on that team’s performance, but an acknowledgment that the beta was arguably not successful in building an active marketplace and is not a great fit from a strategy perspective.
  • Two approaches for users today to deploy docker apps in OpenStack:
    • Container-native approach: use “docker run” on a Nova instance or on a Kubernetes cluster provisioned with Magnum (see the sketch after this list).
    • OpenStack-native approach: “zun create nginx”.
  • Full thread [23][24]
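
To make the container-native option concrete, here is a minimal sketch using the Docker SDK for Python, run on a Nova instance or a Magnum-provisioned node; the image name and port mapping are only examples, and the OpenStack-native path would go through the zun CLI (or python-zunclient) instead:

    # Programmatic equivalent of "docker run", executed on a Nova instance
    # or a Magnum-provisioned node. Requires the Docker SDK for Python
    # (pip install docker) and a reachable Docker daemon.
    import docker

    client = docker.from_env()              # talk to the local Docker daemon
    container = client.containers.run(
        "nginx",                            # example image
        detach=True,                        # like "docker run -d"
        ports={"80/tcp": 8080},             # publish port 80 on host port 8080
    )
    print(container.id, container.status)

Either way the container ends up on OpenStack-provisioned capacity; the difference is whether OpenStack itself tracks the container (Zun) or only the VM or cluster hosting it.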

ZooKeeper vs etcd for Tooz/DLM

  • Devstack defaults to ZooKeeper and is opinionated about it.
  • Lots of container-related projects are using etcd [25], so do we need to avoid requiring both ZooKeeper and etcd?
  • For things like databases and message queues, it’s more than time for us to contract on one solution.
    • For DLMs, ZooKeeper gives us the mature/featureful angle; etcd covers the Kubernetes cooperation / non-Java angle.
  • OpenStack interacts with DLMs via the Tooz library (see the sketch after this list). Tooz today only supports etcd v2, but v3 support is planned, which would use gRPC.
  • The OpenStack gate will begin to default to etcd with Tooz.
  • Full thread [26]
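
For context, going through Tooz looks roughly like the sketch below; the backend is selected purely by the connection URL, so moving between ZooKeeper and etcd is a one-line change. This is a minimal sketch assuming a ZooKeeper server on localhost, with an arbitrary member ID and lock name:

    # Acquire a distributed lock via whatever DLM backend the URL points at.
    import time
    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'zookeeper://127.0.0.1:2181',   # or 'etcd://127.0.0.1:2379' for the etcd v2 driver
        b'worker-1')                    # this member's unique ID
    coordinator.start()

    lock = coordinator.get_lock(b'my-resource')
    with lock:
        # only one member across the whole cluster runs this at a time
        time.sleep(1)

    coordinator.stop()

Because the driver is picked from that URL, defaulting the gate to etcd is largely a configuration change for services that already go through Tooz.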

Small Steps for Go

  • An etherpad [27] has been started to begin tackling the new language requirements [28] for Go.
  • A golang-commons repository exists [29].
  • Gophercloud versus having a golang-client project is being discussed in the etherpad. Regardless, we need support for os-client-config.
  • Full thread [30]

POST /api-wg/news

  • Guidelines under review:
    • Add API capabilities discovery guideline [31]
    • Refactor and re-validate API change guidelines [32]
    • Microversions: add next_min_version field in version body [33]
    • WIP: microversion architecture archival doc [34]
  • Full thread [35]

Proposal to Rename Castellan to oslo.keymanager

  • Castellan is a Python abstraction over different key manager solutions such as Barbican. Implementations like Vault could be supported, but currently are not.
  • The rename would emphasize that Castellan is an abstraction layer.
    • Similar to oslo.db supporting MySQL and PostgreSQL.
  • Instead of renaming to oslo.keymanager, it could be rolled into the oslo umbrella without a rename; Tooz sets the precedent for this.
  • Full thread [36]

Release Countdown for week R-23 and R-22

  • Focus:
    • Specification approval and implementation for priority features for this cycle.
  • Actions:
    • Teams should research how they can meet the Pike release goals [37][38].
    • Teams that want to change their release model should do so before end of Pike-1 [39].
  • Upcoming Deadlines and Dates
    • Boston Forum topic formal submission period: March 20 – April 2
    • Pike-1 milestone: April 13 (R-20 week)
    • Forum at OpenStack Summit in Boston: May 8-11
  • Full thread [40]

Deployment Working Group

  • Mission: To collaborate on best practices for deploying and configuring OpenStack in production environments.
  • Examples:
    • OpenStack-Ansible and Puppet OpenStack have been collaborating on Continuous Integration scenarios as well as on Nova upgrade orchestration.
    • TripleO and Kolla share the same tool for container builds.
    • TripleO and Fuel share the same Puppet OpenStack modules.
    • OpenStack and Kubernetes are interested in collaborating on configuration management.
    • Most of these tools want to collect OpenStack parameters for configuration management in a common fashion.
  • Wiki [41] has been started to document how the group will work together. Also an etherpad [42] for brainstorming.

 

Smackdown: Office 365 vs. G Suite management

The content below is taken from the original (Smackdown: Office 365 vs. G Suite management), to continue reading please visit the site. Remember to respect the Author & Copyright.

When you choose a productivity platform like Microsoft’s Office 365 or Google’s G Suite, the main focus is on the platform’s functionality: Does it do the job you need?

That’s of course critical, but once you choose a platform, you have to manage it. That’s why management capabilities should be part of your evaluation of a productivity and collaboration platform, not only its user-facing functionality.

You’ve come to the right place for that aspect of choosing between Office 365 and Google G Suite.

Admin console UI. Both the Office 365 and G Suite admin consoles are well designed, providing clean separation of management functions and clear settings labels, so you can quickly move to the settings you want and apply them.

AWS Offers Cloud Credits to Alexa Skill Developers

The content below is taken from the original (AWS Offers Cloud Credits to Alexa Skill Developers), to continue reading please visit the site. Remember to respect the Author & Copyright.

Brought to You by Talkin’ Cloud

Amazon Web Services is using its cloud dominance to encourage developers to build Alexa skills, launching a new program on Wednesday that offers cloud credits to developers with a published Alexa skill. Alexa Skills can be thought of as “virtual apps” that help users extend the power of the virtual assistant.

AWS said that many Alexa skill developers use its free tier, which offers a limited amount of Amazon EC2 compute power and AWS Lambda requests for no charge. But if developers go over these limits, they will incur cloud charges.
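
For a sense of scale, the Lambda side of a simple skill is tiny: a single handler that answers Alexa’s JSON requests. The sketch below is a hypothetical “hello” skill backend in Python; the wording is made up, but the request and response shapes follow the Alexa Skills Kit format:

    # Hypothetical minimal Alexa skill backend for AWS Lambda (Python).
    # Alexa sends a JSON event describing the request; the handler returns
    # a JSON response telling the device what to say.
    def lambda_handler(event, context):
        request_type = event["request"]["type"]

        if request_type == "LaunchRequest":
            speech = "Hello from my example skill."
        elif request_type == "IntentRequest":
            intent = event["request"]["intent"]["name"]
            speech = f"You invoked the {intent} intent."
        else:  # e.g. SessionEndedRequest
            speech = "Goodbye."

        return {
            "version": "1.0",
            "response": {
                "outputSpeech": {"type": "PlainText", "text": speech},
                "shouldEndSession": True,
            },
        }

Handlers like this typically stay well inside the Lambda free tier, so the new credits matter most for skills that lean on other AWS services behind the scenes.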

With its new offering, developers with a published Alexa skill can apply to receive a $100 AWS promotional credit as well as an additional $100 per month in credits if they incur AWS usage charges for their skill.

In November at its partner conference, AWS’ director of its worldwide partner ecosystem Terry Wise said that AWS had heard a “deep desire to integrate Alexa and voice capabilities” into services offered through the partner network, according to ZDNet. To that end, AWS announced the Alexa Service Delivery Program for Partners, which gives “companies access to tools, training, solution blueprints, and support for the assistant, which is best known for helping customers using Amazon’s Echo line of connected speakers,” according to PC World.

As of January, there were more than 7,000 custom Alexa skills built by third-party developers. There are all kinds of things Alexa lets you do – including those Alexa skills that enhance your productivity, locate your missing keys, and make a cup of coffee. [Check out more ways to use Alexa on our sister site, Supersite for Windows.]

“There is already a large community of incredibly engaged developers building skills for Alexa,” Steve Rabuchin, Vice President, Amazon Alexa said in a statement. “Today, we’re excited to announce a new program that will free up developers to create more robust and unique skills that can take advantage of AWS services. We can’t wait to see what developers create for Alexa.”

Alexa developers can apply to see if they qualify for the AWS cloud credits online. The first promotional credits will be sent out in April.

If an Alexa developer’s skill surpasses the free usage tier, they may be eligible to continue receiving promotional credits.

“Each month you have an AWS usage charge you’ll receive another $100 AWS promotional credit to be used toward your skill during the following month,” AWS says.

This article originally appeared on Talkin’ Cloud.

How to find out if Microsoft Services are down or not

The content below is taken from the original (How to find out if Microsoft Services are down or not), to continue reading please visit the site. Remember to respect the Author & Copyright.

Problems can come uninvited, and the Microsoft authentication issue is a case in point. It recently hit several key services of the company like Office 365, Outlook.com, OneDrive, Skype, Xbox Live, Microsoft Azure, etc. The services either stopped responding to users’ requests or simply directed users to a blank page, displaying an error message that their account does not exist.

After some delay, the services were restored. What was worth noticing about the outage was whether the problem was region-specific or widespread. For instance, Outlook, the email service, was down across Europe and the northeastern United States. Xbox Live services also appeared to have been affected in the same regions. Skype was hit in Japan and on the US east coast. This obviously brings our focus to one main question – is there any way to find out if Microsoft services are down or not? Certainly, there is!

You can check the operation status of each service to determine whether it is healthy, experiencing problems, or down.

Check operation status of Microsoft Services

Is Outlook.com, Skype, OneDrive or Xbox Live down? Is there an Azure or Office 365 outage? You can check the operation status of Microsoft services using the links below, which cover Azure, Office 365, Outlook.com, OneDrive, Skype, Xbox Live, and more.

1] Check Azure status

You can get the operation status of Azure by visiting its status page. It presents a region-wise report (Americas, Europe, and Asia Pacific) and displays the status using one of the following indicators:

  1. Good
  2. Warning
  3. Error
  4. Information.

For targeted notifications regarding the health of Azure resources and customization of notifications, users can visit the Azure portal.
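
If you would rather not keep a browser tab open, the Azure status page also exposes an RSS feed that is easy to poll from a script. Here is a minimal Python sketch; the feed URL is an assumption and may change, so treat it only as an example:

    # Poll the Azure status RSS feed and print any recent advisories.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://status.azure.com/en-us/status/feed/"   # assumed feed location

    with urllib.request.urlopen(FEED_URL, timeout=10) as response:
        feed = ET.fromstring(response.read())

    items = feed.findall("./channel/item")
    if not items:
        print("No active advisories on the Azure status feed.")
    for item in items[:5]:
        title = item.findtext("title", default="(no title)")
        published = item.findtext("pubDate", default="")
        print(f"{published}  {title}")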

2] Check Office 365, Skype, OneDrive status

Office 365 contains online and offline versions of Microsoft Office, Skype for Business (previously Lync) and OneDrive, as well as online versions of SharePoint, Exchange, and Project. You can check or verify the Office 365 service health status by visiting the Office 365 service health page.

There, you can check the Office 365 service health status by entering your Microsoft credentials. The page shows information for the last seven days by default.

3] Is Xbox Live down

Xbox Live is an online multiplayer gaming and digital media delivery platform, available on the Xbox 360 gaming console, Windows PCs and Windows Phone devices. If you are experiencing trouble booting up your digital games and logging into Xbox Live, you can check the online gaming infrastructure by visiting its status page. There you can check the status for services, websites, games and apps listed under Xbox Live Status.

4] Is Outlook.com down

Similarly, the Outlook.com status dashboard indicates whether the service is down or not for users – specifically, those using Windows Mail, Outlook Connector, the MSN Premium client, Windows Phone and the Windows Mail client. To check this, visit its portal’s Service Status page. Here, you can check the status of the services that are up and running.

Would you like to share your experiences with the recent Microsoft services outage, or give feedback? Please do so in the comments section below.

Montblanc’s first smartwatch is the luxury Summit

The content below is taken from the original (Montblanc’s first smartwatch is the luxury Summit), to continue reading please visit the site. Remember to respect the Author & Copyright.

Luxury brand Montblanc has already made a few tentative steps into the smart things space. And just as a simpler stylus preceded a fancier note digitizer, Montblanc is now ready to follow up its e-Strap accessory with a fully fledged Android Wear 2.0 smartwatch. It’s called the Summit, and there’s nothing too out of the ordinary as far as components go: A 1.39-inch (400 x 400) AMOLED display sits up front, with a Snapdragon Wear 2100 chip, 512MB of RAM and 4 gigs of storage tucked away behind. Other notable elements include a heart-rate sensor and built-in microphone, but Montblanc is under no illusion it’s pushing the boundaries of technology here. It’s much more concerned with style.

Montblanc says the release of Android Wear 2.0 presented the perfect opportunity for it to launch a smartwatch, since the latest version of Google’s wearable platform plays much nicer with iOS (thus increasing the Summit’s potential customer base). You won’t hear phrases like "wearable platform" from the mouth of any company rep, though. The sales pitch is angled more for the benefit of the fashion and vintage crowds.

The idea was to put Swiss style and the same design language as Montblanc’s 1858 analog timepiece collection into a smartwatch. As you might expect, the Summit includes a selection of exclusive watch faces — primarily digital versions of classic Montblanc designs, as well as the odd new one with stopwatch functionality and the like. Uber, Foursquare and Runtastic are also preinstalled on the wearable (with introductory promotions), for the jet-setting type that likes to stay in shape.

Naturally, the luxury brand used only the finest stainless steel and top-grade titanium to create the four 46mm-diameter bodies. There are silver and black PVD-coated stainless steel models, a silver titanium model, and a dual-color steel version with a silver body and a black watch face bezel with second markings. All versions can be paired with rubber, leather or alligator leather bands.

There’s no denying the impeccable build quality of the Summit, which is water-resistant (IP68 rating), by the way. It shows in the brushed metal with satin finish, the elaborate crown that’s actually a button, the lovingly chamfered edges all around, and the slightly domed sapphire glass that protects the AMOLED display. Despite the workmanship, the watches appear to me to be otherwise exceedingly generic, though someone with much better fashion sense than I might care to disagree. With a maximum height of 12.5mm, they’re also excessively chunky, and heart-rate sensor aside they don’t look like the type of wearable that’s particularly suited to running.

Like the recently announced TAG Heuer luxury wearable, money can’t buy you more than a day’s battery life from the Summit’s 350mAh unit. And if you hadn’t guessed by now, you’re expected to pay significantly more for Montblanc’s first smartwatch than Samsung or LG’s latest. Pricing for all the stainless steel models starts at $890 if you can live with a plain leather strap, jumping to $930 for a colored rubber strap or hand-painted navy or brown leather band. The coveted alligator strap ups that price by $50 to bring the grand total to $980. This is the starting price for the titanium model, which increases to $1,020 and $1,070, respectively, as the bands get fancier. UK pricing starts at £765 for a steel body with standard black leather strap.

The Summit will initially go on sale online at the beginning of May in the US and UK (exclusively at Mr Porter for the first two weeks), before launching in other parts of Europe, the Middle East, Asia, India, South Africa, Mexico and Australia before the end of July.

Party Bot decides who’s on the guest list, what music to play

The content below is taken from the original (Party Bot decides who’s on the guest list, what music to play), to continue reading please visit the site. Remember to respect the Author & Copyright.

While most people in the tech business only roll into Austin once a year for SXSW, a handful of companies choose to call the city home. Fjord (formerly known as Chaotic Moon) is one of them. So, when the festival sets up around it, the company uses the week as an opportunity to show off some of its proof-of-concept (and usually fun) ideas.

Enter Party Bot: an app that uses AI in real time to make sure any gathering doesn’t get gatecrashed, and has the DJ well-informed on what to play (which is an improvement on a boring Spotify playlist for sure). Party Bot first needs to know what you look like. Peer into an app and move your face around a bit, and it’ll then be able to match your name to your face. You can complete your profile by telling it which types of music you prefer, or those that have you leaving the dance floor.

Once you’re behind the velvet rope and the Cristal is flowing, you can set up cameras/iPads around the party/club/whatever that will spot revelers in the crowd. Knowing who’s on the dance floor, and what music they prefer gives even the worst DJ a chance to ease up on the Taylor Swift, and go heavy on the Motörhead (if that’s what the audience wants). Other applications could be a simple way to grant access to VIP areas, or offer drinks promotions at the bar (or suggest maybe you’ve been at the bar a little too much).

That’s the theory of course. The reality is that Party Bot is just Fjord showing off what it can do. The app did get involved in the company’s own SXSW party, but we missed out on that (we were presumably at a better party, or one without a bot). Instead, we got to try it in a much more sober setting: a pop-trivia quiz in a hotel conference room. Party Bot recognized me, called me by my name, and then hit me with some simple questions. My challenge was to get enough right to win a T-shirt. We kinda cheated, as I didn’t know who first sang "Blue Suede Shoes." (Hint: not Elvis.) But we got given a T-shirt anyway — and uninvited to all future parties, no doubt.

Supercomputer simulation looks inside of 2011’s deadliest tornado

The content below is taken from the original (Supercomputer simulation looks inside of 2011’s deadliest tornado), to continue reading please visit the site. Remember to respect the Author & Copyright.

In May of 2011, a sequence of tornadoes roared across the midwestern United States. The incident became a focal point for scientists eager to learn what it is about supercell storms that allows them to form such devastating tornadoes. It’s an important field of study, but a challenging one – these storms are so enormous there’s simply too much data for typical methods to work through. So, what’s an atmospheric scientist to do? Use a supercomputer, of course.

Leigh Orf at the University of Wisconsin-Madison had the 2011 storm simulated by the University of Illinois’ Blue Waters machine – tasking the supercomputer with breaking the enormous supercell into almost two billion small chunks spread over a 75-square-mile area. The wind speed, temperature, pressure, humidity and precipitation of each of those smaller sections were individually calculated before reassembling the bits into one large recreation of the entire storm. The task took three days and 20,000 of Blue Waters’ processing cores, but it was worth it.

"For the first time, we’ve been able to peer into the inner workings of a supercell that produces a tornado," Orf says. "We have the full storm, and we can see everything going on inside of it." This lets his team directly study how these deadly twisters are formed from the inside out. It also gives us a hauntingly beautiful video of the storm’s formation to watch.

It’s a research problem that couldn’t have been solved any other way, too – not so much because the weather is complex, Orf says, but because there’s just too much data to be handled any other way. "This type of work needs the world’s strongest computers just because the problem demands it," he told Popular Science. "There’s no way around it."

Source: Popular Science

Tizeti is bringing wireless internet to urban Africa

The content below is taken from the original (Tizeti is bringing wireless internet to urban Africa), to continue reading please visit the site. Remember to respect the Author & Copyright.

Internet accessibility in emerging markets is a huge challenge.

Facebook and Alphabet have spent millions on internet-enabled drones, balloons, and other services to solve the last-mile problem of connectivity in rural markets in developing nations, but that addresses a problem affecting a smaller population than connectivity in cities does.

Meanwhile, a new company called Tizeti, which is graduating from the latest batch of startups to come from Y Combinator, is proposing a simple solution to the connectivity problem… Build more towers, more cheaply, and offer internet services at a cost that makes sense for consumers in the urban environments where most people actually live.

“There’s a ton of capacity going to 16 submarine cables [coming into Africa],” says Tizeti founder Kendall Ananyi. “The problem is getting the internet to the customers. You have balloons and drones and that will work in the rural areas but it’s not effective, in urban environments. We solve the internet problem in a dense area.”

It’s not a radical concept, and it’s one that’s managed to net the company 3,000 subscribers already and nearly $1.2 million in annual recorded revenue, according to Ananyi.

“There are 1.2 billion people in Africa, but only 26% of them are online and most get internet over mobile phones,” says Ananyi. Perhaps only 6% of that population has an internet subscription, he said.

 

Tizeti is angling to solve that problem in Nigeria by offering unlimited internet access through its wi-fi towers at a cost that he claims is affordable for Africa’s emerging middle class.

The company’s subscriptions start at $30 per month.

Tizeti’s business model wrings cost efficiencies from every part of the development process, he said. The company saves money on siting and development by offering free wi-fi services to the owners of the land where the company builds its 100-foot-tall wi-fi towers. The towers themselves are powered by solar modules instead of electricity from the grid or an on-site generator.

In all, Ananyi says that the 35 towers his company built in Lagos cost roughly $7,000, vs. $17,000 per month if the company was using standard generators.

For the basic service, customers get wi-fi internet at a speed of 10mbps. Typical customers will use anywhere from 100 gigs to 1 terabyte per month, says Ananyi.

While the company’s service is typically offered in the home, it is getting set to debut a new universal hotspot service for anyone with a cell phone. That cheaper plan can be made available to anyone who has a wi-fi enabled device, says Ananyi.

“It’s for downmarket users who can’t afford the setup costs for in-home,” says Ananyi.  “Anybody who wants to can connect and pay smaller amounts for a day a week or a month.”

For Ananyi, a former Microsoft employee who then went to work for ExxonMobil on some of their deepwater projects off the Nigerian coast, the chance to work on Internet infrastructure was the answer to a larger problem. He and his co-founder Ifeanyi Okonkwo, a former Blackberry employee, initially wanted to do video-on-demand.

“We found out that there wasn’t enough internet,” says Ananyi. “We decided a bigger opportunity was to go after the internet problem itself.”

Featured Image: Anton Balazh/Shutterstock

The Atomo Modular Electronics System is like LEGO for electronics

The content below is taken from the original (The Atomo Modular Electronics System is like LEGO for electronics), to continue reading please visit the site. Remember to respect the Author & Copyright.

One of the hardest things about Raspberry Pi and Arduino is figuring out where to stick all the pieces. While both of these systems work well alone – you can have a lot of fun with just a board and an Internet connection – it’s also fun to add little things like printers and screens to make fun projects. That’s where the Atomo comes in.

This modular kit comes from Jonathan Buford, a Hong Kong-based maker, and it’s certified by Arrow Electronics, a major manufacturer.

“Atomo is a replacement for Arduino and an accessory for Raspberry Pi,” said Buford. He hopes to make it easier to build more complex projects with Raspberry Pis without requiring extensive electronics know-how.

There are a number of modules, including I/O boards, network expansion boards, and even power supplies for bigger projects. For example, the project below has four I/O adapters and a power supply all connected to a Raspberry Pi. This means you can do some really interesting things with robotics and even hydroponics with a package the size of a hunk of cheese.

The kit costs $39 for early birds and should ship in June. You’ll be able to buy more packages and mix and match them as necessary.

“We’ve made all of our controllers compatible with the HAT connector on the Raspberry Pi. This lets you program on the Pi and update the controller. Or just use the Atomo as a modular HAT. This is perfect for ROS robots or any system where you have the Pi for processing or interface but need more power, IO, or real time control,” wrote Buford.

I, for one, welcome our Raspberry Pi and Arduino compatible robot overlords.

Windows Vista has just 30 days to live

The content below is taken from the original (Windows Vista has just 30 days to live), to continue reading please visit the site. Remember to respect the Author & Copyright.

In a month’s time, Microsoft will put Windows Vista to rest once and for all. If you’re one of the few people still using it, you have just a few weeks to find another option before time runs out.

After April 11, 2017, Microsoft will no longer support Windows Vista: no new security updates, non-security hotfixes, free or paid assisted support options, or online technical content updates, Microsoft says. (Mainstream Vista support expired in 2012.) Like it did for Windows XP, Microsoft has moved on to better things after a decade of supporting Vista.

As Microsoft notes, however, running an older operating system means taking risks—and those risks will become far worse after the deadline. Vista’s Internet Explorer 9 has long since expired, and the lack of any further updates means that any existing vulnerabilities will never be patched—ever. Even if you have Microsoft’s Security Essentials installed—Vista’s own antivirus program—you’ll only receive new signatures for a limited time.

The good news is that only a handful of computer users will have to make the switch. According to NetMarketshare, the desktop share of Windows Vista was just under 2 percent two years ago, in March, 2015. Today, it’s at 0.78 percent—about half of Windows 8’s 1.65 percent, according to the firm. (A certain percentage of Windows users simply don’t care, however; Windows XP’s market share stands above 8 percent, and support for that operating system expired in April, 2014.)

Few may miss Windows Vista, but the desktop gadgets were awfully cute. 

Vista was never one of Microsoft’s beloved operating systems, although PCWorld reviewers were certainly kind. Annoyances like User Account Control and the introduction of digital rights management played a role in hurrying user adoption of its successor, Windows 7, though Vista’s desktop gadgets were certainly nice. (Extended support for Windows 7 ends in January, 2020, incidentally.)

Naturally, Microsoft hopes that any users moving from Windows Vista will migrate to Windows 10. Microsoft is even offering the Laplink migration software for half off, or $14.95. The important thing, though, is to move from Windows Vista to something more modern.

Why this matters: Even if you’re not part of the small group clinging to Windows Vista, its demise reinforces Microsoft’s efforts to pull Windows users into the present day. Other software companies are following suit: Firefox has let go of XP and Vista users. Google Drive is kicking them to the curb. Windows Vista isn’t safe, it wasn’t loved, and the risk that some site will steal your email or bank account information is real. It’s time to move on.

This story, “Windows Vista has just 30 days to live” was originally published by PCWorld.

Disaster recovery for applications, not just virtual machines using Azure Site Recovery

The content below is taken from the original (Disaster recovery for applications, not just virtual machines using Azure Site Recovery), to continue reading please visit the site. Remember to respect the Author & Copyright.

Let’s say your CIO stops by one day and asks you, "What if we are hit by an unforeseen disaster tomorrow? Do you have the confidence to be able to run our critical applications on the recovery site, and guarantee that our users will be able to connect to their apps and conduct business as usual?" Note that your CIO is not going to ask you about just recovering your servers or virtual machines, the question is always going to be about recovering your applications successfully. So why is it that many disaster recovery offerings stop at just booting up your servers, and offer no promise of actual end to end application recovery? What makes Azure Site Recovery different that allows you as the business continuity owner to sleep better?

To answer this, let’s first understand what an application constitutes:

  • A typical enterprise application comprises multiple virtual machines spanning different application tiers.
  • These different application tiers mandate write-order fidelity for data correctness.
  • The application may also require its virtual machines to boot up in a particular sequence for proper functioning.
  • A single tier will likely have two or more virtual machines for redundancy and load balancing.
  • The application may have different IP address requirements, either use DHCP or require static IP addresses.
  • A few virtual machines may require a public IP address or DNS routing for end-user internet access.
  • A few virtual machines may need specific ports to be open or have security certificate bindings.
  • The application may rely on user authentication via an identity service like Active Directory.

To recover your applications in the event of a disaster, you need a solution that facilitates all of the above, gives you the flexibility to potentially do more application-specific customizations post recovery, and does everything at an RPO and RTO that meets your business needs. Using traditional backup solutions to achieve true application disaster recovery is extremely cumbersome, error prone and not scalable. Even many replication-based products only recover individual virtual machines and cannot handle the complexity of bringing up a functioning enterprise application.

Azure Site Recovery combines a unique cloud-first design with a simple user experience to offer a powerful solution that lets you recover entire applications in the event of a disaster. How do we achieve this?

With support for single and multi-tier application consistency and near continuous replication, Azure Site Recovery ensures that no matter what application you are running, shrink-wrapped or homegrown, you are assured of a working application when a failover is issued.

Many vendors will tell you that having a crash-consistent disaster recovery solution is good enough, but is it really? With crash consistency, in most cases, the operating system will boot. However, there are no guarantees that the application running in the virtual machines will work because a crash-consistent recovery point does not ensure correctness of application data. As an example, if a transaction log has entries that are not present in the database, then the database software needs to rollback until the data is consistent, in the process significantly increasing your RPO. This will cause a multi-tier application like SharePoint to have very high RTO, and even after the long wait it is still uncertain that all features of the application will work properly.

To avoid these problems, Azure Site Recovery not only supports application consistency for a single virtual machine (application boundary is the single virtual machine), we also support application consistency across multiple virtual machines that compose the application.

Most multi-tier real-world applications have dependencies, e.g. the database tier should come up before the app and web tiers. The heart and soul of the Azure Site Recovery application recovery promise is extensible recovery plans, that allow you to model entire applications and organize application aware recovery workflows. Recovery plans are comprised of the following powerful constructs:

  • Parallelism and sequencing of virtual machine boot up to ensure the right recovery order of your n-tier application.
  • Integration with Azure Automation runbooks that automate necessary tasks both outside of and inside the recovered virtual machines (a sketch follows this list).
  • The ability to perform manual actions to validate recovered application aspects that cannot be automated.
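
To make the runbook idea concrete, here is a rough sketch of the kind of post-failover task such an automation step might perform: attaching a public IP address to a recovered virtual machine's NIC. Azure Site Recovery runbooks are typically written in PowerShell inside Azure Automation; the Python below only illustrates the same logic using the classic Azure management SDK, and the resource names, region and credentials are all placeholders:

    # Illustrative sketch (not an actual ASR runbook): attach a public IP
    # to a failed-over VM's NIC using the classic Azure SDK for Python.
    # Resource group, NIC name, region and credentials are placeholders.
    from azure.common.credentials import ServicePrincipalCredentials
    from azure.mgmt.network import NetworkManagementClient

    credentials = ServicePrincipalCredentials(
        client_id="<app-id>", secret="<app-secret>", tenant="<tenant-id>")
    network_client = NetworkManagementClient(credentials, "<subscription-id>")

    # Create (or update) a public IP address in the recovery region.
    public_ip = network_client.public_ip_addresses.create_or_update(
        "recovery-rg", "web01-pip",
        {"location": "westus", "public_ip_allocation_method": "Dynamic"},
    ).result()

    # Bind it to the recovered VM's primary NIC so users can reach the app.
    nic = network_client.network_interfaces.get("recovery-rg", "web01-nic")
    nic.ip_configurations[0].public_ip_address = public_ip
    network_client.network_interfaces.create_or_update(
        "recovery-rg", "web01-nic", nic).result()

In a real deployment, logic like this would live in an Azure Automation runbook referenced from the recovery plan, so it runs automatically as part of the failover.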

Your recovery plan is what you will use when you push the big red button and trigger a single-click, stress-free, end to end application recovery with a low RTO.

Azure Site Recovery - Recovery Plan

Another key challenge for many of these multi-tier applications to function properly is network configuration post recovery. With advanced network management options to provide static IP addresses, configure load balancers, or use traffic manager to achieve low RTOs, Azure Site Recovery ensures that user access to the application in the event of a failover is seamless.

A common myth around protecting your applications stems from the fact that many applications come with built-in replication technologies – hence the question: why do you need Azure Site Recovery?

The simple answer:

Replication != Disaster Recovery

Azure Site Recovery is Microsoft’s single disaster recovery product that offers you a choice to work with different first and third-party replication technologies, while providing an in-built replication solution for those applications where there is no native replication construct, or native replication does not meet your needs. As mentioned earlier, getting application data and virtual machines to the recovery site is only a piece of what it takes to bring up a working application. Whether Azure Site Recovery replicates the data or you use the application’s built-in capability for this, Azure Site Recovery does the complex job of stitching together the application, including boot sequence, network configurations, etc., so that you can fail over with a single click. In addition, Azure Site Recovery allows you to perform test failovers (disaster recovery drills) without production downtime or replication impact, as well as failback to the original location. All these features work with both Azure Site Recovery replication and with application-level replication technologies. Here are a few examples of application-level replication technologies Azure Site Recovery integrates with:

  • Active Directory replication
  • SQL Server Always On Availability Groups
  • Exchange Database Availability Groups
  • Oracle Data Guard

So, you ask, what does this really mean? Azure Site Recovery provides you with powerful disaster recovery application orchestration no matter whether you choose to use its built-in replication for all application tiers or mix and match native application level replication technologies for specific tiers, e.g. Active Directory or SQL Server. Enterprises have various reasons why they may go with one or the other replication choice, e.g. tradeoffs between no data loss and cost and overhead of having an active-active standby deployment. The next time you get asked, why do you need Azure Site Recovery when say you already have SQL Server Always On Availability Groups, do make sure you clarify that having application data replicated is necessary but not sufficient for disaster recovery, and Azure Site Recovery complements native application level replication technologies to provide you a full end to end disaster recovery solution.

Application Level and Azure Site Recovery Replication in Single Recovery Plan

We have learnt from our enterprise customers, who are protecting hundreds of applications using Azure Site Recovery, what the most common deployment patterns and popular application topologies are. So not only does Azure Site Recovery work with any application, Microsoft also tests and certifies popular first and third-party application suites, a list that is constantly growing.

Azure Site Recovery Certified Applications

As part of this effort to test and provide Azure Site Recovery solution guides for various applications, Microsoft provides a rich Azure Automation library with production-ready, application specific and generic runbooks for most common automation tasks that enterprises need in their application recovery plans.

Azure Site Recovery Automation Gallery

Let’s close with a few examples:

  • An application like SharePoint typically has three tiers with multiple virtual machines that need to come up in the right sequence, and requires application consistency across the virtual machines for all features to work properly. Azure Site Recovery solves this by giving you recovery plans and multi-tier application consistency.
  • Opening a port / adding a public IP / updating DNS on an application’s virtual machine, having an availability set and load balancer for redundancy and load management, are examples of common asks of all enterprise applications. Microsoft solves this by giving you a rich automation script library for use with recovery plans, and the ability to set up complex network configurations post recovery to reduce RTO, e.g. setting up Azure Traffic Manager.
  • Most applications will need an Active Directory / DNS deployed and use some kind of database, e.g. SQL Server. Microsoft tests and certifies Azure Site Recovery solutions with Active Directory replication and SQL Server Always On Availability Groups.
  • Enterprises always have a number of proprietary business critical applications. Azure Site Recovery protects these with in-built replication and lets you test your application’s performance and network configuration on the recovery site using the test failover capability, without production downtime or replication impact.

With relentless focus on ensuring that you succeed with full application recovery, Azure Site Recovery is the one-stop shop for all your disaster recovery needs. Our mission is to democratize disaster recovery with the power of Microsoft Azure, to enable not just the elite tier-1 applications to have a business continuity plan, but offer a compelling solution that empowers you to set up a working end to end disaster recovery plan for 100% of your organization’s IT applications.

You can check out additional product information and start replicating your workloads to Microsoft Azure using Azure Site Recovery today. You can use the powerful replication capabilities of Azure Site Recovery for 31 days at no charge for every new physical server or virtual machine that you replicate, whether it is running on VMware or Hyper-V. To learn more about Azure Site Recovery, check out our How-To Videos. Visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers, or use the Azure Site Recovery User Voice to let us know what features you want us to enable next.