Mercedes unveils world’s first completely electric semi truck

The content below is taken from the original (Mercedes unveils world’s first completely electric semi truck), to continue reading please visit the site. Remember to respect the Author & Copyright.

European car companies are starting to invest more heavily in green vehicles. Audi unveiled three more electric cars on Monday, and Porsche added 400 jobs to the number it estimates it will create to bring its electric Model E to life. Today, Daimler revealed a milestone of its own outside the consumer space: the first non-fossil-fuel big rig, the Mercedes-Benz Urban eTruck.

Like most electric vehicles, the eTruck is relatively whisper-quiet, especially compared with a typical diesel truck. With a weight capacity of 29 US tons (26 metric tonnes), it's the first electric big-rig concept to hit the road, beating out the semi that Tesla announced it was working on last week.

Of course, big rigs move freight across long distances, so the eTruck's current 124-mile maximum range likely won't be adequate for long hauls. But the "Urban" prefix denotes its use case: a clean, quiet load-bearing vehicle ideal for cities. Daimler has already heavily tested close-range hauling with its Fuso Canter E-Cell pilot program, sending the all-electric light trucks, each with a 4.8-tonne capacity, around Portugal last fall. The eTruck scales that concept up to the loads and conditions typically endured by semis.

Daimler doesn't expect its electric truck to roll off the assembly lines until early in the next decade, according to its press release. By then, the company estimates, technological improvements will drive battery costs down by a factor of 2.5 and efficiency up by the same factor.

Via: Popular Mechanics

Source: Daimler Trucks press release

O365 Admin Center v3.0 Update

The content below is taken from the original (O365 Admin Center v3.0 Update), to continue reading please visit the site. Remember to respect the Author & Copyright.

Hello /r/office365!

I haven't posted in a while, but I wanted to share the O365 Admin Center v3 update, which I believe to be the biggest one yet!


Description: The O365 Admin Center is an application written mainly in PowerShell that lets administrators easily and quickly manage their Office 365 environment. It allows partner accounts to connect to all of their tenants. You can manage Exchange Online, Skype For Business, SharePoint and Compliance Center.


LINKS


CHANGELOG:

You can view the entire changelog here. Listed below are some of the biggest improvements and new features.

NEW:

  • Services: Manage ALL Office 365 services! Exchange Online, Compliance Center, SharePoint, Skype For Business. Select the service(s) you want to connect to, disconnect from services.

  • Services: Each individual service has its own tab with sorted commands. If you want to run an Exchange report, simply go to the Exchange tab

  • Auto Updates: O365 Admin Center will now check to see if it’s on the latest version upon launch. You can also check manually within the program

  • Form: No more manually entering in the UPN when modifying something for a user! A popup will ask you for the user and have a combo-box with all users listed so all you have to do is select one!

  • Form: When selecting a date, such as getting quarantine information or message trace between two dates, you can now use a calendar to pick the beginning and ending dates instead of typing “03/04/2016”

  • Form: When adding a license to a user it will load the combo-box with users that are not licensed, not a list of all users. It will also load all licenses that have at least 1 available. Similarly, when removing a license from a user it will only load a combo-box of that user's licenses. The same theme is used throughout the program.

  • Default Tabs: The first service you connect to will be the default tab. So if you only connect to Exchange it will load that tab as default after connecting. No need to select the tab manually.

  • Context Menu: Right click in the textbox to cut, copy, paste, clear screen, and select all

  • Form: Save As and Print features available. It will load the regular windows UI so you can select where to save or what printer to print to instead of typing it in manually.

  • Progress Bar: See the progress when doing items like disconnecting, connecting and more.

  • Safe Guards: When doing things such as deleting a user, deleting all users from the recycle bin, changing all users' passwords, or disabling ActiveSync for everyone, it will ask if you are sure prior to running the task. If you say no, it will not run. This mainly applies when changing something for all users

  • Calendar Permissions: When removing someone from a user's calendar permissions, the combo box will load with all users that currently have permission on that user's calendar

FIXED/CHANGED:

  • Export to File: Exporting results to a file lets the administrator browse their file system to find the location to save and name the file instead of entering it as plain text. Thanks to /u/byronnnn

  • Pre-Reqs: Changed pre-reqs if you want to manage SharePoint and Skype For Business

  • Fixed formatting issues on Windows 7 machines

  • When connecting to an account that has no tenants the combo box will be disabled along with the connect to tenant button


SCREENSHOTS


TIP! The results textbox allows custom command entry! Simply type a command or paste a full script in there and press "Enter" on your keyboard or click the "Run Command" button; the command will be passed to PowerShell in the active PSSession, with the results displayed in the same textbox!

Getting comfortable with cloud-based security: Who to trust to do what

The content below is taken from the original (Getting comfortable with cloud-based security: Who to trust to do what), to continue reading please visit the site. Remember to respect the Author & Copyright.

There are some bits of computing that you just don’t want to trust other people with. They’re just too sensitive. But at the same time, there are some things that people can do as well or better than you, for a lower cost.

Finding a balance between the two can be tricky, but useful. Take cybersecurity as an example.

It’s devilishly difficult to do the whole thing properly, because there are too many moving parts. Some of these moving parts are easily identifiable and clearly defined.

Others extend far beyond off-the-shelf products, reaching into your organization in unpredictable and nuanced ways. Outsourcing some of the clearly defined tasks can free you up to concentrate on the stickier bits of cybersecurity that are subtler and more specific to your company.

Everyday security tasks

The everyday tasks are the things that you should be running on every packet in your network, such as filtering emails, checking attachments for malware, and watching where your employees surf online to stop their browsers getting nobbled. These things have been done in-house for years in a couple of ways, but each has its disadvantages.

Firstly, you can buy a variety of best-of-breed tools to handle these various tasks independently, and then spend all your time configuring and maintaining them, and trying to get a single view of what they’re all doing. That takes expertise which many firms – especially SMBs – don’t have.

Secondly, you can buy a single product, like a unified threat management appliance, that claims to do most of the heavy lifting for you. These devices are often configured once and then maintained by the vendor, in what amounts to a managed CPE deal. That can work well, but unless you have some kind of financing deal you might find yourself investing a decent amount of capital in the thing, and will then have to upgrade it occasionally. It also means that you’re locked in with a single vendor.

Moving these everyday security tasks to the cloud is becoming an increasingly viable alternative. Cloud-based security service providers have been gradually nibbling away at cyber security services, processing your network packets before they reach your premises.

Analysts think that this market is set for growth. IDC reckoned at the end of 2014 that a third of all security would be delivered online by 2018. Should you move these cyber security measures to the cloud? Here are a few things to think about.

Cost

The savings when switching from capex to opex can often be irresistible, but will your cloud security provider save you money in the long run? If you’re paying by the seat, as most users of basic cloud security services do, think about your expansion plans and crunch the numbers to find out when the service becomes more expensive than the other two options. Don’t end up suffering from ‘cloud shock’ by underestimating the cost for the entire user base – now, or in the future.
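To make that number-crunching concrete, here is a toy Python sketch comparing a per-seat cloud service against an appliance over three years. Every figure in it is hypothetical and should be replaced with your own quotes.

```python
# All figures below are hypothetical placeholders -- substitute real quotes.
users = 120                      # seats today
growth_per_year = 0.25           # expected headcount growth
per_seat_monthly = 4.00          # cloud security service, per user per month
appliance_capex = 12000          # unified threat management appliance, up front
appliance_opex_yearly = 3000     # maintenance / refresh reserve per year

cumulative_cloud = 0.0
for year in range(1, 4):
    cumulative_cloud += users * per_seat_monthly * 12
    appliance_total = appliance_capex + appliance_opex_yearly * year
    print("year {}: cloud ~ ${:,.0f}, appliance ~ ${:,.0f}".format(
        year, cumulative_cloud, appliance_total))
    users = int(users * (1 + growth_per_year))   # next year's seat count
```

The crossover point, if there is one, is where the cumulative per-seat spend overtakes the appliance total; that is the 'cloud shock' moment to plan around.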

Scope

One of the key benefits of cloud-based security services is complexity reduction, so check to see which services are covered. You may have to cherry pick a couple of cloud cyber security vendors to get the full feature set that you want. That has ramifications for…

…Visibility

The quality of reporting should feature heavily in your cloud cyber security strategy. Visibility is a function of coverage here. The cloud can be a "fire and forget" option, but ideally you'll want to get an understanding of what's happening to your network traffic, along with some threat intelligence data. Some visibility into the cloud provider's own operation wouldn't go amiss either – are there any planned or unplanned outages? Can you check the billing cycle and easily file support tickets?

Getting clever

Depending on your risk tolerance, you can move more advanced cyber security functions into the cloud, such as identity management. Endpoint security and server-based intrusion prevention are also on the radar, although these can entail the use of on-premise software agents.

If you’re taking cloud cyber security beyond the basics to this level, then start to ask how it might integrate with any existing cyber security resources that you have left in-house, such as your security information and event management system, or any inline hardware-based IPS monitoring internal network traffic that you don’t yet want to give up.

If cloud is the way you go, it will free you up to think more strategically about your cyber security stance. Tools alone don’t make an organisation safe. The real challenge lies in a few other critical areas. These stickier cyber security tasks include user awareness, which is a deep challenge that goes way beyond a couple of finger-wagging education sessions.

Process refinement is another. Formalised patch testing and management is one of a set of strategies that can help to eliminate 85 per cent of intrusions, according to the Australian Signals Directorate – although this is now available as a managed service, too.

Putting cybersecurity in the cloud can make sense, but the journey involves an understanding of the economics involved, the capabilities that you want to outsource – and how you’re going to bolster your security still further with the spare cash and human resources left behind. ®


Here are the key security features coming to Windows 10 next week

The content below is taken from the original (Here are the key security features coming to Windows 10 next week), to continue reading please visit the site. Remember to respect the Author & Copyright.

While there’s a lot of talk about Windows 10’s new features for consumers, the forthcoming Anniversary Update also adds a pair of advanced security capabilities aimed at helping IT managers better lock down the computers in their organization.

Windows Information Protection aims to make it possible for organizations to compartmentalize business and personal data on the same device. It comes alongside the general release of Windows Defender Advanced Threat Protection, a system that uses machine learning and Microsoft’s cloud to better protect businesses after their security has been breached.

The two features are part of Microsoft’s push to position Windows 10 as an operating system for security-conscious companies at a time when attacks against businesses seem to be more prevalent than ever. That could be a major selling point at a time when Microsoft is working hard to try and drive companies to deploy the new OS.

Using Windows Information Protection, companies can encrypt their data on employee devices using keys that are controlled by IT. Doing so is supposed to bring several benefits, including the ability to selectively wipe only company data from a personal device when an employee leaves the company.

Companies can also set policies about which applications can be used to handle business data, so users can’t live-tweet the content of a company’s HR system, for example. The whole system is designed to bring Windows 10 in line with the reality that many employees use their mobile devices for both personal and business use.

For businesses to use Windows Information Protection, they’ll need a Windows 10 Enterprise E3 subscription, which costs $7 per user per month.

While Windows Information Protection is designed to help proactively guard company data, Windows Defender Advanced Threat Protection is supposed to help companies detect and contain security breaches. It uses a combination of software running on client devices and a Microsoft cloud service to alert companies when it looks like their systems have been hacked.

Once the system has detected a breach, it suggests steps that IT managers can take to solve the problem. That’s important, according to Rob Lefferts, director of program management for Windows Enterprise and Security. Attackers will often try to place multiple back doors in a company’s systems once they’ve broken in, and failing to get them out will cause problems.

Windows Defender ATP requires a company to be subscribed to the more expensive Windows 10 Enterprise E5 service, which is meant for companies looking for premium Windows 10 add-on features. Microsoft won't disclose the pricing publicly, but said companies can find out more by asking one of its partners.

It will be interesting to see how these features affect the rate at which businesses adopt Windows 10 — if at all. Microsoft is betting big on security’s enterprise appeal to try and get businesses to upgrade, but these advanced capabilities require both an investment of time and money in order to get off the ground.

What if you could run the same, everywhere?

The content below is taken from the original (What if you could run the same, everywhere?), to continue reading please visit the site. Remember to respect the Author & Copyright.

Posted by Miles Ward, Global Head of Solutions

Is multi-cloud a pipe dream? I think not!

From startups to enterprises, infrastructure continues to come at substantial cost, despite material increases in the efficiency and price-to-performance ratio of the compute, network and storage resources we all use. It can also be a real risk driver; each implementation choice affects the future scalability, service level and flexibility of the services being built. It's fair to say that "future-proofing" should be the primary concern of every system architect.

Providers of infrastructure aren't disinterested actors either; there are huge incentives for any vendor to increase lock-in through contractual, fiscal and technical constrictions. In many cases, interest in cloud infrastructure, particularly among existing consumers of infrastructure, has been driven by a huge urge to break free of existing enterprise vendor relationships in which the lock-in costs are higher than the value provided. Once they have some kind of lock-in working, infrastructure companies know that they can charge higher rents without necessarily earning them.

So, how can you swing the power dynamic around so that you, as the consumer of infrastructure, get the most value out of your providers at the lowest cost?

A good first step is to actively resist lock-in mechanisms. Most consumers have figured out that long-term contractual commitments can be dangerous. Most have figured out that pre-paid arrangements distort decision-making and can be dangerous. Technical lock-in remains one of the most difficult to avoid. Many providers wrap valuable differentiated services in proprietary APIs so that applications eventually get molded around their design. These "sticky services" or "loss leaders" create substantial incentives for tech shops to take the shorter path to value and accept a bit of lock-in risk. This is a prevalent form of technical debt, especially when new vendors release even more powerful and differentiated tools in the same space, or when superior solutions rise out of OSS communities.

In the past, some companies tried to help users get out from under this debt by building abstraction layers on top of the proprietary APIs from each provider, so that users could use one tool to broker across multiple clouds. This approach has been messy and fragile, and tends to compromise to the lowest-common denominator across clouds. It also invites strategic disruption from cloud providers in order to preserve customer lock-in.

Open architectures

Thankfully, this isn’t the only way technology works. It’s entirely possible to efficiently build scaled, high performance, cost-efficient systems without accepting unnecessary technical lock-in risk or tolerating the lowest-common denominator. You can even still consume proprietary infrastructure products, as long as you can prove to yourself that because those products expose open APIs, you can move when you want to. This is not to say that this isn’t complex, advanced work. It is. But the amount of time and effort required is shrinking radically every day. This gives users leverage; as your freedom goes up, it becomes easier and easier to treat providers like the commodities they ought to be.

We understand the value of proprietary engineering. We've created a purpose-built cloud stack, highly tuned for scale, performance, security, and flexibility. We extract real value from this investment through our advertising and applications businesses as well as our cloud business. But GCP, along with some other providers and members of the broader technology community, recognizes that when users have power, they can do powerful things. We've worked hard to deliver services that are differentiated by their performance, stability and cost, but not by proprietary, closed APIs. We know this means that you can stop using us when you want to; we think that gives you the power to use us at lower risk. Some awesome folks have started calling this approach GIFEE, or "Google Infrastructure For Everyone Else." But given the overwhelming participation and source code contributions — including those for Kubernetes — from individuals and companies of all sizes to the OSS projects involved, it's probably more accurate to call it Everyone's Infrastructure, For Every Cloud — unfortunately that's a terrible acronym.

A few salient examples:

Applications can run in containers on Kubernetes, the OSS container orchestrator that Google helped create, either managed and hosted by us via GKE, or on any provider, or both at the same time.

Kubernetes ensures that your containers aren’t locked in.
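As a small illustration of that portability, here is a minimal sketch assuming the official kubernetes Python client and an existing kubeconfig context; the same few lines list pods whether the cluster is GKE, self-hosted, or running at another provider.

```python
# pip install kubernetes  (the official Python client)
from kubernetes import client, config

# Reads the active context from ~/.kube/config -- GKE, on-prem, or any
# other conformant cluster looks identical from here.
config.load_kube_config()

v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)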



Web apps can run in a PaaS environment like AppScale, the OSS application management framework, either managed and hosted by us via Google App Engine, or on any provider, or both at the same time. Importantly, this includes the NoSQL transactional stores required by apps, either powered by AppScale, which uses Cassandra as a storage layer and vends the Google App Engine Datastore API to applications, or native in App Engine.

AppScale ensures that your apps aren’t locked in.



NoSQL key-value stores can run on Apache HBase, the OSS NoSQL engine inspired by our Bigtable whitepaper, either managed and hosted by us via Cloud Bigtable, or on any other provider, or both at the same time.

HBase ensures that your NoSQL isn’t locked in.

OLAP systems can be run using Druid or Drill, two OSS OLAP engines inspired by Google’s Dremel system. These are very similar to BigQuery, and allow you to run on any infrastructure.  
Druid and Drill ensure that your OLAP system isn’t locked in.

Advanced RDBMS can be built in Vitess, the OSS MySQL toolkit we helped create, either hosted by us inside Google Container Engine, or on any provider via Kubernetes, or both at the same time. You can also run MySQL fully managed on GCP via CloudSQL.

Vitess ensures that your relational database isn’t locked in.



Data orchestration can run in Apache Beam, the OSS ETL engine we helped create, either managed and hosted by us via Cloud Dataflow, or on any provider, or both at the same time.

Beam ensures that your data ETL isn’t locked in.
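For a flavour of what that portability looks like in practice, here is a minimal word-count sketch, assuming a recent apache_beam Python release and Python 3; swapping the --runner flag (plus the usual project and staging options) is all it takes to move it from the local DirectRunner to Cloud Dataflow.

```python
import apache_beam as beam

# The pipeline definition is identical regardless of where it runs;
# only the runner flag (and provider-specific options) changes.
with beam.Pipeline(argv=['--runner=DirectRunner']) as p:
    (p
     | 'Read'  >> beam.Create(['alpha', 'beta', 'alpha', 'gamma'])
     | 'Pair'  >> beam.Map(lambda word: (word, 1))
     | 'Count' >> beam.CombinePerKey(sum)
     | 'Print' >> beam.Map(print))
```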



Machine Learning can be built in TensorFlow, the OSS ML toolkit we helped create, either managed and hosted by us via CloudML, or on any provider, or both at the same time.

TensorFlow ensures that your ML isn’t locked in.
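As a tiny sketch of that point, written against the TensorFlow 1.x graph API, the same training script runs unchanged on a laptop, on a VM at any provider, or submitted as a Cloud ML training job.

```python
import tensorflow as tf  # TensorFlow 1.x-style graph API

# Toy linear regression: learn y = 2x from three points.
x = tf.placeholder(tf.float32, shape=[None])
y = tf.placeholder(tf.float32, shape=[None])
w = tf.Variable(0.0)
b = tf.Variable(0.0)
loss = tf.reduce_mean(tf.square(w * x + b - y))
train = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train, feed_dict={x: [1.0, 2.0, 3.0], y: [2.0, 4.0, 6.0]})
    print(sess.run([w, b]))   # w approaches 2.0, b approaches 0.0
```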



Object storage can be built on Minio.io, which vends the S3 API via OSS, either managed and hosted by us via GCS (which also emulates the S3 API), or on any provider, or both at the same time.

Minio ensures that your object store isn’t locked in.
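A minimal sketch of that idea, assuming the boto3 library: the only things that change between a self-hosted Minio server, AWS S3 itself, or GCS's S3-compatible XML endpoint are the endpoint URL and the credentials. The endpoint and keys below are placeholders.

```python
import boto3

# Point the generic S3 client at whichever backend you like; the
# application code stays the same.
s3 = boto3.client(
    's3',
    endpoint_url='https://minio.example.com:9000',  # placeholder: your Minio/GCS/S3 endpoint
    aws_access_key_id='ACCESS_KEY',                 # placeholder credentials
    aws_secret_access_key='SECRET_KEY',
)

for bucket in s3.list_buckets().get('Buckets', []):
    print(bucket['Name'])
```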

Continuous Deployment tooling can be delivered using Spinnaker, a project started by the Netflix|OSS team, either hosted by us via GKE, or on other providers, or both at the same time.

Spinnaker ensures that your CD tooling isn’t locked in.


What’s still proprietary, but probably OK?

CDN, DNS, Load Balancing

Because the interfaces to these kinds of services are network configurations rather than code, so far these have remained proprietary across providers. NGINX and Varnish make excellent OSS load balancers/front-end caches, but because of the low-friction, low-risk switchability, there's no real need to avoid DNS or LB services on public clouds. Now, if you're doing some really dynamic stuff and writing code to automate, you can use Denominator by Netflix to help keep your code connecting to open interfaces.

File Systems

These are still pretty hard for cloud providers to deliver as managed services at scale; Gluster FS, Avere, ZFS and others are really useful to deliver your own POSIX layers irrespective of environment. If you’re building inside Kubernetes, take a look at CoreOS’s Torus project.

It’s not just software, it’s the data
Lock-in risk comes in many forms, one of the most powerful being data gravity or data inertia. Even if all of your software can move between infrastructures with ease, those systems are connected by a limited, throughput constrained internet and once you have a petabyte written down, it can be a pain in the neck to move. What good is software you can move in a minute if it takes a month to move the bytes?

There are lots of tools that help, both native from GCP, and from our growing partner ecosystem.

  • If your data is in an object store, look no further than the Google Storage Transfer Service, an easy automated tool for moving your bits from A to G.
  • If you have data on tape or disk, take a look at the Offline Media Import/Export service.
  • If you need to regularly move data to and from our cloud, take a look at Google Cloud Interconnect for leveraging carriers or public peering points to connect reliably with us.
  • If you have VM images you’d like to move to cloud quickly we recommend Cloud Endure to move and transform your images for running on Google Compute Engine.
  • If you have a database you need replicated, take a look at Attunity CloudBeam. If you’re trying to migrate bulk data, try FDT from CERN.
  • If you’re doing data imports, perhaps Embulk.

Conclusion

We hope the above helps you choose open APIs and technologies designed to help you grow without locking you in. That said, remember that the real proof you have the freedom to move is to actually move; try it! Customers have told us about their new-found power at the negotiating table when they can demonstrably run their application across multiple providers.

All of the above mentioned tools, in combination with strong private networking between providers, allow your applications to span providers with a minimum of provider-specific implementation detail.

If you have questions about how to implement the above, about other parts of the stack this kind of thinking applies to, or about how you can get started, don’t hesitate to reach out to us at Google Cloud Platform, we’re eager to help.

Why the Need for the PureRDS.org Resource Site

The content below is taken from the original (Why the Need for the PureRDS.org Resource Site), to continue reading please visit the site. Remember to respect the Author & Copyright.

Small Shops Running SBC Solutions Are Not Well Supported In the Server-Based Computing (SBC) community (e.g. Citrix, Microsoft RDS, VMware Horizon,… Read more at VMblog.com.

Windows 10’s Anniversary Update makes a great OS better

The content below is taken from the original (Windows 10’s Anniversary Update makes a great OS better), to continue reading please visit the site. Remember to respect the Author & Copyright.

"It’s nice, for once, to be able to recommend a new version of Windows without any hesitation." That’s how I summarized my review of Windows 10 last year, and for the most part, it’s lived up to my expectations. Other than Microsoft’s bafflingly forceful automatic upgrade policy (which has led to lawsuits and plenty of ticked off users), the operating system’s first year on the market has been relatively smooth.

Microsoft says the software is now running on over 350 million devices worldwide, and it’s seeing the highest customer satisfaction ratings ever for a Windows release. So expectations are running pretty high for the Windows 10 Anniversary Update, which arrives August 2nd. But while it definitely delivers some useful upgrades to key features like Cortana and Windows Ink, don’t expect any massive changes to Windows 10 as a whole.

Cortana

Expect to see Microsoft’s virtual assistant just about everywhere in the Anniversary Update. Cortana is accessible through the lock screen, allowing you to ask simple questions or do things like play music, without even having to log in. She’ll also control some apps like iHeartRadio and Pandora, with voice commands. (Unfortunately, there’s no Spotify support yet.)

Perhaps most intriguingly, Cortana will also work across different platforms, with the ability to talk to Windows Phone and Android devices. You’ll be able to see notifications from your phone right on the Windows desktop, as well as alerts like when your phone is running low on battery. While there’s a Cortana app on iOS, this extensive integration won’t be available to iPhone users just yet. Microsoft reps say one reason for that is that it’s simply harder to implement it on Apple’s platform.

Cortana is also getting the smarts to act like a real assistant. Just like before, you can send her reminders and have her recall them at any point. Now, you’ll also be able to add photos to those reminders, as well as create them from Windows apps directly. And yes, those reminders carry over to Cortana’s mobile apps too. They’re particularly useful for things like frequent flyer numbers or complex parking spot locations, where asking your phone to look it up is easier than searching through your notes manually. She can also search within your documents for specific bits of text.

While I still find Google Now to be more accurate at listening to voice commands, Cortana stands out as the only voice-powered digital assistant on a desktop OS. Apple’s Siri will be the highlight of MacOS Sierra this fall when it’s officially released (though you can try it in beta form now), but Cortana still has that beat feature-wise.

Windows Ink

With the Windows 10 Anniversary Update, Ink finally steps out from behind the scenes for stylus users with an interface all its own. Clicking the eraser button on the Surface Pen, for example, brings up a new menu on the right side of the screen. From there, you can create a Sticky Note (basically a digital Post-It), access a blank sketch pad or jot notes down on a screenshot of whatever you're looking at. Other active stylus models will have access to the feature too, and you'll even be able to use it with a keyboard and mouse (right-click the taskbar and choose "Show Windows Ink Workspace").

While it’s still fairly rudimentary, the current Ink interface is a lot more useful than what Microsoft offered in the past. Previously, hitting the Surface Pen’s eraser button would simply open up a blank OneNote document. It was great for people who liked to sketch or jot down handwritten notes, but that was about it. I’ve found myself using the stylus even more now with the Surface Pro 4 to create Sticky reminders, or simply caption an image to share with friends.

Just like Cortana, you can also access all of the new Ink features from the lock screen. So if you have to take some emergency notes for class, or simply want to jot down a burst of inspiration, you won’t have to wait to log into Windows to do so.

Windows Hello

Microsoft's biometric authentication feature is branching out from the lock screen to let you sign into apps like Dropbox and iHeartRadio. It'll even log you into some websites when you're using the Edge browser. Hello was one of the best additions to Windows 10, so it was only a matter of time until its zippy login capabilities spread throughout the OS.

Still, the problem with Windows Hello is actually being able to use it. Fingerprint sensors and depth-sensing cameras (like Intel’s RealSense) still aren’t all that common. You’ll find them on the Surface machines and some high-end notebooks and tablets, but you can forget about them if you’re on a budget. And if you’re using a desktop, you’re even worse off. You can buy a third-party fingerprint sensor, but it won’t be as fast or accurate as the hardware used inside phones. And, for some reason, external depth-sensing cameras are still practically non-existent (unless you pay through the nose for a RealSense developer device).

At this point, Microsoft doesn’t have an answer to the lack of Windows Hello-compatible hardware out there. But company reps say they hope that once Microsoft adds more features to Windows Hello, manufacturers will feel more compelled to add the necessary hardware.

Microsoft Edge

Remember all the promises of browser extension support on Edge? Well, they’re finally here with the Anniversary Update. You’ll be able to choose from a handful of popular options like LastPass, AdBlock, Pocket and Evernote’s Clipper. The selection was pretty limited during my testing, but hopefully developers will adopt Edge’s extensions quickly. Microsoft claims that Edge is more power efficient now (something it already touted over its competitors), and it has even more support for newer web standards.

Start Menu and other changes

Rather than just highlighting a few apps in the Start Menu, the Anniversary Update brings all of your installed apps into a single (and very long) drop-down list. It might seem a bit overwhelming to new users, but it saves power users an extra click when they need to peruse their apps. Live Tiles are smarter now as well: If you click on a news app displaying a specific story, you’ll be directed to that story once the app launches. Sure, neither change is as drastic as the return of the Start Menu, but they’re still helpful tweaks.

The Anniversary Update also marks the first time Microsoft has made Bash command line support for Ubuntu Linux available in Windows. That’s not something most users will notice, but it’s a boon for developers.

Wrap-up

If you were expecting a huge change with the Windows 10 Anniversary Update, then you’ll probably be disappointed here. But, in a way, its lack of any major additions says a lot about how much Microsoft got right when it first launched Windows 10. It’s a stable, secure and fast OS. The Anniversary Update simply makes it better, and that’s something I think every PC user will appreciate.

Technical Committee Highlights July 17, 2016

The content below is taken from the original (Technical Committee Highlights July 17, 2016), to continue reading please visit the site. Remember to respect the Author & Copyright.

This update is dedicated to our recent in-person training for the Technical Committee and additional community leaders.

Reflections on our Leadership Training Workshop

In mid-July 2016, 20 members of the OpenStack community, including Technical Committee members current and past, PTLs current and past, other community members and additional supporting facilitators, met for a two-day training class at ZingTrain in Ann Arbor, Michigan. The ZingTrain team, joined by founder Ari Weinzweig, inspirational IT manager Tim Root, and various other Zingerman's servant leaders, shared a unique approach to business that matches OpenStack's quite well.

I’m not usually one to gush about a training workshop. But wow, please allow me to gush. To tell the truth, many of us went to this workshop despite thinking we might have to learn how to make sandwiches. Or talk about feelings instead of solving problems. Or even decide too much at once without enough input. And so on. But we got over our fears and had a great time and excellent food and service from Zingerman’s ZingTrain team. Many thanks to Ann Lofgren and Timo Anderson for facilitating the workshop, and much gratitude and admiration to Colette Alexander for recognizing a match and meeting a need.

What did we learn?

We learned about stewardship and put it in the context of OpenStack. I had to look up a definition for this term, even though I’m a word nerd. Stewardship is an “ethic that embodies the responsible planning and management of resources” and it matches so well with what we need to do as leaders in the OpenStack community. The group decided we needed to continue this sort of work after the training and so the stewardship working group has formed. We met for the first time this week and listed many items on a to-do list that we’ll cull and prioritize as we go.


We learned what servant leadership looks like: instead of a top-down triangle representing an organization's hierarchy, you invert it and have all of your leaders serve their teams. The basic idea is that it is the role of a leader to serve the organization, and you aren't promoted in order to have others serve you. In the article A recipe for servant leadership, Ari Weinzweig explains servant leadership with: "To paraphrase John Kennedy's magnificent 1961 inaugural speech; 'ask not what your organization can do for you, ask what you can do for your organization.'" In an open source community like ours, this looks like PTLs who review code in order to teach and onboard more contributors instead of writing all the code themselves, or who generally support the needs of their teams so they can achieve their best.

We learned about consensus, where you first agree on how decisions are made. Consensus can also mean that 18 people who are business partners might not all completely agree on the solution, but are convinced enough by their peers to fully support the decision and make it happen. We are certainly provoked by this concept, considering that we use majority voting today, and TC members will abstain if they can't agree with a resolution. We want to work on this aspect going forward.

Each of us took away important aspects of consensus-made decisions. At ZingTrain we learned that the original agreement between their partners was to "live with" and not simply "live by" group decisions, even when the decision wasn't their personal first choice. In this way, the goal with consensus is not to reach a point where a majority opinion ensures people do what the decision indicates, but to ensure that everyone is as comfortable as possible with the decision itself. This aspect felt especially important in the OpenStack context, where people may feel they are putting up with a decision that was made, but haven't inherently agreed to live with it. We also learned that another component of this decision-making process was that any disagreement must include a counter-proposal.


We learned about visioning. Now, realize that vision training is an entire subset and workshop in itself. We needed to learn first and get introduced to the idea. What is an effective vision? How can we put what we’ve learned into an OpenStack context?

In a vision, you describe the future state you want as if it happened. A vision is:

  • Inspiring to all that are involved in implementing it.
  • Strategically sound, as in, we have a decent shot at making the vision reality.
  • Documented. You must write it down to make it real and make it work.
  • Communicated. You have to document it, but you can't expect people simply to read it; you have to tell your community about your vision.

What if a vision for OpenStack was as detailed as:

It's a sunny day in fall 2025 in Vancouver, Canada, yet a lot of technologists are lined up to get their RFID wristbands that open doors to conference sessions at the OpenStack Summit. The autonomous vehicles donated by a large OpenStack deployer are dropping attendees off outside, bringing them together to plan for the next release of OpenStack. The past release was a huge success, and the 100,000 deployments upgraded smoothly with no downtime to end users.

As Zingerman's co-founder Ari Weinzweig notes in this Inc. magazine article, "To be clear, a vision is not a strategic plan. The vision articulates where we are going; the plan tells us how we're actually going to get there." A vision for OpenStack is not a strategic plan, yet without it, planning is difficult. Decision-making is difficult too, and consensus can't be reached, because debates over "Where are we going?" are multiplied by questions of "How do we get there?" We definitely had wheels turning after thinking about this together, as a group, distraction-free, and face-to-face.


It was a great week. What we want to do next is become a more effective technical committee and leadership group. We want to apply the various aspects of servant leadership by becoming better examples of servant leaders ourselves. We’ve identified gaps in our current leadership implementation, and are working to close them so that we take action based on what we’ve learned. We’ve started the Stewardship Working Group. We are drafting documentation to write down our guiding principles, to write down release goals, and generally keep documenting and teaching. We were all challenged to teach two hours a week and learn at least one hour a week, and I think we all are ready to take on the challenge to lead by example by starting with ourselves. Please let us know how we are doing, give us feedback, and shape our community vision with us.

OpenStack Developer Mailing List Digest July 2-22

The content below is taken from the original (OpenStack Developer Mailing List Digest July 2-22), to continue reading please visit the site. Remember to respect the Author & Copyright.

SuccessBot Says

  • Notmyname: the 1.5 year long effort to get at-rest encryption in openstack swift has been finished. at-rest crypto has landed in master
  • stevemar: API reference documentation now shows keystone’s in-tree APIs!
  • Samueldemq: Keystone now supports Python 3.5
  • All

Troubleshooting and ask.openstack.org

  • Keystone team wants to do troubleshooting documents.
  • Ask.openstack.org might be the right forum for this, but help is needed:
    • Keystone core should be able to moderate.
    • A top-level interface rather than just tags. The page should have a series of questions and links to the discussions for each question.
  • There could also be a keystone-docs repo that would have:
    • FAQ troubleshooting
    • Install guides
    • Unofficial blog posts
    • How-to guides
  • We don’t want a static troubleshooting guide. We want people to be able to ask questions and link them to answers.
  • Full thread

Leadership Training Recap and Steps Forward

  • Colette Alexander has successfully organized leadership training in Ann Arbor, Michigan.
  • 17 people from the community attended, 8 of them from the TC.
  • Subjects:
    • servant leadership
    • Visioning
    • Stages of learning
    • Good practices for leading organizational change.
  • Reviews and reflections from the training have been overwhelmingly positive and some blogs started to pop up [1].
  • After the training, a smaller group of the 17 attendees met to discuss how some of the ideas presented might help the OpenStack community.
    • To more clearly define and accomplish that work, a stewardship working group has been proposed [2].
  • Because of the success, and because 5 TC members weren't able to attend, Colette is working to arrange a repeat offering.
  • Thanks to all who attended and the OpenStack Foundation who sponsored the training for everyone.
  • Full thread

Release Countdown For Week R-12, July 11-15

  • Focus:
    • Major feature work should be well under way as we approach the second milestone.
  • General notes:
    • We freeze release libraries between the third milestone and final release.
      • Only emergency bug fix updates are allowed during that period.
      • Prioritize any feature work that includes work in libraries.
  • Release actions:
    • Official projects following any of the cycle-based release models should propose beta 2 tags for their deliverables by July 14.
    • Review stable/liberty and stable/mitaka branches for needed releases.
  • Important dates:
    • Newton 2 milestone: July 14
    • Library release freeze date starting R-6, Aug 25
    • Newton 3 milestone: September 1
  • Full thread

The Future of OpenStack Documentation

  • Current central documentation
    • Consistent structure
    • For operators and users
    • Some less technical audiences use this to evaluate OpenStack against various other cloud infrastructure offerings.
  • Project documentation trends today:
    • Few are contributing to central documentation
    • More are becoming independent with their own repository documentation.
    • An alarming number just don’t do any.
  • A potential solution: Move operator and user documentation into individual project repositories:
    • Project developers can contribute code and documentation in the same patch.
    • Project developers can work directly with documentation team members, or through a liaison, to improve documentation during development.
    • The documentation team primarily focuses on organization/presentation of documentation and assisting projects.
  • Full thread

 

[1] – http://bit.ly/2aqKMjg

[2] – http://bit.ly/2ap3uuH

 

Meet the Micro One

The content below is taken from the original (Meet the Micro One), to continue reading please visit the site. Remember to respect the Author & Copyright.

The perfect computer for anyone who likes Airfix models. Or Ikea furniture. A name not many people in the RISC OS world will have heard of until very recently is Ident Computer, run by Shrewsbury-based Tom Williamson. Tom founded his company, the Ident Broadcasting & Communications Group, in 2006, and Ident Computer is a comparatively […]

New imaging method reveals how Alzheimer’s reshapes the brain

The content below is taken from the original (New imaging method reveals how Alzheimer’s reshapes the brain), to continue reading please visit the site. Remember to respect the Author & Copyright.

Researchers at Yale University have led development of a new type of brain scan designed to detect changes in synapses associated with common brain disorders. Until now, researchers have only been able to detect these changes during autopsies, but by combining a Positron Emission Tomography (PET) scan with a new type of injectable tracer, Yale radiology and biomedical imaging professor Richard Carson was able to measure the synaptic density in a living brain. According to the findings published in Science Translational Medicine Wednesday, the technique could help doctors better understand and treat a wide range of neurological conditions, from epilepsy to Alzheimer's disease.

To achieve their goal, Dr. Carson and his team developed a new radioactive tracer that binds with a key protein in the synapses of the brain. The tracer is visible through a traditional PET scan, and Dr. Carson's team applied a mathematical formula to the results to determine the synaptic density. According to the university, the imaging technique has already been used on both baboon and human subjects, and it has shown lower synaptic density in three patients with epilepsy compared with healthy individuals.

"This is the first time we have synaptic density measurement in live human beings," Dr. Carson said. "Up to now any measurement of synaptic density was postmortem."

Moving forward, Carson and the team believe this method could be used to track the progression of degenerative brain disorders or to track the effectiveness of medications meant to slow the loss of neurons. The team is already planning to use the same method in similar studies for Alzheimer's, schizophrenia, depression and Parkinson's disease.

The last new VCR will be manufactured this month

The content below is taken from the original (The last new VCR will be manufactured this month), to continue reading please visit the site. Remember to respect the Author & Copyright.

It's 2016 and videocassette recorders (VCRs) using VHS tapes are still being manufactured and sold. I was surprised to discover that fact, but I only discovered it because manufacturing is ending and the […]

Azure Usage and Billing Portal Released

The content below is taken from the original (Azure Usage and Billing Portal Released), to continue reading please visit the site. Remember to respect the Author & Copyright.

In today's Ask the Admin, I'll take a closer look at the recently released Azure Usage and Billing Portal.

Azure Usage and Billing Portal (Image Credit: Microsoft)

Understanding Azure resource use, and more importantly, how much it's costing, can be a difficult puzzle to unravel, especially when you are dealing with multiple subscriptions. At the end of 2015, Microsoft released a set of APIs providing programmatic access to this information, so that developers could create reports or a portal summarizing usage and billing information, or simply generate alerts to make sure that resources weren't still being consumed after a project had finished.
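For a sense of what those APIs expose, here is a minimal Python sketch against the Usage Aggregates endpoint (Microsoft.Commerce provider, api-version 2015-06-25-preview). The subscription ID and Azure AD bearer token are placeholders, and the response field names are assumptions to verify against the API documentation.

```python
import requests

SUBSCRIPTION_ID = '<subscription-guid>'       # placeholder
TOKEN = '<azure-ad-bearer-token>'             # placeholder

url = ('https://management.azure.com/subscriptions/{0}/providers/'
       'Microsoft.Commerce/UsageAggregates'.format(SUBSCRIPTION_ID))
params = {
    'api-version': '2015-06-25-preview',
    'reportedStartTime': '2016-07-01T00:00:00Z',
    'reportedEndTime': '2016-07-15T00:00:00Z',
}

resp = requests.get(url, params=params,
                    headers={'Authorization': 'Bearer ' + TOKEN})
resp.raise_for_status()

# Each aggregate records a meter, a quantity, and the usage window.
for item in resp.json().get('value', []):
    props = item['properties']
    print(props['meterCategory'], props['quantity'], props['usageStartTime'])
```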

But unless you had the necessary skills and resources to develop your own portal, this wasn't a solution that provided an easy way to visualize usage and billing data. It's difficult to say that Microsoft has solved this issue, but it has partly come to the rescue with the release of the Azure Usage and Billing Portal – a set of resources that can be deployed in Azure to provide visual access to usage and billing data.


Rather than a ready-made dashboard, the Azure Usage and Billing Portal is a set of open-source building blocks that enables you to deploy a portal based on a Power BI dashboard. And while it doesn't require any scripting knowledge as such, it may not be suitable for small organizations that don't have some experience of dealing with IT infrastructure directly.

The portal can retrieve up to 3 years' worth of usage data from multiple subscriptions. All you need to do is register subscriptions with the system using a valid username and password. The portal system consists of a public website where subscriptions can be registered, a dashboard website where authenticated users can view usage and billing data for registered subscriptions, and, behind the scenes, an Azure SQL server and an Azure Storage queue that holds data requests.

Alerts and Additional Reporting

Hopefully Microsoft will at some point in the future package this solution to provide an out-of-the-box experience that everyone can access easily. But for now, the Azure Usage and Billing Portal should appeal to enterprises and service providers. Microsoft has promised to continue developing the solution, with the ability to trigger alerts based on usage anomalies, additional reporting capabilities, and subscription rate codes. And if PowerShell scares you, Microsoft also wants to simplify deployment of the portal system, along with many other improvements.

Deploy the Azure Usage and Billing Portal


The Azure Usage and Billing Portal project can be found on GitHub here, along with detailed setup instructions and a PowerShell script that can be used to set up the necessary Azure resources.

The post Azure Usage and Billing Portal Released appeared first on Petri.

Businesses are rushing into IoT like lemmings

The content below is taken from the original (Businesses are rushing into IoT like lemmings), to continue reading please visit the site. Remember to respect the Author & Copyright.

Companies are rapidly adopting IoT even though many don’t know if they’re getting a good return on their investment.

Two-thirds of companies are now using or planning to use IoT, according to a global survey by research firm Strategy Analytics. That’s up from just 32 percent last year.

But 51 percent of those aren’t sure whether the new technology is paying off, said Laura DiDio, an analyst at the firm.

That doesn’t necessarily mean the internet of things isn’t saving them money or improving their businesses, DiDio said. But many organizations evaluate and deploy new technologies in such a fragmented way that they don’t know the full effects of their actions. It’s actually better with IoT than with most other new technologies, where an even higher percentage can’t measure the benefits, she said. But a disorganized approach isn’t helpful in any case.

IoT comes in so many forms that it can fly in under the radar. In building management, for example, it may just come as an added feature for an existing system and never get labeled as IoT. Or one department will seek out IoT on its own.

In other cases, the CEO hears about IoT and dictates that the company will adopt it without even examining the costs and benefits, DiDio said. That's not the best way to go about it.

“You have to get all the stakeholders involved from the get-go, and often that doesn’t happen,” DiDio said.

Any localized IoT deployment can have broader implications because of things like data security, which was the top technical challenge of 56 percent of survey respondents.

And while data analytics is a common motivation for deploying the technology, many companies aren’t ready to make use of what they’re collecting. The survey showed that 42 percent found they had too much data to analyze it all efficiently. Meanwhile, 27 percent weren’t sure what questions to ask about the information, and 31 percent simply don’t store any IoT data.

“We’re still very much at the early stages of the learning curve,” DiDio said. “It’s challenging.”

Though a majority of companies have some IoT now, only 25 percent have an end-to-end deployment. Most companies will need vendors, systems integrators or consultants to help them get there, DiDio said. Big IT players are doing their part to cover the bases by forming partnerships and buying smaller, specialized companies.

The survey results came from 350 respondents, including small, medium and large enterprises. They’re using IoT for tasks that include video surveillance, smart building controls and health care.

Happy Birthday! #OpenStack Turns 6

The content below is taken from the original (Happy Birthday! #OpenStack Turns 6), to continue reading please visit the site. Remember to respect the Author & Copyright.

HAPPY BIRTHDAY OPENSTACK! Has it been 6 years already? Over those last six years, cloud technologies in general have been improving and business… Read more at VMblog.com.

New POPs available for all CDN integrated Azure Media Services customers

The content below is taken from the original (New POPs available for all CDN integrated Azure Media Services customers), to continue reading please visit the site. Remember to respect the Author & Copyright.

In November 2015, we announced the general availability of new CDN delivery POPs and pricing zones in India, Australia and South America for Azure CDN customers. These new POPs are now available for all CDN integrated Azure Media Services customers. All the new POPs have been enabled for existing CDN integrated AMS streaming endpoints.

In addition, Azure Media Services customers can now enable CDN for streaming endpoints created from all regions (India, Australia, South America, Canada etc.). Note that CDN is a global service. All the CDN POPs will be turned on by default when you enable CDN from any region.

What’s next 

In the next few months we’ll work on an integrated solution to provide premium tier and multiple CDN options to AMS customers. Today customers can go to Azure CDN service directly to use these capabilities.

Additional resources

Attach a Data Disk to a VM in the Azure Portal

The content below is taken from the original (Attach a Data Disk to a VM in the Azure Portal), to continue reading please visit the site. Remember to respect the Author & Copyright.


In this Ask the Admin, I’ll show you how to attach a data disk to a virtual machine (VM) in the new Azure Management Portal.

Adding an additional disk to an Azure VM was easy and intuitive in the classic Azure management portal, but while the new portal has some benefits, sometimes the endless array of options and sliding panels makes it harder to find basic configuration options.

There are lots of reasons why you might want to attach additional disks to VMs in Azure, but one common situation is the requirement to host Active Directory database files on a volume that doesn’t use write caching. By default, Azure deploys OS volumes with write caching enabled, and this can cause issues, such as data loss, with some applications.

If you’ve never deployed a VM in Azure before, see Deploy VMs Using Azure Resource Manager on the Petri IT Knowledgebase. Azure Resource Manager (ARM) is the default deployment model for provisioning VMs, and other resources, in Azure using the new management portal. For more information on Azure resource groups, see What are Microsoft Azure Resource Groups? on Petri.

Create and attach a data disk to a VM

Before starting, you need to already have a virtual machine deployed in Azure. It doesn’t necessarily need to be running.

  • Log in to the Azure management portal here.
  • In the Azure management portal, click Virtual machines in the list of options on the left.
  • In the Virtual machines pane, click the VM you want to attach a disk to.
  • A dashboard for the VM will appear, with the VM's Settings panel to the right. In the Settings panel, click Disks under GENERAL.
Adding a disk to a VM in Azure (Image Credit: Russell Smith)


  • In the Disks pane, click the Attach new icon. Optionally, you can choose to add an existing disk.
  • In the Attach new disk pane, change the disk parameters to meet your needs. You can choose between standard and premium SSD disk, if your VM size supports it; change the disk size, location, and caching options: None, Read only, and Read/Write.
  • Once you’re done adjusting the settings, click OK at the bottom of the Attach new disk pane.
Adding a disk to a VM in Azure (Image Credit: Russell Smith)

Creating and attaching the disk might take a few minutes, and once the task is complete, you should see a notification appear in the top right of the management portal. The new disk will also appear in the Disks pane if you haven’t already closed it.
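If you would rather script the same change than click through the portal, here is a rough sketch using the Azure Python SDK (azure-mgmt-compute). The class and enum names follow the 2016-era ARM SDK and are assumptions that may differ in newer releases; the credentials, resource names and storage URI are placeholders.

```python
# pip install azure-mgmt-compute  -- names below follow the 2016-era ARM SDK
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import (CachingTypes, DataDisk,
                                       DiskCreateOptionTypes, VirtualHardDisk)

credentials = ServicePrincipalCredentials(
    client_id='<app-id>', secret='<secret>', tenant='<tenant-id>')   # placeholders
compute = ComputeManagementClient(credentials, '<subscription-id>')  # placeholder

rg, vm_name = 'my-resource-group', 'my-vm'                           # placeholders
vm = compute.virtual_machines.get(rg, vm_name)

# Append an empty 128 GB data disk with caching disabled -- the setting
# mentioned above for volumes holding Active Directory database files.
vm.storage_profile.data_disks.append(DataDisk(
    lun=0,
    name='{0}-data0'.format(vm_name),
    vhd=VirtualHardDisk(uri='https://<storageaccount>.blob.core.windows.net/vhds/data0.vhd'),
    create_option=DiskCreateOptionTypes.empty,
    disk_size_gb=128,
    caching=CachingTypes.none,
))

# create_or_update is a long-running operation; .result() blocks until done.
compute.virtual_machines.create_or_update(rg, vm_name, vm).result()
```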


Adding a disk to a VM in Azure (Image Credit: Russell Smith)

In this article, I showed you how to create and attach a new disk to an existing virtual machine in Azure.

The post Attach a Data Disk to a VM in the Azure Portal appeared first on Petri.

ThinPrint Management Services Now Available to Automate the Management of Large Windows Print Environments

The content below is taken from the original (ThinPrint Management Services Now Available to Automate the Management of Large Windows Print Environments), to continue reading please visit the site. Remember to respect the Author & Copyright.

ThinPrint, manufacturer of the leading print management software, today released ThinPrint Management Services, a powerful and nimble tool to… Read more at VMblog.com.

If managing PCs is still hard, good luck patching 100,000 internet things

The content below is taken from the original (If managing PCs is still hard, good luck patching 100,000 internet things), to continue reading please visit the site. Remember to respect the Author & Copyright.

Internet of Things (IoT) hype focuses on the riches that will rain from the sky once humanity connects the planet, but mostly ignores what it will take to build and operate fleets of things.

And the operational side of things could be hell.

“IT can barely keep their desktops patched,” Splunk chief technology officer Snehal Antani told The Register. “How will they keep their devices patched?”

One answer we’ve heard from Amazon Web Services (AWS) is that dumb things might just be a smart idea.

The cloud colossus’ APAC technology head Glenn Gore recently told El Reg AWS likes the idea of dumb things collecting data at the edge, then sending all the information they collect into a cloud for processing. Sure, you’ll cop hefty data ingress and storage bills. But you won’t have to worry about sending new firmware or security updates to enormous fleets of things. Nor will malware on things be an issue.

A rival idea comes from the likes of Intel, which fancies x86-powered boxen on the edge. Smart things can be asked to do harder things, as even weeny little x86s like the Curie and Edison modules have enough power to do some pre-processing to figure out what data is worth subjecting to proper analysis. Intel even likes the idea of a gateway to collect data from a cluster of things. Gateway devices will run a more powerful x86 that provides local network services and does even more pre-processing of thing-generated data. Once data from the edge and/or gateway hits a Xeon-powered server farm somewhere, those servers will be working on data worthy of the effort. If that Xeon farm is in AWS, well and good: Intel sold a Xeon wherever the server runs.

Both visions have hefty quantities of self-interest. But both are also valid, because thing-derived data is going to be used in at least two ways.

To understand why, consider a data centre in which heat sensors are deployed among the racks.

Tom Anderson, data centre infrastructure solutions manager at Emerson Network Power, says there’s not much point building a smart heat sensor because a single thermometer can’t understand the context of what it “sees”. Using the example of a nicely-metered data centre, he points out that one hot server doesn’t mean a whole rack or aisle is in trouble.

“We then must either still perform the overall analytics at a more central point or add intercommunication between all devices,” Anderson says. “From an operations standpoint, it makes much more sense to analyse data from 1,000 places in one spot than create complex intercommunication and processing across 1,000 devices.”

He therefore reckons dumb things and a smart core will find plenty of buyers.

But Splunk’s Antani points out that some things are going to be on the front lines and also used for post-event analysis. Choosing just one “thingatechture” therefore won’t get you far, because you’ll need both real-time alerts and later reports.

Antani asked El Reg to imagine an oil rig on which, when something looks dangerous, you want red lights going off fast and loud. But you also want historical data to analyse performance over a long time. The need for both outcomes, plus complications like the high cost of shipping data over satellite, means we’ll end up with a mix of smart, dumb, aggregated and warehoused data from things.

Cisco’s Helder Antunes, a senior director in the Borg’s corporate strategic innovations group, told us another complication may be that you won’t always own your own things. Antunes thinks service providers will create networks of things-as-a-service. Which Cisco calls “Fogs”.

“Many may wonder about how OpenFog architectures will impact the cost structure of IoT implementations,” Antunes opined by email, acknowledging that “Industry experts … question if moving capabilities close to edge devices may incur greater hardware and maintenance costs.”

“Cisco believes that in some cases the additional capabilities will allow greater monetization of use cases at the edge. In doing this, the costs will be easily offset by what the architecture will offer.  Also in some cases, by aggregating at the fog level, industries may actually save money by using ‘dumber’ sensors and in essence offload those capabilities to the fog nodes.”

Or in other words, all sorts of thingatechtures* are going to emerge. Sometimes you’ll have the option to use dumb things, but probably won’t escape the need to manage some stuff on the edge. Sometimes you’ll have smart things to wrangle. And sometimes you’ll need to blend smart, dumb, real-time, downstream Big Data and more.

This means that any vendor that’s ever found a way to make an endpoint manageable is salivating at the prospect of the internet of things. As are clouds, because not all of you will need a full-time data rig. And so are folks like Intel who contribute to smart devices.

See? The internet of things really will make riches fall from the sky. And you, dear readers, get to be the ones who figure out how best to scoop them. ®

* Okay, we won’t use that one again.


Seagate unveils a 10TB hard drive for your home PC

The content below is taken from the original (Seagate unveils a 10TB hard drive for your home PC), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s no longer far-fetched to buy a 10TB hard drive if you’re running a server, but what if you need gobs of space for your games and 4K videos at home? Seagate has your back after today, as it’s releasing a trio of 10TB drives (mainly focused on networked and surveillance storage) that include one designed for your desktop. The new Barracuda Pro doesn’t do anything remarkable beyond its capacity (it’s a standard 3.5-inch, 7,200RPM disk), but that still means getting a massive amount of room in a drive that’s meant for a run-of-the-mill PC. Just brace yourself for the cost. Seagate pegs the 10TB Barracuda Pro’s price at $535 — it’ll be tempting to settle for ‘just’ an 8TB disk unless you know you need as much storage as possible in a single drive bay.

Source: Seagate

Expedient Shrinks Cages to Make Data Center Space Cheaper

The content below is taken from the original (Expedient Shrinks Cages to Make Data Center Space Cheaper), to continue reading please visit the site. Remember to respect the Author & Copyright.

Expedient has come up with a new design for colocation cages which it claims will lower the costs of data center space for its customers.

The design, called SlimLine, essentially shrinks the cage to more closely match the shape of equipment racks. Typical colocation cages have a lot of space inside them that isn’t used by equipment, usually enough for technicians to move around, even when all the actual rack space is utilized.

Expedient’s new cage design, which it hopes to patent, is a way to utilize data center space more efficiently. Instead of leaving extra space inside the cage, techs have full access to front and back of the equipment via roll-up doors, which can be either solid or perforated, depending presumably on the equipment’s cooling needs.

The data center provider said it will tailor the cages to customer needs. The smallest cage available will closely match the physical profile of a row of four cabinets.

Multiple-row configurations include enclosed hot aisles that contain hot server exhaust air.

Customers save because they end up using less data center space to house the same amount of equipment than they would in traditional colocation cages. Those savings can reach as much as 40 percent, Expedient CTO Ken Hill said in a statement.

Here’s a photo of a two-row SlimLine configuration in one of Expedient’s data centers, featuring a perforated access wall and a solid door in the hot aisle (photo by Expedient):

Two-row SlimLine configuration (Photo: Expedient)

Deploy Remote Desktop Services using PowerShell

The content below is taken from the original (Deploy Remote Desktop Services using PowerShell), to continue reading please visit the site. Remember to respect the Author & Copyright.


In today’s Ask the Admin, I’ll show you how to deploy Remote Desktop Services in Windows Server 2012 R2 using PowerShell.

In a previous Ask the Admin, Installing Remote Desktop Services in Windows Server 2012 R2, I demonstrated how to deploy Remote Desktop Services (RDS) using the standard deployment model. For more information on the RDS components and deployment models, take a look at Remote Desktop Services Deployment Options in Windows Server 2012 R2 on the Petri IT Knowledgebase.

Remote Desktop Services in Windows Server 2012 R2 (Image Credit: Russell Smith)

Deploy RDS using PowerShell

Servers that you want to use in your deployment need to be added to the Server Pool in Server Manager before you start the process below. For more info, see Managing Windows Server 2012 with Server Manager on Petri. You’ll need an Active Directory domain and an account that has permission to install the server roles on your chosen server(s). The RD Connection Broker role can’t be installed on a domain controller, and you shouldn’t run the deployment from the server on which the RD Connection Broker role will be installed.

  • Log into the Windows Server 2012 R2 server where you want to run the PowerShell cmdlets. The account should have administrative access to the server(s) where the RDS roles will be installed.
  • Open a PowerShell prompt from the taskbar or Start screen.
  • In the PowerShell window, type Import-Module RemoteDesktop and press ENTER.
  • To install the three compulsory RDS components in a standard deployment, use the New-RDSessionDeployment cmdlet as shown below, replacing the values for the -ConnectionBroker, -WebAccessServer, and -SessionHost parameters with the name(s) of the servers on which you want to install these roles.
New-RDSessionDeployment -ConnectionBroker srv1.ad.contoso.com -WebAccessServer srv1.ad.contoso.com -SessionHost srv1.ad.contoso.com

If you want to manage the new deployment using Server Manager, you’ll need to add the server where the RD Connection Broker role is installed to the Server Pool and then click Remote Desktop Services in the list of options on the left of Server Manager. For more information on working with Server Manager, see Managing Windows Server 2012 with Server Manager on Petri.

Installing Remote Desktop Services using PowerShell in Windows Server 2012 R2 (Image Credit: Russell Smith)

If you want to add an additional RD Session Host or RD Licensing, you can use the Add-RDServer cmdlet as shown below:

Add-RDServer -Server srv1.ad.contoso.com -Role RDS-RD-SERVER -ConnectionBroker srv1.ad.contoso.com

You should specify an existing RD Connection Broker server, although it is also possible to add an additional Connection Broker to your deployment (maximum of two). The -Role parameter value should be set to RDS-RD-SERVER to install an RD Session Host, or RDS-LICENSING to install the RD Licensing role.
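
For example, adding the RD Licensing role to a dedicated licensing server might look like the line below; lic1.ad.contoso.com is a placeholder name for the licensing server, and srv1.ad.contoso.com is the existing Connection Broker from the earlier example.

Add-RDServer -Server lic1.ad.contoso.com -Role RDS-LICENSING -ConnectionBroker srv1.ad.contoso.com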

Adding a RD Session Host using PowerShell in Windows Server 2012 R2 (Image Credit: Russell Smith)

If you opt to install RD Licensing, you’ll need to use the management console on the licensing server to activate the server and install your licenses from Microsoft. Once you have done that, you can use PowerShell to associate the new license server with your existing Connection Broker using the Set-RDLicenseConfiguration cmdlet. The -Mode parameter can be set to PerDevice or PerUser. Select ‘Y’ for yes to confirm the operation.

Set-RDLicenseConfiguration -LicenseServer srv1.ad.contoso.com -Mode PerUser -ConnectionBroker srv1.ad.contoso.com

In the next article in this series, I’ll show you how to configure RDS collections so you can get started publishing RemoteApps.

The post Deploy Remote Desktop Services using PowerShell appeared first on Petri.

Coin-sized COM could be world’s smallest Raspberry Pi clone

The content below is taken from the original (Coin-sized COM could be world’s smallest Raspberry Pi clone), to continue reading please visit the site. Remember to respect the Author & Copyright.

ArduCam unveiled a 24 x 24mm module with the ARM11-based core of the original Raspberry Pi, available with 36 x 36mm carriers with one or two camera links. The promised second-generation version of the Raspberry Pi Compute Module featuring the same quad-core, 64-bit Broadcom BCM2837 SoC as the Raspberry Pi 3 will be out within […]

Introduction to Azure Automation Desired State Configuration

The content below is taken from the original (Introduction to Azure Automation Desired State Configuration), to continue reading please visit the site. Remember to respect the Author & Copyright.


In today’s Ask the Admin, I’ll explain the ins and outs of Azure Automation Desired State Configuration.

Azure Automation Desired State Configuration (DSC) is composed of two key technologies: Azure Automation, a cloud service that’s been around for a couple of years, and PowerShell DSC, a declarative syntax based on PowerShell that allows system administrators to define device configuration.

Azure Automation

If you’re not already familiar with Azure Automation, it’s a management platform for automating and maintaining cloud resources using a PowerShell-based workflow engine (runbooks). Azure Automation can be used to automate and schedule routine tasks, such as starting and stopping virtual machines, restarting web services or doing anything that is supported by Azure PowerShell. And just like PowerShell, the platform is extensible, so in theory, any internet-connected service or platform can be managed.
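
As a rough illustration, a simple PowerShell runbook that stops every VM in a resource group might look like the sketch below. It assumes the Automation account has the default AzureRunAsConnection Run As account configured, and dev-rg is a placeholder resource group name.

# Sketch of a PowerShell runbook: stop all VMs in a resource group (names are placeholders)
$conn = Get-AutomationConnection -Name "AzureRunAsConnection"
Add-AzureRmAccount -ServicePrincipal -TenantId $conn.TenantId `
    -ApplicationId $conn.ApplicationId -CertificateThumbprint $conn.CertificateThumbprint
foreach ($vm in Get-AzureRmVM -ResourceGroupName "dev-rg") {
    Stop-AzureRmVM -ResourceGroupName "dev-rg" -Name $vm.Name -Force
}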

For more information on Azure Automation, see Getting Started with Microsoft Azure Automation and How to Use Microsoft Azure Automation on the Petri IT Knowledgebase.

PowerShell DSC

PowerShell Desired State Configuration is similar to Puppet and Chef, and is used for configuring servers and preventing configuration drift. Rather than scripting a configuration, for instance install this component, set registry keys and then reboot the server, DSC uses a declarative syntax that defines how servers should be configured without specifying a list of tasks needed to achieve the result. It’s like Group Policy on steroids, allowing servers to be configured without specialist knowledge of how components should be installed.
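
To give a feel for the declarative syntax, here is a minimal example configuration (the configuration and resource names are arbitrary) that simply states IIS must be present on a node, without scripting any installation steps:

# Minimal DSC configuration: declare that the Web-Server (IIS) feature must be installed
Configuration WebServer
{
    Node "localhost"
    {
        WindowsFeature IIS
        {
            Name   = "Web-Server"
            Ensure = "Present"
        }
    }
}

Running the configuration produces a MOF file, which the Local Configuration Manager on the node (or a pull server) then enforces.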

For more information on DSC, see Why PowerShell’s Desired State Configuration Should Matter to You and Deploying a Desired State Configuration Web Host Using PowerShell on Petri.

Azure Automation DSC

One of the problems with DSC is that to be really useful, it requires some infrastructure, usually in the form of a pull server from which nodes retrieve configurations, and even then it doesn’t scale well. That’s where Azure Automation DSC comes in. When you create an Azure Automation account, a DSC pull server and reporting server are automatically configured, from which your cloud or on-premises Windows and Linux VMs (nodes) can get MOF files, meaning that you don’t need to keep a VM running 24/7 just for DSC.

A key advantage of Azure Automation DSC is the cloud-based pull server that’s automatically deployed and managed by Microsoft, but there are also a host of other features. The service allows organizations to control who can access DSC configurations and assign them to nodes. Changes to configuration can also be tracked, recording when and how configurations are applied to nodes. There’s also a reporting server so you can check for VM compliance against your configurations.
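
As a sketch of how a node is onboarded, an Azure VM can be registered with the pull server using the AzureRM Automation cmdlets. The resource group, Automation account, VM, and node configuration names below are placeholders, and the example assumes a configuration called WebServer has already been imported and compiled in the account.

Register-AzureRmAutomationDscNode -ResourceGroupName "automation-rg" -AutomationAccountName "MyAutomationAccount" -AzureVMName "web01" -NodeConfigurationName "WebServer.localhost"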

It’s also possible to combine the use of Azure Automation runbooks and DSC. For example, runbooks can come in useful if you want to coordinate a process and configure VMs as part of a larger operation.

Pricing

Azure Automation DSC comes in Free and Basic tiers, and is charged according to the number of nodes registered with the pull server. The free tier supports up to five nodes, after which you need to switch to the Basic tier, which costs $6 per node per month. Click here for more information on pricing.

Look out for more articles on Petri soon, where I’ll show you how to work with Azure Automation DSC.

The post Introduction to Azure Automation Desired State Configuration appeared first on Petri.

What is Office 365 (2016)

The content below is taken from the original (What is Office 365 (2016)), to continue reading please visit the site. Remember to respect the Author & Copyright.

You may have been hearing the term Office 365 around your office, or maybe you have been seeing it as the subject of online articles, but what is Office 365 exactly? This post is designed to give a general overview of what Office 365 is for individuals and businesses in 2016.
Simply put, Office 365 is a subscription to Microsoft’s Office productivity suite. There are many different Office 365 plans; however, they fall into two categories. These main categories are plans for individuals/families and plans for organizations. Any person or organization can go to Office.com and sign up for an Office 365 plan. What you get and what you pay will vary widely depending on which plan you choose.

Is there a plan for me and/or my organization?

Yes. Microsoft has Office 365 plans for: individuals, families, educational organizations, governments, small decentralized businesses, huge traditional corporations, and everything in between. From the smallest plan to the biggest they all offer access to Microsoft software and/or services related to productivity.

Plans for individuals and families

The smallest plan is Office 365 Personal for $6.99 a month (or $69.99 a year). For this price you gain access to the most up-to-date desktop Office programs (PC or Mac), 1TB of OneDrive storage, and 60 minutes of Skype calling. This plan is perfect for anyone who uses the Office programs on one or several devices because documents can be stored locally and synced using OneDrive. This plan makes financial sense for the 1TB cloud storage alone since the same amount of space is $8.25 per month on Dropbox and $9.99 per month on Google Drive.
The only other plan for individuals/families is Office 365 Home for $9.99 a month (or $99.99 a year). Office 365 Home is basically the Personal plan times five. Instead of sharing a login and password with your family members, this plan can (and should) be shared with five people who each get their own login and their own Office programs, 1TB of cloud storage, and 60 Skype minutes each. This plan is a no-brainer for families especially when there are children in school.

Did someone say free?

Can you get Office for free? Yes (Office Online, that is). Microsoft offers their Outlook.com email service, 5GB of OneDrive cloud storage, and Office Online all for free. With Office Online you can create documents stored in OneDrive, share them, and co-author documents. If you only use a small set of Office features, then Office Online might work perfectly for you. One great thing about Microsoft’s free offerings is that they upgrade smoothly into paid plans if you ever need the additional features or storage.

Plans for organizations

Microsoft has done their best to keep their Office 365 plans simple, but the truth is that organizations are complex. There are subgroups within the “organizations” category, including education, government, small/medium business, and enterprise. These plans are very similar to the Personal and Home plans, but add Exchange, Skype for Business, and some bonus features.
These plans offer some combination of Exchange, Office Online, Office desktop programs, Skype for Business, OneDrive for Business cloud storage, SharePoint, and management tools. People usually associate Office 365 with the traditional Office desktop programs; however, some of the low-cost plans are cloud-based only. Most of the mid-tier plans offer most of what people expect in the form of Office programs. The high-tier (E5) plans contain nearly everything Microsoft has to offer, with only a few niche programs or services not included.

What Office 365 provides


Most Office 365 plans offer some combination of OneDrive for Business, email, Skype for Business, Office Online and the Office desktop programs. Microsoft does a good job of clearly laying out everything each plan offers. If you want a detailed list of what each plan has to offer, then check out Office.com. One thing included in every Office 365 plan is Microsoft 24/7 tech support. This is an important perk which is critical if your business does not have a dedicated IT staff.

Office 2016 desktop programs

Obtaining access to the classic Office desktop programs is a huge reason why you might subscribe to Office 365. The programs offered through the subscription are Word, PowerPoint, Excel, Outlook, Access, and Publisher. OneNote is completely free with or without a subscription, so I left it off the list. Office 2016 bears a striking resemblance to Office 2007 and up, but the UI is more refined and there are a host of new features. Anyone comfortable with Office should be more than capable of picking up any Office 2016 program fairly easily. The Office 2016 suite recently received a massive update centered around teamwork, adding features like simultaneous co-authoring and document sharing via OneDrive, a cloud-first storage solution.
Similar products to the Office 2016 desktop programs: Apache OpenOffice, LibreOffice, iWork, etc.

Office Online

Office Online compared to Office 2016

Many small businesses do not have the need for the feature-packed Office 2016. Instead, they prefer to work in the cloud and use Office Online. This product allows you to create, edit, save, share, and collaborate on documents completely within a web browser. Files are saved in OneDrive and can be emailed as attachments when needed. Microsoft has been investing in Office Online for years and has a product that many businesses would be completely satisfied with. Documents created in Office Online are 100% compatible with all the Office desktop programs and retain their fidelity.
Similar products to Office Online: Google Drive (Docs, Sheets, and Slides)

Office mobile apps

The mobile office for many companies is a reality, and Microsoft is there to support you with their suite of mobile apps. Targeting iOS, Android, and Windows, Microsoft has built tools that enable rich document editing and sharing from any device. With a subscription to Office 365, these apps can create, save, and share Word, Excel, and PowerPoint documents. When you save a document using OneDrive (or OneDrive for Business), documents sync between devices and maintain their fidelity even when moving back and forth between the mobile app and desktop program.
Similar products to Office Mobile Apps: iWork (Pages, Numbers, Keynote), Google Drive (Docs, Sheets, Slides), etc.

1TB (1024 GB) OneDrive storage

Microsoft’s solution for storing documents, photos, videos, music, and other files online is called OneDrive. OneDrive is deeply integrated into many of the Office Programs and comes pre-installed on Windows. With Office 365, Microsoft gives each user 1TB of cloud storage for whatever they want to store online. Files stored in the cloud can be accessed via OneDrive.com and the wide range of OneDrive apps on every major platform. This means you get 1TB for each user on your plan to use as their own.
Similar products to OneDrive: Dropbox, Box, Google Drive, iCloud, etc.

Email

A major function of Office for many businesses and some consumers is access to professional email. With Personal and Home plans you can enjoy an ad-free web email, which can be set up when activating the plan. The organization plans are a little different: while most offer the Exchange email service, not all do. The email offered by Microsoft is extremely high quality and should be more than enough for the vast majority of users and businesses.
Similar products to Microsoft’s Email offerings: Gmail, Yahoo Mail, etc.

60 Skype minutes (consumer)

One of the lesser known features of Office 365 is one hour of Skype calling. This enables you to call landlines or cell phones from the Skype client and/or web browser. You can even set up Skype to display your cell or landline when calling other numbers so they know who is calling. It is worth noting that these Skype Minutes cannot be used to send SMS messages. Also, if you are outside the US, double check with this list before making any long calls: HERE
Similar products to Skype (consumer): Facebook Messenger, Apple FaceTime, Google Hangouts, Viber, etc.

Skype for Business


Similar to the Skype everyone knows (but actually rebranded Lync), Skype for Business is Microsoft’s professional communication tool. Skype for Business supports audio and video calls with sophisticated features like screen sharing and remote control. In some enterprise plans, Skype for Business can broadcast meetings (like a webinar). Fully featured Skype for Business can serve as the PBX for an entire enterprise and enable truly seamless mobility.
Similar products to Skype for Business: GoToMeeting, WebEx, etc.

Enterprise collaboration

Working together in many companies can be a major chore when teams grow beyond a few members and a single location. To help distributed teams work together, Microsoft has SharePoint, Outlook Groups, Yammer, Delve, and Planner. These products help teams organize their documents on a team site, keep their communication in a shared inbox, and divide work into granular to-dos. This part of Office 365 has been seeing lots of activity from Microsoft as they beef up their collaboration offerings. If you would like to learn more about these products, check out this series on Petri: HERE.
Similar products to Microsoft’s Enterprise Collaboration: IBM WebSphere, Atlassian’s Confluence, Igloo, etc.

Advanced management tools

The competition is tough for Microsoft when it comes to selling their Office suite, but when it comes to advanced management tools, Microsoft is in a class of their own. A few of the high level Office 365 plans for organizations include advanced tools that help the IT staff to understand their users and manage devices and data. Tools like Advanced Security Management will use machine learning to identify and neutralize threats. Many of these sophisticated tools were added to Microsoft’s portfolio by way of acquisitions. Microsoft is very serious about protecting their customers’ data from hackers, abuse, failure, and governments, and these tools are offered to show it. In a world where data leaks are all too common, Microsoft wants to give companies the best tools available. Even non-malicious data loss like accidental file deletion or poorly storing sensitive information can be prevented using Microsoft’s tools.

Is Office 365 a good fit for you?

Probably yes. When you consider the Office programs, OneDrive storage, Skype minutes, and best-in-class management tools, Microsoft Office 365 is a good deal for nearly everyone. There are a few special cases where people can get by with free alternatives, but most can’t. Microsoft is very responsive to its users and continues to invest heavily in all of its Office products. As a subscriber, you will constantly be kept up to date and worry-free. If you have more questions about Office 365, comment below and I’ll do my best to find the answers.

The post What is Office 365 (2016) appeared first on Petri.