The Tiny SCSI Emulator


For fans of vintage computers of the 80s and 90s, SCSI can be a real thorn in the side. The stock of functioning hard drives is dwindling, and mysterious termination issues are sure to have you cursing the SCSI voodoo before long. Over the years, this has led to various projects that aim to create new SCSI hardware to fill in where the original equipment is too broken to use, or too rare to find.

[David Kuder]’s tiny SCSI emulator is designed for just this purpose. [David] has combined a Teensy 3.5 with a NCR5380 SCSI interface chip to build his device. With a 120MHz clock and 192K of RAM, the Teensy provides plenty of horsepower to keep up with the SCSI signals, and its DMA features don’t hurt either.

Now, many earlier SCSI emulation or conversion projects have purely focused on storage – such as the SCSI2SD, which emulates a SCSI hard drive using a microSD card for storage. [David]’s pulled that off, maxing out the NCR5380’s throughput with plenty to spare on the SD card end of things. Future work looks to gain more speed through a SCSI controller upgrade.

But that’s not all SCSI’s good for. Back in the wild times that were the 80s, many computers, and particularly the early Macintosh line, were short on expansion options. This led to the development of SCSI Ethernet adapters, which [David] is also trying to emulate by adding a W5100 Ethernet shield to his project. So far the Cabletron EA412 driver [David] is using is causing the Macintosh SE test system to crash after initial setup, but debugging continues.

It’s always great to see projects that aim to keep vintage hardware alive — like this mass repair of six Commodore 64s.


New MusicBrainz Virtual Machine released


We’ve finally released a new MusicBrainz virtual machine! This new version has become a lot more automated and is much easier to create and deploy. Hopefully we will be doing monthly releases of the VM from here on out.

A lot of things have changed for this new release. If you have used the VM before, you MUST read the instructions again. Before filing a bug or asking us a question, please re-read the documentation first!

Ready to jump in? Read the instructions.


OpenStack Developer Mailing List Digest December 17-23


SuccessBot Says

Release Countdown for Week R-8, 26-30 December

Storyboard Lives

 


Microsoft Intune: Windows 10 Device Enrollment


In today’s Ask the Admin, I’ll show you how to enable device enrollment in Microsoft Intune and enroll a Windows 10 PC.

Microsoft Intune is a lightweight cloud-based PC and mobile device management product that uses Mobile Device Management (MDM), a set of standards for managing mobile devices, instead of Active Directory (AD) Group Policy, which is a Windows-only technology. For more information about Intune, see Introduction to Microsoft Intune on the Petri IT Knowledgebase.

 

 

Windows 10 PCs connect with Azure Active Directory and are then automatically enrolled in Intune. Before you can complete the instructions below, you will need both a trial Intune account and Azure Active Directory (Premium) subscription. Although the accounts are free for the trial period, credit card details are required to sign up for Azure AD Premium. I recommend creating an Intune account first, and then using the same account details to create an Azure AD Premium subscription. This will ensure that the Azure AD Directory is associated with your Intune subscription.

Assign User Licenses

The first step is to assign at least one user an Intune license. Licensing is managed from the Office 365 management portal.

Assign an Intune license to a user (Image Credit: Russell Smith)

In the list of users, make sure that one of them has Intune A Direct listed in the status column. This might be the admin user for your Intune subscription or another user.

Assign an Intune license to a user (Image Credit: Russell Smith)
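If you prefer to script license assignment rather than use the portal, the MSOnline PowerShell module can do the same job. The following is only a minimal sketch, not the article’s method: the tenant prefix, user principal name, and the INTUNE_A SKU part number are assumptions you should verify with Get-MsolAccountSku.

    # Sketch only: assumes the MSOnline module and an Intune SKU of "tenant:INTUNE_A"
    Connect-MsolService                         # sign in with a tenant administrator account
    Get-MsolAccountSku                          # list the license SKUs available in the tenant

    # A usage location is required before a license can be assigned
    Set-MsolUser -UserPrincipalName "user@contoso.onmicrosoft.com" -UsageLocation "US"
    Set-MsolUserLicense -UserPrincipalName "user@contoso.onmicrosoft.com" -AddLicenses "contoso:INTUNE_A"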

Configure MDM Auto-Enrollment in Azure AD

To ensure that devices are automatically enrolled with Intune when they join Azure AD, you must configure MDM auto-enrollment for the directory.

Configure MDM Auto-enrollment in Azure AD (Image Credit: Russell Smith)

In a production environment, you’re more likely to want to control which devices are managed using Intune with Azure AD groups.

Enable Windows 10 Device Enrollment

The next step is to enable specific device platforms that can enroll in Intune. This is done from the Intune management portal.

Enable Windows 10 Device Enrollment (Image Credit: Russell Smith)

Enroll a Windows 10 Device

Now that MDM is set up for Windows devices in Intune, you can connect a Windows 10 device to Azure AD and it will automatically be enrolled to Intune.

Enroll a Windows 10 Device (Image Credit: Russell Smith)


In this article, I showed you how to set up automatic device enrollment in Microsoft Intune, and how to enroll a Windows 10 device.

The post Microsoft Intune: Windows 10 Device Enrollment appeared first on Petri.



Rackspace 2017 Predictions: VMware in 2017 – Multiple Clouds Will Need More Experts


Virtualization and Cloud executives share their predictions for 2017. Read them in this 9th annual VMblog.com series exclusive. Contributed by… Read more at VMblog.com.


Self-Serve Cloud Tools for Beginners Hit the Market


Brought to you by MSPmentor

A pair of newly released products aim to allow users with no technical knowledge to quickly spin up virtual servers and leverage public cloud services with the simplicity of using a smartphone.

Amazon Web Services’ “Lightsail” and “Cloud With Me,” a tool developed by a Dublin, Ireland-based AWS partner, suggest that available technology has reached a point where average consumers can now access public cloud directly from vendors.

Lightsail launched on Nov. 30 in northern Virginia and will be rolled out gradually to other regions across the country and worldwide. No firm dates have been announced.

Cloud With Me hit general availability today.

“Our solution allows you to adopt AWS in minutes with zero resources or tech knowledge,” said an entry on the features page of the Cloud With Me website. “And for those who want to connect to AWS directly, our Self Hosting option provides a quick and simple step-by-step guide to help you launch your AWS server in minutes.”

The Lightsail product is touted as a way to leverage the power, reliability and security of AWS public cloud, with the simplicity of a virtual private server.

“As your needs grow, you will have the ability to smoothly step outside of the initial boundaries and connect to additional AWS database, messaging, and content distribution services,” AWS Chief Evangelist Jeff Barr wrote in a blog post. “All in all, Lightsail is the easiest way for you to get started on AWS and jumpstart your cloud projects, while giving you a smooth, clear path into the future.”

A webinar on Lightsail is scheduled for Jan. 17, where the public can receive more information, the blog states.

Both products offer a handful of pre-configured server packages at a flat monthly rate, including DNS management, access to the AWS console, multiple installations and free or premium add-ons.

In addition to being widely available immediately, Cloud With Me officials boast other advantages over the competing product, including out-of-the-box business email, FTP functionality, built-in support for MySQL and intuitive integration with Google Analytics.

Cloud With Me says it plans to expand the tool to integrate with other cloud service providers.

Managed services providers (MSPs) and other channel firms are increasingly tackling the business challenges posed by the explosion of public cloud.

On one hand, migrating and managing cloud workloads and offering strategic IT advice presents potential new revenue opportunities.

At the same time, intense competition by some of tech’s biggest players is flooding the market with cheap cloud computing and innovative self-serve apps and tools.

A recent CompTIA study found that managing the competitive implications of “cloud computing” was the number one concern keeping MSPs up at night.

In another potential threat to the cloud revenue of MSPs, Amazon last week launched AWS Managed Services, which provides a full suite of IT services to large enterprises. Some industry experts have speculated it’s just a matter of time before AWS Managed Services begins to target mid-sized and small organizations.

This article first appeared here, on MSPmentor.


Google Cloud Platform icons and sample architectural diagrams, for your designing pleasure


Posted by Miles Ward, Global Head of Solutions, Google Cloud Platform

Technology makes more sense when you map it out. That’s why we now have icons and sample architectural diagrams for Google Cloud Platform (GCP) available to download. Using these icons, developers, architects and partners can represent complex cloud designs in white papers, datasheets, presentations and other technical content.

The icons are available in a wide variety of formats, and can be mixed and matched with icons from other cloud and infrastructure providers, to accurately represent hybrid- and multi-cloud configurations. There are icons representing GCP products, diagram elements, services, etc. View them below and at http://bit.ly/2hIP3n8.

We’ll update these files as we launch more products, so please check back.

To give you a flavor, below is one of more than 50 sample diagrams in Slides and Powerpoint. No need to start each diagram from scratch!

Happy diagramming!


OpenStack Developer Mailing List Digest December 10- 16


Updates

Allowing Teams Based on Vendor-specific Drivers (cont) [1]

Community Goals for Pike (cont.) [3]

Python changes in OpenStack CI [7]

Golang Technical Requirements [15]

Upgrade readiness check in Nova [11]

Self-service branch management [13]

Architectural discussion about nova-compute interactions [16]

 

[1] http://bit.ly/2h7g2oW

[2] http://bit.ly/2h68irL

[3] http://bit.ly/2h7dpn5

[4] http://bit.ly/2gZunHo

[5] http://bit.ly/2hlkRLB

[6] http://bit.ly/2h63l26

[7] http://bit.ly/2hl9T8M

[8] http://bit.ly/2h69BGU

[9] http://bit.ly/2hlg2Ss

[10] http://bit.ly/2h7iPyi

[11] http://bit.ly/2hlfDzu

[12] http://bit.ly/2h636Ux

[13] http://bit.ly/2hl4b7b

[14] http://bit.ly/2h6iZu6

[15] http://bit.ly/2hl7Qlg

[16] http://bit.ly/2h6dPhZ


Video: Installing a Flush AV Backbox with Syncbox


We noticed a new product making a bit of a stir in the custom install world, so we asked the makers to tell us a bit about it – Syncbox…



When companies are installing Syncbox, we make sure that they take pride in doing so, because making something look beautiful and elegant requires the care and precision that we pride ourselves on.

A Syncbox was recently installed in a stunning property in London that is currently being refurbished. The slim-line box adds to the property’s fresh new look, which is what the homeowner wanted. The homeowner was very impressed with the range of cover plate styles and finishes, whether “Exclusive Metal” or “Personal Plastic”. The metal cover plates are produced by Focus SB, well known for the high-quality covers used in luxury hotels and high-end residential properties. We were able to produce the “Antique Bronze” cover for the owner, which looks really nice in the living room.


The work was for a company called AV Innovation, who are extremely efficient when it comes to installations. The number of AV and electrical installers using Syncbox is growing continuously; they can clearly see that it beats the traditional system on time and cost, with the added value of its good looks.


Syncbox is now a complete range of products which can be turned into an advanced wiring build consisting of the four main elements: TV, Media, Audio & Power. Customers are able to adapt the Syncbox to their own unique specification and with large projects we can help with the technical drawings.

Syncbox is now being used by the UK’s largest and most prestigious house developers, such as Crest Nicholson, Redrow and Berkeley Homes, and it is backed by successful business owner Deborah Meaden for its unique style and endless potential.


Syncbox is the only choice for a professional TV installation: television manufacturers do not supply a recessed power outlet for plugs, or even a recessed outlet that lets their beautiful screens fit flush to the wall. With the connection cables plugged in, the cables protrude anything between 35mm and 40mm from the wall, so we created the first recessed power point. With Syncbox all your cable connections are recessed, so your flat screen doesn’t have to sit away from the wall.

The reason Syncbox is such an in-demand product is that it offers so many benefits compared to the traditional system:

  • Recesses all power & TV Mounting
  • Bespoke cover plates
  • Tidy & protected cabling
  • Ultra-flush TV mounting
  • Easy installation
  • Simple one box system
  • World-wide adaptable
  • Save time & money

Prices start from around £55.

sync-box.com


Grab an ARMful: OpenIO’s Scale-out storage with disk drive feel


Object storage networked drive JBOD with direct access


Disk drive as server; OpenIO SLS-4U96 disk drive nano-node

OpenIO has launched its SLS-4U96 product, a box of 96 directly addressed drives offering an object-storage interface and per-drive scale-out granularity.

The SLS-4U96 is a 4U box holding up to 96 vertically mounted 3.5-inch disk drives, providing up to 960TB of raw storage with 10TB drives and 1,152TB with 12TB drives. The disk drives are actually nano-servers, nano-nodes in OpenIO terms, as they each have on-drive data processing capability.

A nano-node contains:

  • ARM CPU – Marvell Armada-3700 dual-core Cortex-A53 ARMv8 @ 1.2GHz
  • Hot-swappable 10TB or 12TB 3.5-inch SATA nearline disk drive
  • Dual 2.5Gb/s SGMII (Serial Gigabit Media Independent Interface) ports

The amount of DRAM is not known.

The SLS-4U96 product has no controllers or motherboard in its enclosure, featuring dual 6-port 40Gbit/s Ethernet backend Marvell Prestera switches for access. These are for both client connectivity and direct chassis interconnect, which can scale up to more than 10PB per rack. The chassis has four x N+1 power supplies and 5 removable fan modules, and OpenIO says it has no single point of failure.


OpenIO SLS-4U96 disk drive with ARM board

The failure domain is a single drive which, with in-chassis erasure coding, is survivable. New or replaced nano-nodes join the SLS resource pool without needing a rebalancing operation and the process takes less than 60 seconds and doesn’t impact performance.

SLS stands for server-less storage by the way.

OpenIO CEO Laurent Denel said: “The SLS-4U96 hardware appliance revolutionises the storage landscape by providing network HDDs as an industrial reality.” He claims that, with a cost as low as $0.008/GB/month over 36 months for a fully populated configuration and the ease of use of its software, a single sysadmin can easily manage a large multi-petabyte environment at the lowest TCO.


Partially populated SLS-4U96

The open source SDS software used by SLS has these features:

  • Automatic nano-node discovery, setup and load balancing
  • Easy to use management via a web GUI, CLI and API
  • Local and geo-distributed object replica or erasure coding
  • Quick fault detection and recovery
  • Call-home support notifications
  • S3, Swift and Native object APIs
  • Multiple file sharing access methods: NFS, SMB, FTP, FUSE

It is said to be fully compatible with existing x86-based SDS installations, and a 3-node cluster can be deployed ready for use in five minutes. It can be a mixed hardware cluster as well.


OpenIO SLS-4u96 nano-nodes ready to be stuffed into the chassis

Okay, very good, but what is this storage for? OpenIO suggests email, video storage and processing in the media and entertainment industry, and enterprise file services. It says there are dedicated application connectors, such as an email connector for Cyrus, Zimbra and Dovecot, and a video connector to sustain demanding content services around adaptive streaming and event-based transcoding.

There’s no performance data available yet. Hopefully that will come out in the next few months.

Grab yourself a data sheet here; it’s more of a brochure actually. Get more technical white papers here. ®


AWS Webinars – January 2017 (Bonus: December Recap)


Have you had time to digest all of the announcements that we made at AWS re:Invent? Are you ready to debug with AWS X-Ray, analyze with Amazon QuickSight, or build conversational interfaces using Amazon Lex? Do you want to learn more about AWS Lambda, set up CI/CD with AWS CodeBuild, or use Polly to give your applications a voice?

January Webinars
In our continued quest to provide you with training and education resources, I am pleased to share the webinars that we have set up for January. These are free, but they do fill up and you should definitely register ahead of time. All times are PT and each webinar runs for one hour:

January 16:

January 17:

January 18:

January 19:

January 20

December Webinar Recap
The December webinar series is already complete; here’s a quick recap with links to the recordings:

December 12:

December 13:

December 14:

December 15:

Jeff;

PS – If you want to get a jump start on your 2017 learning objectives, the re:Invent 2016 Presentations and re:Invent 2016 Videos are just a click or two away.


Easy Ways to Motivate Yourself to Work When You’re Really Not Feeling It


We all have those days where we’re really not feeling it but we have to get some work done anyway. Whether that’s today or every day, this graphic offers a few tips to help you get energized and tackle your to-do list, project, or drudgework with vigor.


Let Your Whole Family Watch This Internet Security Basics Course


As the holidays get closer, you’re probably going to spend a lot of time with your family, many of whom will get shiny new devices. If they want your help setting up their new toys, consider making this internet security course required viewing.


Experts Expose Myths, Offer Best Practices for Office 365 Data Protection


Eran Farajun is the Executive Vice President for Asigra.

For many organizations, Microsoft Office 365 has become the essential cloud-based productivity platform. According to Microsoft public filings, it’s used by four out of five Fortune 500 companies, and at the other end of the scale, more than 50,000 small and medium sized companies sign up for the service every month. Its subscriber base grew nearly 80 percent in a 12-month period ending Q3 2016.

However, for many corporate subscribers, Office 365’s popularity and convenience may obscure a critical data retention and compliance requirement: the need for users to take responsibility for protecting their own data in cloud-based platforms such as Microsoft Office 365. While it is a highly secure platform, there is a lot more to comprehensive data protection than encryption and hard passwords.

To learn more about the importance of protecting data in cloud-based platforms, I asked three data protection professionals to join me for a discussion exploring why protection of Office 365 data is mission critical. Accompanying me on the panel were Chad Whaley, CEO of Echopath, an IT services and data backup company based in Indiana; James Chillman, managing director of UK Backup, a provider of cloud backup and disaster recovery services in England; and Jesse Maldonado, director of project services at Centre Technologies, an IT solutions provider out of Texas.

I began by asking the panel to identify the top myths about data protection they encounter when talking to customers about Microsoft Office 365.

Chillman: The top misunderstanding we encounter is that people assume that, by signing up for Office 365, Microsoft has now taken charge of their data. However, that’s not true. Microsoft is responsible for running the service and keeping it secure. They do a great job and aren’t going to destroy your data. However, users are still responsible for managing their data and protecting it from threats such as accidents, malicious behavior and ransomware attacks.

Maldonado: We often run into the perception that Office 365 data is not mission critical, and that only data from enterprise resource planning (ERP) solutions or other line-of-business applications need to be protected. That’s simply not the case. Office 365 is at the heart of business communication, and particularly for organizations with compliance requirements, the data created and stored in Office 365 is vital and must be protected.

Whaley: Many customers are drawn to Office 365 by the potential cost savings, but are surprised to find that there are still costs associated with storing data in the cloud. It’s still your data, whether it’s in your data center or Microsoft’s cloud, and if you want to ensure it’s protected, you will need to have a data protection plan. The fact that you have to manage your data doesn’t change.

Farajun: What consequences have your customers experienced due to insufficient protection of Office 365 data?

Chillman: We’re seeing a huge increase in the number of restores due to ransomware attacks—it’s our main area of focus when it comes to retrieving client data. The consequences of ransomware are very serious, including the cost of downtime, loss of earnings and potential fines from breaking data protection laws. We’ve had customers who believe moving data to Office 365 protects their data from ransomware. But that’s not true. If ransomware has infected your data center and you sync to Office 365, then the ransomware can spread to your cloud-based data too. Microsoft does its best to protect against malware but ransomware is becoming much more advanced and it changes every day. It’s a huge problem.

Whaley: I was looking at a study of unscheduled downtime, and found that two factors – human error and software malfunction—accounted for 40 percent of all downtime. Moving your data to Office 365 doesn’t do anything to change these threats. Human error is still very prevalent, like the proverbial Bob in Accounting who deletes all of his data and doesn’t notice for 45 days, at which point it’s gone. The largest restore we’ve ever done was due to an admin who didn’t use Office 365 properly and ended up purging a massive amount of data. Human error is still very much at the forefront of downtime risks and you have to protect against it. As for software, whether it’s on premises or in the cloud, it’s still Microsoft Office and it’s susceptible to the same glitches in either location.

Maldonado: Without comprehensive data protection, data can be lost or destroyed just as easily in the cloud as in the data center. If a Word document disappears and has to be recreated from the ground up, a company will lose productivity. We’ve seen instances where data loss events have led to organizations going out of business—they were never able to recover from the data loss.

Farajun: What considerations and best practices do you recommend to your customers when discussing Office 365 data protection?

Chillman: We make sure that our customers understand the core data protection capabilities built into Office 365. Then we look at how to address the gaps. We work with customers to define service-level agreements to determine what data retention policies they need for their particular business requirements. We also make sure customers understand that they are still ultimately responsible for their data in the cloud. You need to make sure your data protection solution gives you the power and flexibility to manage it effectively.

Maldonado: We find that a lot of customers haven’t defined the Recovery Time Objective (RTO) or Recovery Point Objective (RPO) for their business, so we help them determine their tolerance for data loss. We also help them understand what data retention requirements they must comply with due to regulation. For instance, healthcare and financial organizations have strict guidelines about what data must be stored and for how long.

Whaley: For Office 365 data protection, the best practice we recommend is to plan your solution before you move your data there. For many businesses, data protection is an afterthought. We recommend that our customers get to know their data, understand what’s critical and what’s not, and make sure they realize, whether it’s in the cloud or on premises, that they are ultimately responsible for it.

Farajun: In conclusion, I would add that Microsoft Office 365 offers great simplicity and cost savings for businesses seeking to place their productivity tools in the cloud. However, email and document retention requirements still apply and must be followed regardless of where your data is stored. Microsoft Office 365 provides basic data recovery and archiving capabilities, but this elemental level of protection may not satisfy your compliance obligations. To mitigate your risk and meet compliance mandates, protect your Office 365 data the same way you would protect your on-premise data to avoid data loss as a result of intentional or accidental user error, ransomware attacks, unplanned data overwrites or other breaches. This requires a comprehensive approach to data protection that protects all enterprise data from any source, including Office 365, with a single, easily managed solution.


Disaster Recovery using Amazon Web Services (AWS)


“You can’t predict a disaster, but you can be prepared for one!” Disaster recovery is one of the biggest challenges for infrastructure. Amazon Web Services allows us to easily tackle this challenge and ensure business continuity. In this post, we’ll take a look at what disaster recovery means, compare traditional disaster recovery versus that in the cloud, and explore essential AWS services for your disaster recovery plan.

What is Disaster Recovery?

There are several disaster scenarios that can impact your infrastructure. These include natural disasters such as an earthquake or fire, as well as those caused by human error such as unauthorized access to data, or malicious attacks.

“Any event that has a negative impact on a company’s business continuity or finances could be termed a disaster.”

In any case, it is crucial to have a tested disaster recovery plan ready. A disaster recovery plan will ensure that our application stays online no matter the circumstances. Ideally, it ensures that users will experience zero, or at worst, minimal issues while using your application.

If we’re talking about on-premise data centers, a disaster recovery plan is expensive to implement and maintain. Often, such plans are insufficiently tested or poorly documented, and as such are inadequate for protecting resources. More often than not, companies with a good disaster recovery plan on paper aren’t capable of executing it because it was never tested in a real environment. As a result, users cannot access the application and the company suffers significant losses.

Let’s take a closer look at some of the important terminology associated with disaster recovery:

Business Continuity. All of our applications require Business Continuity. Business Continuity ensures that an organization’s critical business functions continue to operate or recover quickly despite serious incidents.

Disaster Recovery. Disaster Recovery (DR) enables recovery or continuation of vital technology infrastructure and systems following a natural or human-induced disaster.

RPO and RTO. Recovery Point Objective (RPO) and Recovery Time Objective (RTO) are the two most important parts of a good DR plan for our workflow. Recovery Point Objective (RPO) is the maximum targeted period in which data might be lost from an IT service due to a major incident. Recovery Time Objective (RTO) is the targeted time period within which a business process must be restored after a disaster or disruption to service.

Recovery Point Objective (RPO) and Recovery Time Objective (RTO)

Traditional Disaster Recovery plan (on-premise)

A traditional on-premise Disaster Recovery plan often includes a fully duplicated infrastructure that is physically separate from the infrastructure that contains our production. In this case, an additional financial investment is required to cover expenses related to hardware and for maintenance and testing. When it comes to on-premise data centers, physical access to the infrastructure is often overlooked.

These are the security requirements for an on-premise data center disaster recovery infrastructure:

  • Facilities to house the infrastructure, including power and cooling.
  • Security to ensure the physical protection of assets.
  • Suitable capacity to scale the environment.
  • Support for repairing, replacing, and refreshing the infrastructure.
  • Contractual agreements with an internet service provider (ISP) to provide internet connectivity that can sustain bandwidth utilization for the environment under a full load.
  • Network infrastructure such as firewalls, routers, switches, and load balancers.
  • Enough server capacity to run all mission-critical services. This includes storage appliances for the supporting data, and servers to run applications and backend services such as user authentication, Domain Name System (DNS), Dynamic Host Configuration Protocol (DHCP), monitoring, and alerting.

Obviously, this kind of disaster recovery plan requires large investments in building disaster recovery sites or data centers (CAPEX). In addition, storage, backup, archival and retrieval tools, and processes (OPEX) are also expensive. And, all of these processes, especially installing new equipment, take time.

An on-premise disaster recovery plan can be challenging to document, test, and verify, especially if you have multiple clients on a single infrastructure. In this scenario, all clients on this infrastructure will experience problems with performance even if only one client’s data is corrupted.

Disaster Recovery plan on AWS

There are many advantages of implementing a disaster recovery plan on AWS.

Financially, we will only need to invest a small amount in advance (CAPEX), and we won’t have to worry about the physical expenses for resources (for example, hardware delivery) that we would have in an “on-premise” data center.

AWS enables high flexibility, as we don’t need to perform a failover of the entire site in case only one part of our application isn’t working properly. Scaling is fast and easy. Most importantly, AWS allows a “pay as you use” (OPEX) model, so we don’t have to spend a lot in advance.

Also, AWS services allow us to fully automate our disaster recovery plan. This results in much easier testing, maintenance, and documentation of the DR plan itself.

This table shows the AWS service equivalents to an infrastructure inside an on-premise data center.

On-premise data center infrastructure     AWS infrastructure
DNS                                       Route 53
Load balancers                            ELB/appliance
Web/app servers                           EC2/Auto Scaling
Database servers                          RDS
AD/authentication                         AD failover nodes
Data centers                              Availability Zones
Disaster recovery                         Multi-region

 

Essential AWS Services for Disaster Recovery

While planning and preparing a DR plan, we’ll need to think about the AWS services we can use. Also, we need to understand how our selected services support data migration and durable storage. These are some of the key features and services that you should consider when creating your Disaster Recovery plan:

AWS Regions and Availability Zones  –  The AWS Cloud infrastructure is built around Regions and Availability Zones (“AZs”). A Region is a physical location in the world that has multiple Availability Zones. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity housed in separate facilities. These AZs allow you to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single data center.

Amazon S3 – Provides a highly durable storage infrastructure designed for mission-critical and primary data storage. Objects are redundantly stored on multiple devices across multiple facilities within a region and are designed to provide a durability of 99.999999999% (11 9s).

Amazon Glacier – Provides extremely low-cost storage for data archiving and backup. Objects are optimized for infrequent access, for which retrieval times of several hours are adequate.

Amazon EBS –  Provides the ability to create point-in-time snapshots of data volumes. You can use the snapshots as the starting point for new Amazon EBS volumes. And, you can protect your data for long-term durability because snapshots are stored within Amazon S3.

AWS Import/Export – Accelerates moving large amounts of data into and out of AWS by using portable storage devices for transport. The AWS Import/Export service bypasses the internet and transfers your data directly onto and off of storage devices using Amazon’s high-speed internal network.

AWS Storage Gateway is a service that connects an on-premise software appliance with cloud-based storage. This provides seamless, highly secure integration between your on-premise IT environment and the AWS storage infrastructure.

Amazon EC2 – Provides resizable compute capacity in the cloud. In the context of DR, the ability to rapidly create virtual machines that you can control is critical.

Amazon EC2 VM Import Connector enables you to import virtual machine images from your existing environment to Amazon EC2 instances.

Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service.

Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances.

Amazon VPC allows you to provision a private, isolated section of the AWS cloud. Here,  you can launch AWS resources in a virtual network that you define.

Amazon Direct Connect makes it easy to set up a dedicated network connection from your premises to AWS.

Amazon RDS makes it easy to set up, operate, and scale a relational database in the cloud.

AWS CloudFormation gives developers and systems administrators an easy way to create a collection of related AWS resources and provision them in an orderly and predictable fashion. You can create templates for your environments and deploy associated collections of resources (called a stack) as needed.

Disaster Recovery Scenarios with AWS

There are several strategies that we can use for disaster recovery of our on-premise data center using AWS infrastructure:

  • Backup and Restore
  • Pilot Light
  • Warm Standby
  • Multi-Site

Backup and Restore

The Backup and Restore scenario is an entry level form of disaster recovery on AWS. This approach is the most suitable one in the event that you don’t have a DR plan.

In on-premise data centers, data backup would be stored on tape. Obviously it will take time to recover data from tapes in the event of a disaster. For Backup and Restore scenarios using AWS services, we can store our data on Amazon S3 storage, making them immediately available if a disaster occurs. If we have a large amount of data that needs to be stored on Amazon S3, ideally we would use AWS Export/Import or even AWS Snowball to store our data on S3 as soon as possible.

AWS Storage Gateway enables snapshots of your on-premise data volumes to be transparently copied into Amazon S3 for backup. You can subsequently create local volumes or Amazon EBS volumes from these snapshots.

Backup and Restore scenario

The Backup and Restore plan is suitable for lower level business-critical applications. This is also an extremely cost-effective scenario and one that is most often used when we need backup storage. If we use a compression and de-duplication tool, we can further decrease our expenses here. For this scenario, RTO will be as long as it takes to bring up infrastructure and restore the system from backups. RPO will be the time since the last backup.
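To make the idea concrete, here is a minimal sketch using the AWS Tools for PowerShell; the bucket name, file path, and volume ID are placeholders, and credentials are assumed to be configured already (for example with Set-AWSCredential).

    # Sketch: push a backup file to S3 and snapshot an EBS data volume
    Import-Module AWSPowerShell

    # Copy a local backup to S3, where it is immediately available for restore
    Write-S3Object -BucketName "my-dr-backups" -Key "db/2017-01-02/backup.bak" -File "D:\Backups\backup.bak"

    # Create a point-in-time snapshot of an EBS volume (stored durably in S3 behind the scenes)
    New-EC2Snapshot -VolumeId "vol-0123456789abcdef0" -Description "Nightly DR snapshot"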

Pilot Light

The term “Pilot Light” is often used to describe a DR scenario where a minimal version of an environment is always running in the cloud. This scenario is similar to a Backup and Restore scenario. For example, with AWS you can maintain a Pilot Light by configuring and running the most critical core elements of your system in AWS. When the time comes for recovery, you can rapidly provision a full-scale production environment around the critical core.

Pilot Light scenario

A Pilot Light scenario is suitable for solutions that require a lower RTO and RPO. This scenario is a mid-range cost DR solution.

Warm Standby

A Warm Standby scenario is an expansion of the Pilot Light scenario in which some services are always up and running. When building the DR plan, we need to identify the crucial parts of our on-premise infrastructure and then duplicate them inside AWS. In most cases, we’re talking about web and app servers running on a minimum-sized fleet. Once a disaster occurs, the infrastructure located on AWS takes over the traffic, scales up, and converts to a fully functional production environment with minimal RPO and RTO.

Warm Standby scenario

The Warm Standby scenario is more expensive than Backup and Restore and Pilot Light because in this case, our infrastructure is up and running on AWS. This is a suitable solution for core business-critical functions and in cases where RTO and RPO need to be measured in minutes.

Multi-Site

The Multi-Site scenario is a solution for an infrastructure that is up and running completely on AWS as well as on an “on-premise” data center. By using the weighted route policy on Amazon Route 53 DNS, part of the traffic is redirected to the AWS infrastructure, while the other part is redirected to the on-premise infrastructure.

Data is replicated or mirrored to the AWS infrastructure.

Multi-Site scenario

In a disaster event, all traffic will be redirected to the AWS infrastructure. This scenario is also the most expensive option, and it presents the last step toward full migration to an AWS infrastructure. Here, RTO and RPO are very low, and this scenario is intended for critical applications that demand minimal or no downtime.
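As a rough illustration of the weighted routing piece, the sketch below uses the AWS Tools for PowerShell and the Route 53 .NET model classes; the zone ID, record name, IP address, and weights are placeholders and should be adapted to your own split between the on-premise and AWS endpoints.

    # Sketch: publish a weighted A record that sends a share of traffic to the AWS endpoint
    $record                = New-Object Amazon.Route53.Model.ResourceRecord
    $record.Value          = "203.0.113.10"          # example AWS-side endpoint

    $rrset                 = New-Object Amazon.Route53.Model.ResourceRecordSet
    $rrset.Name            = "www.example.com."
    $rrset.Type            = "A"
    $rrset.TTL             = 60
    $rrset.SetIdentifier   = "aws-site"              # identifies this member of the weighted set
    $rrset.Weight          = 50                      # relative share of traffic sent to AWS
    $rrset.ResourceRecords.Add($record)

    $change                   = New-Object Amazon.Route53.Model.Change
    $change.Action            = "UPSERT"
    $change.ResourceRecordSet = $rrset

    Edit-R53ResourceRecordSet -HostedZoneId "Z1EXAMPLE" -ChangeBatch_Change $change

A matching record with its own SetIdentifier and weight would point at the on-premise endpoint; in a disaster, setting that weight to 0 sends all traffic to AWS.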

Wrap up

There are many options and scenarios for Disaster Recovery planning on AWS.

The scope of possibilities has been expanded further with AWS’ announcement of its strategic partnership with VMware. Thanks to this partnership, users can expand their on-premise infrastructure (virtualized using VMware tools) to AWS, and create a DR plan via resources provided by AWS using VMware tools that they are already accustomed to using.

Don’t allow any kind of disaster to take you by surprise. Be proactive and create the DR plan that best suits your needs.

Currently working as CLOUDWEBOPS OÜ AWS Consultant, as well as Data Architect/DBA and DevOps @WizardHealth. Proud owner of AWS Solutions and DevOps Associate certificates. Besides databases and cloud computing, interested in automation and security. Founder and co-organizer of the first AWS User Group in Bosnia and Herzegovina.


Deploy Domain Controllers as Azure Virtual Machines


This guide will show you how to deploy an Azure virtual machine as a domain controller (DC).

Extending Your Domain

There are many reasons why you would extend your existing domain into the cloud, including:

 

 

To extend an existing domain you will need:

Afterwards, you will use AD Sites and Services to create:

Specify Your Domain Controller Virtual Machines

For all but the Fortune 1000s, DCs are usually lightweight machines that do nothing other than DNS, authentication, and authorization. For this reason, I go with the cheapest option for virtual machines in Azure, the Basic A-series. The 300 IOPS limit for a data disk doesn’t impact AD performance, and the lack of load balancer support (NAT rules for RDP and load balancing rules) doesn’t hurt either, because domain controllers shouldn’t be visible on the Internet.

The sizing of the machines depends on the memory requirements. In my small deployments, I’ve opted for a Basic A2 (to run Azure AD Connect for small labs or businesses) and a Basic A1 as the alternative machine; memory requirements for Azure AD Connect depend on the number of users being replicated to Azure AD.

Larger businesses should use empirical data on resource utilization of their DCs to size Azure virtual machines. Maybe an F-Series, or a Dv2-Series would suit, or in extreme scenarios, maybe they’ll need “S” machines that support Premium Storage (SSD) accounts for data disks.

Building the Domain Controller

There are a couple of things to consider when deploying a new Azure virtual machine that will be a DC. You should deploy at least 2 DCs. To keep your domain highly available, during localized faults or planned maintenance, create the machines in an availability set. Note that Azure Resource Manager (ARM) currently won’t allow you to add a virtual machine to an availability set after creating the machine.

Creating new domain controllers in an Azure availability set [Image Credit: Aidan Finn]
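If you build the machines with PowerShell, the availability set can be created up front. A minimal sketch with the Az module (the article predates Az, so treat the cmdlet names as indicative; AzureRM equivalents exist), using placeholder names:

    # Sketch: create an availability set to place both DC virtual machines in
    New-AzAvailabilitySet -ResourceGroupName "rg-identity" -Name "as-dc" `
        -Location "westeurope" -Sku Aligned `
        -PlatformFaultDomainCount 2 -PlatformUpdateDomainCount 5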

You must store all AD Directory Services (DS) files on a non-caching data disk to be supported and to avoid USN rollbacks. Once the machine is created, open the settings of the machine in the Azure Portal, browse to Disks, and click Attach New.


Give the disk a name that is informative, size the disk, and make sure that host caching is disabled (to avoid problems and to be supported).

Add a data disk to the Azure domain controller [Image Credit: Aidan Finn]

Once the machine is deployed, log into it and launch Disk Management. Bring the new data disk online and format the new disk with an NTFS volume.

The disks of a virtual DC in Azure [Image Credit: Aidan Finn]
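The same attach-and-format steps can be scripted. This is only a sketch, assuming the Az module on the management side and the in-guest Storage cmdlets, with placeholder names and sizes:

    # Fabric side: attach an empty managed data disk with host caching disabled
    $vm = Get-AzVM -ResourceGroupName "rg-identity" -Name "dc01"
    $vm = Add-AzVMDataDisk -VM $vm -Name "dc01-data" -Lun 0 -Caching None -DiskSizeInGB 100 -CreateOption Empty
    Update-AzVM -ResourceGroupName "rg-identity" -VM $vm

    # Guest side: bring the raw disk online and format it as an NTFS volume
    Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -AssignDriveLetter -UseMaximumSize |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "ADDS"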

Note that Standard Storage (HDD) accounts only charge you for data stored within the virtual hard disk, not for the size of the disk. Azure Backup does charge an instance fee based on the total size of disks, but going from 137GB (the OS disk) to 237GB (with a 100GB data disk) won’t increase this fee (the price band is 50-500GB).

Static IP Address

A DC must have a static IP address. Do not edit the IP configuration of the virtual machine in the guest OS. Bad things will happen and careers will be shortened. Ignore any errors you’ll see later about DHCP configurations; your virtual machine will have a static IP address, but using the method that is supported in Azure.

Using the Azure Portal, identify the NIC of your DC and edit its settings. Browse to IP Configurations and click the IP configuration; here you’ll see the Azure-assigned IP configuration of this NIC. Change the assignment from Dynamic to Static. You can reuse the already assigned IP address, or you can enter a new (unused) one that is valid for the subnet. Note the IP address for later.


Configuring a static IP address for the Azure domain controller [Image Credit: Aidan Finn]
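The same change can be made with the Az PowerShell module; a minimal sketch, assuming placeholder resource names and an address that is valid (and unused) in the subnet:

    # Sketch: pin the DC's NIC to a static private IP at the Azure fabric level
    $nic = Get-AzNetworkInterface -ResourceGroupName "rg-identity" -Name "dc01-nic"
    $nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
    $nic.IpConfigurations[0].PrivateIpAddress          = "10.0.0.10"
    Set-AzNetworkInterface -NetworkInterface $nic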

Virtual Network DNS

AD admins will know that things can vary. If your first DC in Azure is joining an on-premises domain, then you will:

  1. Temporarily configure the VNet to use the IP addresses of 1 or more on-premises DCs as DNS server.
  2. Perform the first DC promotion.
  3. Reset the VNet DNS settings to use the in-Azure DCs as DNS servers.

In this lab, I’m building a new/isolated domain, so I will simply edit the DNS settings of the VNet to use the new static IP address of my DC virtual machine.

Open the settings of the virtual network and browse to DNS Servers. Change the option from Default (Azure-Provided) to Custom, and enter the IP address(es) of the machine(s) that will be your DCs.

Configuring a static IP address for the Azure domain controller [Image Credit: Aidan Finn]

This option, along with the default gateway of the subnet and the static IP address of the machine’s NIC, will form the static IP configuration of your DC(s).
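Scripted, the DNS change looks roughly like this (Az module, placeholder names; it assumes the DhcpOptions property is already initialized on the returned virtual network object):

    # Sketch: point the virtual network at the in-Azure DC for DNS
    $vnet = Get-AzVirtualNetwork -ResourceGroupName "rg-identity" -Name "vnet-identity"
    $vnet.DhcpOptions.DnsServers.Clear()
    $vnet.DhcpOptions.DnsServers.Add("10.0.0.10")    # static IP of the DC
    Set-AzVirtualNetwork -VirtualNetwork $vnet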

Promote the Domain Controller

Log into your DC, add the Active Directory Domain Services (AD DS) role, and start the DC promotion. Continue as normal until you get to the Paths screen; this is where you will instruct the AD DS configuration wizard to store the AD files on the data disk (F: in this case) instead of the %systemroot% (normally C:).

Change the AD DS paths to use an Azure data disk [Image Credit: Aidan Finn]

Complete the wizard. You will see a warning in the Prerequisites Check screen, and probably later in the event logs about the DC having a DHCP configuration – remember that the guest OS must be left with a DHCP configuration, and you have configured a static IP configuration in the Azure fabric.

Ignore the warning about a DHCP configuration in the Azure machine’s guest OS [Image Credit: Aidan Finn]

You can complete the wizard to get your DC functional.
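If you prefer to promote from PowerShell rather than Server Manager, the data-disk paths can be passed explicitly. This sketch adds a DC to an existing domain (a brand-new forest would use Install-ADDSForest instead); the domain name and credentials are placeholders:

    # Sketch: install the role and promote, keeping the AD DS files on the F: data disk
    Install-WindowsFeature AD-Domain-Services -IncludeManagementTools

    Install-ADDSDomainController -DomainName "corp.contoso.com" `
        -InstallDns `
        -Credential (Get-Credential "CORP\Administrator") `
        -DatabasePath "F:\NTDS" -LogPath "F:\NTDS" -SysvolPath "F:\SYSVOL"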

If you are extending your on-premises domain, remember to change the DNS settings of your VNet after verifying in Event Viewer that the DC is fully active after a first complete sync of the AD databases and SYSVOL.

Active Directory Sites and Services

The final step in the build process is to ensure that the Active Directory topology is modified or up-to-date. New subnets (the network address of your virtual network) should be added to AD in Active Directory Sites and Services. Sites should be created and the subnets should be added to those sites. And new IP inter-site transports should be added to take control of replication paths and intervals between any on-premises sites and your in-Azure site(s).

My simple Azure-based domain’s topology [Image Credit: Aidan Finn]
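The topology changes can also be made with the ActiveDirectory PowerShell module; a sketch with placeholder site names, subnet, and schedule:

    # Sketch: create a site for the Azure virtual network, register its subnet,
    # and link it to an existing on-premises site
    New-ADReplicationSite -Name "Azure-WestEurope"
    New-ADReplicationSubnet -Name "10.0.0.0/24" -Site "Azure-WestEurope"
    New-ADReplicationSiteLink -Name "OnPrem-Azure" `
        -SitesIncluded "Head-Office","Azure-WestEurope" `
        -Cost 100 -ReplicationFrequencyInMinutes 15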


As usual, make sure that you test AD and SysVol replication between and inside of sites, verify that DNS is running, and that the AD replication logs are clear.

The post Deploy Domain Controllers as Azure Virtual Machines appeared first on Petri.


Remotely Monitor a Raspberry Pi To See What’s Running and Get Notifications If Something Goes Wrong


If you’re running a Raspberry Pi that’s doing something in the background, like working as a security camera system or a weather station, then it’s good to know exactly what it’s up to no matter where you are. Initial State shows off how to build a dashboard that keeps you up to date and notifies you if anything goes wrong.


Office 365 Mailbox Quotas Swelling to 100 GB


Microsoft Stays Quiet but Office 365 Roadmap Reveals All

Microsoft hasn’t said anything about increasing the default quota for Exchange Online mailboxes from the previous 50 GB limit, so it came as a surprise when the Office 365 Roadmap announced that an increase was on the way (Figure 1).


Figure 1: The Office 365 Roadmap announces the change (image credit: Tony Redmond)

The last increase occurred in August 2013 when Microsoft upped mailbox quotas from 25 GB to 50 GB.

You might wonder why Microsoft is increasing mailbox quotas within Exchange Online. After all, relatively few individuals need more than 50 GB. Well, storage is cheap, especially when bought in the quantities that Microsoft purchases to equip hundreds of thousands of Office 365 servers. And because storage is cheap, Microsoft is able to offer users enough of it to keep all their data online.

It’s also a competitive advantage when Office 365 provides 100 GB mailboxes and Google’s G Suite is limited to 30 GB (shared between Gmail, Google Drive, and Google Photos).

Apart from anything else, storing data online makes sure that it is indexed, discoverable, and comes under the control of the data governance policies that you can apply within Office 365.

In particular, keeping data online is goodness because it means that users don’t have to stuff information into PST files. PSTs are insecure, prone to failure, invisible for compliance purposes, and a throwback to a time when storage was expensive and mailbox quotas small. Given the size of online quotas available today, there’s really no excuse for Office 365 tenants to tolerate PST usage any more. It’s time to find, remove, and ingest user PSTs via the Office 365 Import Service or a commercial product like QUADROtech’s PST FlightDeck.

Rolling Out to Exchange Online

According to the roadmap, the new 100 GB limit is “rolling out”. However, I have not yet seen an increase in my tenant. When the upgrade happens, any mailbox that has not been assigned a specific quota will receive the increase; in other words, the increase applies to every mailbox whose quota an administrator has not explicitly changed (usually to reduce the limit).

To change a mailbox quota, you might expect to use the Office 365 Admin Center or Exchange Online Administration Center (EAC). The normal path to editing settings is to select a user, find what you need to change, and do it. In this case, you access Exchange properties under Mail Settings in the Office 365 Admin Center or select the mailbox in EAC. Either way, you’ll end up with the screen shown in Figure 2.

Figure 2: Editing Exchange Online mailbox quotas (image credit: Tony Redmond)

The problem is that there is no way to amend mailbox quotas here. Clicking the Learn More link brings us to a page that tells us that a More options link should be available to allow the mailbox quotas to be updated. The page does say that it relates to Exchange 2013 so it’s annoying to find it displayed when working with Exchange Online.

A further search locates a knowledge base article that recommends using PowerShell to set Exchange Online mailbox limits. The logic is that most administrators are likely to leave mailbox quotas alone (that’s one of the reasons to make the quotas so large), so why clutter up the GUI with unnecessary options.

I’m pretty Exchange-literate so being brought from page to page to discover how to perform a pretty simple task doesn’t disturb me too much, but it’s not a good experience for a new administrator.

PowerShell Does the Trick

PowerShell is often the right way to perform a task inside Office 365. In this case, the Set-Mailbox cmdlet can be used to update the three quota settings that determine how a mailbox behaves. This example shows how to set mailbox quotas:

[PS] C:\> Set-Mailbox -Identity TRedmond -ProhibitSendQuota 75GB -ProhibitSendReceiveQuota 80GB -IssueWarningQuota 73GB

Logically, warnings should sound before limits cut in to stop users doing work. A gap of a gigabyte or two between warning and limit should be sufficient for a user to take the hint and either clean out their mailbox or request an increased limit. A well-designed retention policy also helps as it can remove old items without user intervention to keep mailboxes under quota.
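
To check what a mailbox currently has, or to confirm the change afterwards, Get-Mailbox returns the same three quota properties:

[PS] C:\> Get-Mailbox -Identity TRedmond | Format-List IssueWarningQuota, ProhibitSendQuota, ProhibitSendReceiveQuota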

New Limits for Some Plans

The new mailbox quotas will only apply to the Office 365 E3 and E5 plans. Other plans will remain with the 50 GB quota as described in the Exchange Online limits page (which hasn’t yet been updated to reflect this change).

Consider Before You Fill

Having a large mailbox can be an advantage. It can also create some challenges. Search is much better today than ever before, but looking for a particular item can still sometimes be like looking for the proverbial needle in the haystack. That’s why I delete items I know I don’t need. Or think I don’t need (Recoverable Items save the day).

More importantly, if you use the Outlook desktop client, consider how much data you want to cache locally and how well the hard disk on your PC will cope with the size of that cache (the OST file). PCs equipped with fast SSDs usually perform well up to the 10 GB mark and slow thereafter. PCs with slow-spinning 5,400 rpm hard drives will pause for thought well before that.

The solution? Use the Outlook “slider” to restrict the amount of data synchronized to the cache. Outlook 2016 lets you cache anything from just three days of mail (suitable for desktop virtualization projects) up to “All”. Setting the slider to a year or so is reasonable for most people. That is, unless you absolutely insist on caching all of your mailbox. If so, invest in fast disks.

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros,” the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle.

The post Office 365 Mailbox Quotas Swelling to 100 GB appeared first on Petri.

Posted on in category News

List of websites to download old version software for Windows

The content below is taken from the original (List of websites to download old version software for Windows), to continue reading please visit the site. Remember to respect the Author & Copyright.

While it is always recommended to run the latest, upgraded version of software, we sometimes need to use an older version: perhaps the upgraded version is not compatible with your Windows PC, you don’t really like the upgraded features and UI, or the software has since gone paid. Usually, developers delete the older versions or replace them with the upgraded ones, but thankfully there are some websites which help you download old versions of software. In this post, we will discuss the five best websites to download old version software for Windows.

1. Oldversion.com

Running since 2001, this website has an extensive collection of old software for Windows, Linux, Android, and Mac. More than 2,800 versions of 190 programs are listed in proper categories. There is also a search box where you can find the desired program in no time, and the site has its own forum where you can post queries about the software and the versions you need.

You can browse the software by category or alphabetically. Both current and older versions of programs are available for download. This is one of the best websites to download older versions of software. Check it here.

2. Oldware.org

This is again a well-organized website offering old versions of popular Windows software. The extensive list includes around 2,400 programs. All programs are displayed alphabetically, and there is also a quick-jump option where you can select the desired program from a drop-down menu. Almost every program is verified by the website author.

A simple user interface and an alphabetically organized list of software make this website worth adding to the list of the best websites to download old version software for Windows. The homepage also shows the ten most recently added files and the most popular downloads. Just click on any program and download the version you need. Check it here.

3. OldApps.com

A detailed website with proper categorization of software and the various versions available for Windows, Mac, and Linux. The home page shows it all: just go to the desired category and select the program you want to download. The wide range of categories includes browsers, messengers, file-sharing programs, and a lot more. Click, and you can see the various versions available for free download. The website shows the release date of each program, the size of the setup file, and the supported operating systems.

You will probably find the oldest versions of most of the programs listed here. Tabs like ‘Recently Added Apps’, ‘Apps for Windows’, and ‘Most Downloaded Apps’ give you quick access to the programs. There is also a Community page on the website, but it seems to be down currently. You can also use the search tab to go directly to the program you want to download. Check it here.

4. Last Freeware Version

This website lists old versions of almost every popular program, but the interface is a bit clumsy compared to the other download websites mentioned above. You need some time to get accustomed to the interface before you can find the program you need.

The software programs here are listed neither alphabetically nor by category. But the plus point is that it lists free versions of some really good programs that are now available only as paid versions. Visit 321download.com.

5. PortableApps.com

This website primarily provides the latest versions of software, but it lists the older versions too. Its huge collection of popular software includes more than 300 real portable apps, with no bundleware or shovelware.

The website has its own support forum where you can post a query and get help. As the name suggests, the website offers portable apps which you can carry on your cloud drive or a portable device. Be it your favorite games, photo-editing software, Office apps, media player apps, utilities, or more, the website offers them all. In short, it is a platform offering all portable apps tied together.

Always use safe software download sites and never blindly click Next, Next. Opt out of third-party offers to avoid getting Potentially Unwanted Programs installed on your computer.



Posted on in category News

Rugged module runs Yocto Linux on up to 12-core Xeon-D

The content below is taken from the original (Rugged module runs Yocto Linux on up to 12-core Xeon-D), to continue reading please visit the site. Remember to respect the Author & Copyright.

Eurotech’s “CPU-161-18” is a headless COM Express Type 6 Compact module with a 12-core Xeon-D, up to 24GB of DDR4, PCIe x16, and wide-temperature operation. Like Advantech’s SOM-5991, Eurotech’s CPU-161-18 is a “server-class” COM Express Type 6 Compact module aimed at high-end embedded applications, equipped with Intel’s 14nm “Broadwell”-based Xeon D-1500 SoCs. The module […]

Posted on in category News

Google launches first developer preview of Android Things, its new IoT platform

The content below is taken from the original (Google launches first developer preview of Android Things, its new IoT platform), to continue reading please visit the site. Remember to respect the Author & Copyright.

Google today announced Android Things, its new comprehensive IoT platform for building smart devices on top of Android APIs and Google’s own services. Android Things is now available as a developer preview.

Essentially, this is Android for IoT. It combines Google’s earlier efforts around Brillo (which was also Android-based but never saw any major uptake from developers) with its Android developer tools like Android Studio, the Android SDK, Google Play Services and Google’s cloud computing services. Support for Weave, Google’s IoT communications platform that (together with Brillo) makes up Google’s answer to Apple’s HomeKit, is on the roadmap and will come in a later developer preview.

As a Google spokesperson told me, the company sees Android Things as an evolution of Brillo that builds on what Google learned from this earlier project. Google will work with all early access Brillo users to migrate their projects to Android Things.

Google has partnered with a number of hardware manufacturers to offer solutions based on Intel Edison, NXP Pico and the Raspberry Pi 3. One interesting twist here is that Google will also soon enable all the necessary infrastructure to push Google’s operating system updates and security fixes to these devices.

In addition, Google also today announced that a number of new smart device makers are putting their weight behind Weave. Belkin WeMo, LiFX, Honeywell, Wink, TP-Link and First Alert will adopt the protocol to allow their devices to connect to the Google Assistant and other devices, for example. The Weave platform is also getting an update and a new Device SDK with built-in support for light bulbs, smart plugs, switches and thermostats, with support for more device types coming soon. Weave is also getting a management console and easier access to the Google Assistant.

Google’s IoT platforms have long been a jumble of different ideas and protocols that didn’t always catch on (remember Android@Home from 2011?). It looks like the company is now ready to settle on a single, consolidated approach. Nest Weave, a format that was developed by Nest for Nest, is now being folded into the overall Weave platform, too. So instead of lots of competing and overlapping products, there is now one consolidated approach to IoT from Google — at least for the time being.

Featured Image: JOSH EDELSON/Getty Images

Posted on in category News

You Can Now Easily Connect to Your Raspberry Pi From Anywhere In World With VNC Connect

The content below is taken from the original (You Can Now Easily Connect to Your Raspberry Pi From Anywhere In World With VNC Connect), to continue reading please visit the site. Remember to respect the Author & Copyright.

Real VNC is an excellent, easy way to remotely connect to your Raspberry Pi from your home network, but it’s a little confusing for beginners. VNC Connect is a new version that simplifies the process and makes it easy to connect to your Raspberry Pi from outside your network.

Read more…