This glass cabin in Iceland lets you sleep under the northern lights

The content below is taken from the original ( This glass cabin in Iceland lets you sleep under the northern lights), to continue reading please visit the site. Remember to respect the Author & Copyright.

Panorama Glass Lodge is a luxury vacation cabin in Hvalfjörðu, Iceland. Situated directly by the sea, it gives visitors stunning views of the Aurora Borealis overhead and reflected off the water below. The structure features an all-glass bedroom, allowing travelers to sleep under one of the world’s most spectacular light shows.

Panorama Glass Lodge located in Hvalfjörðu, Iceland. Image: Panorama Glass Lodge.

This secluded cabin is a perfect viewing destination because it sits far from any light pollution.

Panorama Glass Lodge located in Hvalfjörðu, Iceland. Image: Panorama Glass Lodge.

If that weren’t enough, the cabin also includes a hot tub from which to view the spectacle above.

Panorama Glass Lodge located in Hvalfjörðu, Iceland. Image: Panorama Glass Lodge.

Plan group trips in Skype with help from TripAdvisor and StubHub

The content below is taken from the original ( Plan group trips in Skype with help from TripAdvisor and StubHub), to continue reading please visit the site. Remember to respect the Author & Copyright.

Bringing TripAdvisor into a group chat is pretty easy — just tap the Add to Chat button and select TripAdvisor from the list of available plug-ins. You can choose a destination, then search for restaurants, hotels and activities in the area. Sharing…

Windows 10 on ARM: Everything you need to know about it

The content below is taken from the original ( Windows 10 on ARM: Everything you need to know about it), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft earlier brought the Windows operating system to ARM-based devices with Windows RT, which ran on 32-bit ARM processors. Windows RT was first announced at CES 2011 and was released as a mobile operating system along with Windows 8 in October 2012. It […]

This post Windows 10 on ARM: Everything you need to know about it is from TheWindowsClub.com.

[Sponsored] Overcoming Remote Desktop Challenges with Remote Desktop Manager

The content below is taken from the original ( [Sponsored] Overcoming Remote Desktop Challenges with Remote Desktop Manager), to continue reading please visit the site. Remember to respect the Author & Copyright.

In today’s corporate environment, IT administrators typically need to manage many different remote systems. These systems can be physical machines or VMs, and they often reside locally as well as in remote locations and in the cloud. For Windows IT administrators, Remote Desktop is the primary tool used every day for these remote management tasks. Remote Desktop enables you to start an interactive session with a remote system that has been configured to allow Remote Desktop access. It opens a window on your local system that contains the desktop of the remote system you connect to. Your mouse and keyboard actions are sent to the remote system, and the interactive session allows you to operate and troubleshoot the remote system much as if you were sitting at a local display. This kind of control and interactive display is essential when you’re trying to troubleshoot problems or configure systems remotely.

In this post, you’ll learn about some of the challenges of using Remote Desktop to manage your enterprise servers and then see some of the best ways that you can address these issues. Many companies use Microsoft’s Remote Desktop Connection Manager for their remote Windows management requirements. However, Remote Desktop Connection Manager has several critical limitations in an enterprise desktop environment. You’ll see how you can address these limitations as well as how Devolutions Remote Desktop Manager provides an enterprise-ready feature set to address your remote management requirements.

Remote Desktop Management Challenges

While managing remote desktops is an essential daily task for most IT administrators, it also presents some difficult challenges. Let’s take a closer look at the three main remote desktop management challenges.

  • Managing multiple connections – One of the biggest challenges with Remote Desktop is managing and organizing multiple remote connections. Most administrators in medium and larger companies need to connect to dozens if not hundreds of remote systems, which can be very difficult to manage. Using RDP files enables you to save your connection settings and, optionally, your authentication information. This works great for a few systems but quickly gets messy and potentially confusing when the number of remote connections grows into the dozens or more. Attempting to manually manage connections can result in a lack of standardization, confusion, and potential errors (a short sketch of this manual approach follows this list).
  • Securing your remote connections – The next biggest remote desktop management challenge is properly securing the remote connections. Just like in a traditional desktop environment, passwords are your first line of defense in securing your corporate infrastructure. All accounts with access to Remote Desktop connections need to require strong passwords. To simplify management, some companies attempt to use the same passwords for multiple accounts, or, worse, resort to yellow sticky notes, which can create a huge security exposure. You need to ensure that your remote management network connections, passwords, and credentials are all secure. In addition, when you’re dealing with multiple remote systems and access by many different IT personnel, you need a way of logging access to those systems for auditing and troubleshooting.
  • Connecting to Linux and other heterogeneous hosts – One of the other challenges with remote desktop management is connecting to heterogeneous host systems. Today, very few companies have only Windows systems to manage. Instead, most businesses use a mix of Windows, Linux, Mac, and other non-Windows systems. For most Windows administrators this means they need multiple remote management tools. Remote Desktop is limited to the RDP protocol, which for the most part restricts its use to Windows systems. While some Linux distributions can be managed with RDP, most cannot. This often requires the administrator to incorporate tools like VNC, PuTTY, and Apple Remote Desktop in addition to Windows Remote Desktop.
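To make the first challenge concrete, here is a minimal PowerShell sketch of what managing connections with raw .rdp files looks like. The server names, folder layout, and account below are placeholders, and only a few common .rdp settings are shown; the point is simply that every new connection adds another file that has to be created, secured, and kept in sync by hand.

# Rough illustration of manual .rdp file management (all names below are placeholders)
$Servers = @{
    "Production" = @("prod-web01", "prod-sql01")
    "Staging"    = @("stage-web01")
}
ForEach ($Group in $Servers.Keys) {
    $Folder = "C:\RDP\$Group"
    New-Item -ItemType Directory -Path $Folder -Force | Out-Null
    ForEach ($Server in $Servers[$Group]) {
        # Each connection becomes its own .rdp file that must be maintained by hand
        @(
            "full address:s:$Server"
            "username:s:CONTOSO\admin"
            "screen mode id:i:2"
        ) | Set-Content -Path "$Folder\$Server.rdp"
    }
}
# Opening a saved connection still means hunting down the right file:
# mstsc.exe "C:\RDP\Production\prod-web01.rdp"

Multiply this by hundreds of servers and several administrators, each with their own copies of the files, and the standardization and security problems described above follow quickly.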

There are several different paths that you can take to clear these hurdles in remote desktop management. You can try to manually organize multiple .rdp files into separate folders with different permissions, but this can be extremely cumbersome and difficult for large numbers of connections. Instead, many businesses opt to use Microsoft’s Remote Desktop Connection Manager or a third-party remote desktop manager like Devolutions Remote Desktop Manager to more effectively manage their remote desktop connections. Let’s take a closer look at using Microsoft’s Remote Desktop Connection Manager and Devolutions Remote Desktop Manager to handle your remote connection requirements.

Microsoft Remote Desktop Connection Manager

One tool that many IT administrators use for their remote desktop management needs is Microsoft’s Remote Desktop Connection Manager (RDCMan). RDCMan is a free download that helps you manage multiple remote desktop connections by centralizing all of them under a single management console. RDCMan is supported on Windows 7, Windows 8, Windows 8.1, and Windows 10, as well as Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, and Windows Server 2016. RDCMan is a basic tool whose main purpose is to help you organize remote connections under a single console. You can see an example of Microsoft’s Remote Desktop Connection Manager in Figure 1.

Figure 1 – Microsoft Remote Desktop Connection Manager

As you can see in Figure 1, RDCMan allows you to create groups of different remote systems that you can connect to. Each group is stored in a separate .rdg file, which can be exported and shared with other users. It’s important to realize that these are all separate files, which can make implementing batch changes and sharing them with multiple users very cumbersome. RDCMan also offers the ability to encrypt your stored credentials using certificates. Remote connections can inherit settings from the group they are part of, or you can customize each remote session. By default, the remote display is rendered in the main frame of the RDCMan console, but you also have the option of undocking the remote session. RDCMan is primarily a Windows management tool. It can use RDP to connect to remote Windows sessions, and it can also connect to Hyper-V VM console sessions using VMConnect.

RDCMan is adequate for managing a small number of systems. Unfortunately, it has a number of significant limitations when used in medium, large business, and enterprise scenarios. Some of the main limitations of RDCMan include:

  • No support for Linux and Mac desktops – RDCMan is primarily designed to be a Windows-only management tool, and it doesn’t support the full range of heterogeneous servers found in most businesses today.
  • It is not officially supported by Microsoft – One important thing that many administrators don’t realize is that RDCMan is not an official Microsoft product; it is not supported by Microsoft, nor is it kept current. The last update for RDCMan was in 2014.
  • Manual credential entry – RDCMan requires you to manually enter the credential data for your remote sessions. This can be time-consuming and can result in errors.
  • Remote Desktop only – RDCMan does not provide any other additional networking tools or capabilities.

Devolutions Remote Desktop Manager

Most businesses have remote desktop management needs that go beyond the basic capabilities provided by Microsoft’s free offering. Devolutions Remote Desktop Manager (RDM) provides the ability to manage multiple remote desktop connections. In addition, RDM offers a far more extensive set of tools and enterprise-level management capabilities than Microsoft’s RDCMan. Let’s take a closer look at some of the main remote management capabilities and tools offered by RDM.

First, RDM is supported on almost all of today’s popular Windows desktop and server platforms including: Windows Vista SP2, Windows 7 SP1, Windows 8, Windows 8.1, Windows 10, Windows Server 2008 SP2, Windows Server 2008 R2 SP1, Windows Server 2012, Windows Server 2012 R2, and Windows Server 2016. The latest version also requires the Microsoft .NET Framework 4.6. There are both 32-bit and 64-bit versions of RDM.

RDM provides a modern interface that makes use of a ribbon menu and a tabbed interface for each open remote session. You can see Devolutions Remote Desktop Manager in Figure 2.

Figure 2 – Devolutions Remote Desktop Manager

RDM is divided into a Navigation pane, which you can see on the left side of Figure 2, and the Content Area, which you can see on the right. The Navigation pane contains entries, which can be remote desktop sessions, as in Figure 2, or a number of other entry types such as Credentials, Contacts, Documents, and Macros/Scripts/Tools. You can quickly create multiple entries by right-clicking on an entry and then selecting Duplicate Entry from the context menu. You can optionally change the default view in the Navigation pane from the node-tree style view that you see in Figure 2 to a tiled view or a details view. The Content Area is used to display the different remote desktop sessions as well as the output of the various embedded commands and tools that are part of RDM.

In Figure 2, you can see that the remote desktop session entries are grouped under a data source. Data sources define where the entries are stored, and they can be shared between different users. You can organize all of your different remote session types under different data sources. For instance, you might have a different data source for each remote location or business unit that you manage. RDM provides granular control over each connection, each group of connections, or each data source. By default, all of the open sessions appear in their own tabs in the Content Area. You can work with each data source and session by right-clicking on it in the Navigation pane. If you are using the tabbed display, you can quickly switch between sessions by clicking the desired tab. RDM gives you the option of making your connections appear in the tabbed interface or undocking them like a standard Remote Desktop connection. For multiple monitor support, RDM enables you to create a container window that is separate from the main window, and you can drag and drop open tabs onto the container window.

Group Management

Almost all medium businesses up through the enterprise have multiple people using remote desktop connections, and these users are often separated into different locations or application management teams. To facilitate team access, RDM’s session connection information can be stored in a number of different types of shared data sources. These data sources and their sessions can be shared by multiple team members. RDM supports the following shared data sources:

  • Amazon S3 – Can be shared in read-only mode. Basic support.
  • Devolutions Online Database – Basic support for micro teams (up to 3 users); the Professional and Enterprise editions support larger teams.
  • Devolutions Server – Shared. Supports all features, such as attachments, connection log, Offline Mode, and Security Management.
  • Dropbox – Can be shared in read-only mode.
  • FTP – Uses an XML file that can be shared in read-only mode.
  • Google Drive – Shared. Supports all features, such as attachments, connection log, Offline Mode, and Security Management.
  • MariaDB – Supports all features, such as attachments, connection log, Offline Mode, and Security Management.
  • Microsoft Access – Shared, but not recommended as Microsoft doesn’t support it in the newest versions of Windows.
  • Microsoft SQL Azure – Shared. Supports all features, such as attachments, connection log, Offline Mode, and Security Management.
  • Microsoft SQL Server – The recommended data source for multiple users. Shared. Supports all features, such as attachments, connection log, Offline Mode, and Security Management.
  • MySQL – Shared. Supports all features, such as attachments, connection log, Offline Mode, and Security Management.
  • SFTP – Can be shared in read-only mode. Basic support.
  • SQLite – Shared. Supports all features, such as attachments, connection log, Offline Mode, and Security Management.
  • WebDav – Can be shared in read-only mode. Basic support.

Data sources that are stored in the cloud are typically backed up automatically by the cloud provider. To protect sensitive data in your data sources, you can lock the data source configuration before you deploy it. The offline mode allows you to connect to a local copy of the data source when the live database is unavailable. It can be used when a user is working from a disconnected network or when there is any kind of connectivity issue with the data source. RDM’s batch edit feature enables you to easily change the settings of multiple sessions in one operation. For instance, many companies have a 90-day password change cycle, which can become a problem when you need to regularly change passwords for multiple connections. RDM also allows you to associate keywords/tags with your entries, facilitating easier searches for related entries.

Multiple Remote Host and Connection Types

Today’s IT infrastructures are typically anything but homogeneous. In addition to managing Windows servers, most businesses also need to manage Linux servers and sometimes Mac systems as well. Plus, administrators often need to connect directly to Hyper-V or VMware VMs, as well as use other services like FTP or VPNs. Microsoft’s RDCMan is essentially limited to RDP and cannot connect to a good portion of the systems that today’s IT administrators need to manage. The remote connectivity capabilities provided by Devolutions RDM address the full range of connectivity required by today’s businesses. In addition to Windows and RDP, RDM supports multiple remote protocols, like VNC for Linux connectivity, Apple Remote Desktop, and Citrix ICA, as well as Hyper-V and VMware VMs and other remote control/management products like HP Integrated Lights-Out (iLO) and LogMeIn. RDM enables you to consolidate your remote management, using a single tool to connect to Windows, Linux, and other heterogeneous remote systems. You can see the variety of RDM’s supported remote connections in Figure 3.

Figure 3 – Remote Desktop Manager’s supported remote connections

To create a new remote session, select the desired session type and RDM will prompt you for that session’s specific configuration properties. As you can see in Figure 3, RDM’s wide array of supported remote sessions enables you to address the full range of remote management needs for the enterprise. Some of the most commonly needed remote session types that RDM supports include:

  • Microsoft Remote Desktop (RDP) – For connections to Windows systems
  • VNC – For connections to Linux systems
  • Apple Remote Desktop – For connections to Apple systems
  • Telnet – For connections to various Windows and Linux Telnet hosts
  • FTP, SFTP, SCP & WinSCP – For connections to FTP hosts

Enterprise-level Security

Properly securing your remote desktop connections is essential because of the far-reaching access and administrative capabilities that they provide. RDM provides a number of enterprise-level security features that enable you to secure access to your remote sessions. Passwords are the first level of any security strategy, and RDM provides a number of capabilities that can help you manage remote session passwords. RDM provides centralized remote password management as well as password generation and enforcement of password policies. Centralizing all passwords and enterprise data helps administrators quickly access the information they need while keeping it in one secure location. RDM is able to enforce all of the essential password policies for remote sessions, including:

  • Password history – Determines when an old password can be reused
  • Password age – Determines when a user must change their password
  • Minimum password length – Determines the minimum number of characters required for a password
  • Complexity requirements – Ensures that the password can’t contain the user name and that it must use at least three of the four possible character types: lowercase letters, uppercase letters, numbers, and symbols.
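To make the complexity rule concrete, here is a short PowerShell sketch of that kind of check. It is purely illustrative of the policy described above and is not RDM’s actual implementation; the sample password and user name are made up.

# Illustrative only: password must not contain the user name and must use at least
# three of the four character classes (lowercase, uppercase, numbers, symbols)
function Test-PasswordComplexity {
    param(
        [string]$Password,
        [string]$UserName
    )
    if ($UserName -and ($Password -match [regex]::Escape($UserName))) { return $false }
    $Classes = 0
    if ($Password -cmatch '[a-z]') { $Classes++ }          # lowercase letters
    if ($Password -cmatch '[A-Z]') { $Classes++ }          # uppercase letters
    if ($Password -match '[0-9]') { $Classes++ }           # numbers
    if ($Password -match '[^a-zA-Z0-9]') { $Classes++ }    # symbols
    return ($Classes -ge 3)
}

Test-PasswordComplexity -Password 'Sunny$Day42' -UserName 'jsmith'   # returns True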

Another important security feature that RDM provides is the built-in password analyzer. When you supply passwords for your remote sessions, RDM’s password analyzer automatically evaluates them and tells you whether they are strong or weak. RDM is also able to automatically generate strong, secure passwords. Enabling the Password Audit policy allows you to track all password changes.

To handle remote access security for users with different job responsibilities and remote access requirements, RDM provides a role-based security system that enables flexible, granular protection. For instance, you might want to create different roles and security settings for your administrators, help desk personnel, or consultants. RDM’s role-based security enables security settings to be inherited: child items and folders are automatically covered by a parent folder’s security settings. The specific permissions for a given item can be overridden; you can set permissions on a subfolder or item to override the parent item’s permissions.

RDM also provides several other important remote access security features. First, it has a check-in and check-out feature that enables an administrator to lock down access to a remote session. For instance, if you were performing a long-running maintenance routine and didn’t want to allow any other access to the system, you could check out the session, and other users couldn’t access it until it was checked back in. You can also restrict access to remote sessions based on time. For instance, you might only allow access to some remote sessions during business hours. RDM also supports two-factor authentication, which provides unambiguous identification. This feature is only available for the following data sources: SQLite, Online Database, Devolutions Server, MariaDB, Microsoft Access, SQL Azure, SQL Server, and MySQL.

Logs are another important security feature that RDM provides. RDM logs the usage for all of your different remote sessions and actions. The logs record when sessions are opened and closed along with the duration of the session. They also record when entries are viewed or changed as well as who performed the action.

Remote Management Tools

Effective remote management requires more than just an interactive login to the remote system. In many cases, you need to troubleshoot network connectivity, check the configuration of a remote server, or perform a variety of other management and troubleshooting tasks. In addition to remote desktop management, RDM provides a number of handy network management tools that you can use to manage your remote systems. You can see the collection of remote system management tools provided by RDM in Figure 4.

Figure 4 – Remote Desktop Manager’s remote management toolset

As you can see in Figure 4, the tools provided by RDM’s Tools Dashboard include the ability to connect Computer Management to the remote system, collect inventory information, perform Wake-on-LAN, run ping, continuous ping, traceroute, and netstat, and list the open sessions on the remote system. Running each of these tools displays the results in a new tab in RDM’s Content Area.

PowerShell Scripting

RDM also supports Windows PowerShell scripting which enables administrators to automate RDM management. RDM supplies a PowerShell Module called RemoteDesktopManager.PowerShellModule.dll which is located in the Remote Desktop Manager installation directory. You can use the Import-Module cmdlet to load the module into your PowerShell sessions. The RDM PowerShell module can be used to automate a wide variety of tasks including:

  • Connecting to data sources
  • Creating databases
  • Loading configurations files
  • Assigning credentials to entries
  • Retrieving session properties
  • Changing group folder and session properties
  • Setting custom roles
  • Importing and exporting CSVs
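As a rough illustration, a script along the following lines could load the module, connect to a data source, and list its session entries. The installation path shown is the default one, and the cmdlet names (Get-RDMDataSource, Set-RDMCurrentDataSource, Get-RDMSession) are the commonly documented ones; treat them as assumptions and confirm what your RDM version actually exposes with Get-Command after importing.

# Hedged sketch: verify the path and cmdlet names against your own RDM installation
Import-Module "C:\Program Files (x86)\Devolutions\Remote Desktop Manager\RemoteDesktopManager.PowerShellModule.dll"

# Point the module at a shared data source (the name is a placeholder), then list its entries
$DataSource = Get-RDMDataSource | Where-Object {$_.Name -eq "Head Office SQL"}
Set-RDMCurrentDataSource $DataSource
Get-RDMSession | Select-Object Name, Group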

Enabling Enterprise-Level Remote Desktop Management

Remote desktop management is one of the most important tools used by today’s IT administrators. RDM from Devolutions goes far beyond the basic Windows connectivity offered by Microsoft’s RDCMan. RDM brings enterprise-grade features like connectivity to all the popular server platforms, group management, security and scripting to remote management. RDM lets you centralize all your remote connections, credentials and tools into a single remote management platform that can be securely shared by your administrators and other remote desktop users.

 

The post [Sponsored] Overcoming Remote Desktop Challenges with Remote Desktop Manager appeared first on Petri.

HP’s tiny laser printers are the height of a pencil (updated)

The content below is taken from the original ( HP’s tiny laser printers are the height of a pencil (updated)), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today, HP revealed the LaserJet Pro M15 and M28 series, which are the smallest laser printers in their class. These tiny printers are about the height of a No. 2 pencil, yet are still able to print 18–19 pages per minute. These printers are als…

Screw luxury fridges, you can now run webOS on your Raspberry Pi

The content below is taken from the original ( Screw luxury fridges, you can now run webOS on your Raspberry Pi), to continue reading please visit the site. Remember to respect the Author & Copyright.

Telly software goes open source (again)

The mighty little OS that could is open source again. LG has revealed webOS OSE (Open Source Edition) under an Apache licence and ported it to the Raspberry Pi hardware.…

Google Play Instant lets you try games without having to install them

The content below is taken from the original ( Google Play Instant lets you try games without having to install them), to continue reading please visit the site. Remember to respect the Author & Copyright.

Last year, Google launched Instant Apps, a way for developers to give users a native app experience that didn’t involve having to install anything. Users would simply click on a link on the search results page and the instant app would load. Today, the company is extending this program to games. Thanks to this, you can now see what playing a level or two of Clash Royale, Final Fantasy XV: A New Empire or Panda Pop is like without having to go through the usual install procedure. Instead, you simply head for the Google Play store, find a game that supports this feature, and hit the “Try now” button.

Google Play product managers Jonathan Karmel and Benjamin Frenkel told me that the team learned a lot from the experience with building Instant Apps. For games, though, the team decided to increase the maximum file size from 2 MB to 10 MB, which isn’t really a surprise, given that a game needs a few more graphical assets than your regular to-do list app. In my experience testing this feature, this still allows the games to load quickly enough, though it doesn’t feel quite as instant as most of the regular instant apps do.

The main idea behind this project, Karmel and Frenkel said, is to drive discovery. To do this, the team is adding a new ‘arcade’ tab in the newly redesigned Google Play Games app to highlight the current crop of Instant games and launching an Instant Gameplay collection in the Google Play Store. The main advantage of these Instant games, though, is that users can try the game without having to install anything. As the team noted, every extra step in the install process offers potential players yet another chance to drop off and move on. Indeed, many users actually install a game and then never open it.

Some casual games already take up less than 10 MB and those developers will be able to opt to make their complete game available as a Play Instant app, too.

For now, this project is still a closed beta, though Google plans to open it up to more developers later this year. Some games that currently support Play Instant include Clash Royale, Words with Friends 2, Bubble Witch 3 Saga and Panda Pop, as well as a few other titles from Playtika, Jam City, MZ, and Hothead.

As Karmel and Frenkel told me, their teams are still working on providing developers with better tooling for building these apps and Google is also working with the likes of Unity and the Cocos2D-x teams to make building instant apps easier. For the most part, though, building an Instant Play game means bringing the file size to under 10 MB and adding a few lines to the app’s manifest. That’s probably easier said than done, though, given that you still want players to have an interesting experience.

Unsurprisingly, some developers currently make better use of that limited file size than others. When you try Final Fantasy XV: A New Empire, all you can do is regularly tap on some kind of blue monster and get some gold until the game informs you how much gold you received. That’s it. Over time, though, I’m sure developers will figure out how to best use this feature.

BlackBerry and Microsoft team up to make work phones more secure

The content below is taken from the original ( BlackBerry and Microsoft team up to make work phones more secure), to continue reading please visit the site. Remember to respect the Author & Copyright.

BlackBerry and Microsoft may have been bitter foes before their smartphone dreams came crashing down, but they're becoming close allies now that they're focused on services. The two have unveiled a partnership that helps you seamlessly use Microsoft'…

Bitcoin’s blockchain could become hazardous waste dump

The content below is taken from the original ( Bitcoin’s blockchain could become hazardous waste dump), to continue reading please visit the site. Remember to respect the Author & Copyright.

Boffins warn of legal risks from arbitrary data distribution

Bitcoin’s blockchain can be loaded with sensitive, unlawful or malicious data, raising potential legal problems in most of the world, according to boffins based in Germany.…

Aerones makes really big drones for cleaning turbines and saving lives

The content below is taken from the original ( Aerones makes really big drones for cleaning turbines and saving lives), to continue reading please visit the site. Remember to respect the Author & Copyright.

Enthusiasts will talk your ear off about the potential for drones to take over many of our dirtiest, dullest and most dangerous tasks. But most of the jobs we’ve actually seen drones perform are focused on the camera — from wildlife surveying to monitoring cracks on power plant smokestacks.

Aerones is working on something much larger. The Y Combinator-backed startup is building giant drones with 28 motors and 16 batteries, capable of lifting up to 400 pounds. That kind of payload means the drones can actually perform a broad range of potential tasks to address the aforementioned three Ds.

The company launched two and a half years ago, led by a trio of founders who had already collaborated on a number of projects, including a GPS fleet management system and an electric race car. The team is still lean, with seven employees, most of whom are engineers. The company was bootstrapped with its founders’ money, but has since raised around half a million euros, all told. Founded in Latvia, Aerones has relocated to Mountain View in search of seed money, after signing on with Y Combinator.

And the team already has quite a bit to show for its work, with several videos demonstrating Aerones’ robust system. The drone looks like four quadcopters tethered together — using this configuration, the craft can put out fires, perform search and rescue missions and clean the sides of tall buildings. After over a year of showing off the product’s sheer brute strength in a series of videos, the drones are ready to be put to real-world use.

“Over the last two months, we’ve been very actively talking to wind turbine owners,” CEO Janis Putrams told TechCrunch on a call this week. “We have lots of interest and letters of intent in Texas, Spain, Turkey, South America for wind turbine cleaning. And in places like Canada, the Nordic and Europe for de-icing. If the weather is close to freezing, ice builds up, and they have to stop the turbine.”

For now, the company is testing the system on private property and in countries where regulatory issues don’t prohibit flight. The company plans to start monetizing their drones as part of a cleaning service, rather than selling the product outright to clients. Among other things, the deal allows Aerones to continue to develop the drone hardware and software to allow for a more robust and longer lasting system.

At present, the drones are operating with a tether that keeps them from drifting, while delivering power to the 28 on-board motors. Unplugged, the drones can carry a payload for around 12 minutes. That could be sufficient for a search and rescue mission, but the battery technology will have to improve for the system to perform other extended tasks without being connected directly to a power source.

3D Printed Antenna is Broadband

The content below is taken from the original ( 3D Printed Antenna is Broadband), to continue reading please visit the site. Remember to respect the Author & Copyright.

Antennas are a tricky thing, most of them have a fairly narrow range of frequencies where they work well. But there are a few designs that can be very broadband, such as the discone antenna. If you haven’t seen one before, the antenna looks like — well — a disk and a cone. There are lots of ways to make one, but [mkarliner] used a 3D printer and some aluminum tape to create one and was nice enough to share the plans with the Internet.

As built, the antenna works from 400 MHz and up, so it can cover some ham bands and ADS-B frequencies. The plastic parts act as an anchor and allow for coax routing. In addition, the printed parts can hold a one-inch mast for mounting.

Generally, a discone will have a frequency range ratio of at least 10:1. That means if the lower limit is 400 MHz, you can expect the antenna to work well up to around 4 GHz. The antenna dates back to 1945 when [Armig G. Kandoian] received a patent on the design. If you want to learn more about the theory behind this antenna, you might enjoy the video, below.

You often see high-frequency discones made of solid metal, or — in this case — tape. However, at lower frequencies where the antenna becomes large, it is more common to see the surfaces approximated by wires which reduces cost, weight, and wind loading.

As an example, we looked at an antenna made from garden wire. Perhaps the opposite of a discone is a loop antenna which works only on a very narrow range of frequencies.

RFID Unlock Your PC, Because You’re 1337

The content below is taken from the original ( RFID Unlock Your PC, Because You’re 1337), to continue reading please visit the site. Remember to respect the Author & Copyright.

Ever wanted to feel like one of those movie hackers from the late 90s? Yes, your basement’s full of overclocked Linux rigs and you’ve made sure all your terminal windows are set to green text on a black background, but that’s not always enough. What you need is an RFID tag that unlocks your PC when you touch the reader with your card. Only then may you resume blasting away at your many keyboards in your valiant attempts to hack the mainframe.

[Luke] brings us this build, having wanted an easier way to log in quickly without foregoing basic security. Seeing as an RC522 RFID reader was already on hand, this became the basis for the project. The reader is laced up with a Sparkfun Pro Micro Arduino clone, with both devices serendipitously running on 3.3V, obviating the need for any level shifters. Code is simple, based on the existing Arduino RC522 library. Upon a successful scan of the correct tag, the Arduino acts as a HID keyboard and types the user’s password into the computer along with a carriage return, unlocking the machine. Simple!

Overall, it’s a tidy build that achieves what [Luke] set out to do. It’s something that could be readily replicated with a handful of parts and a day’s work. If you’re interested in the underlying specifics, we’ve discussed turning Arduinos into USB keyboards before.

8 DevOps tools that smoothed our migration from AWS to GCP: Tamr

The content below is taken from the original ( 8 DevOps tools that smoothed our migration from AWS to GCP: Tamr), to continue reading please visit the site. Remember to respect the Author & Copyright.

Editor’s note: If you recently migrated from one cloud provider to another—or are thinking about making the move—you understand the value of avoiding vendor lock-in by using third-party tools. Tamr, a data unification provider, recently made the switch from AWS to Google Cloud Platform, bringing with them a variety of DevOps tools to help with the migration and day-to-day operations. Check out their recommendations for everything from configuration management to storage to user management.

Here at Tamr, we recently migrated from AWS to Google Cloud Platform (GCP), for a wide variety of reasons, including more consistent compute performance, cheaper machines, preemptible machines and better committed usage stories, to name a few. The larger story of our migration itself is worth its own blog post, which will be coming in the future, but today, we’d like to walk through the tools that we used internally that allowed us to make the switch in the first place. Because of these tools, we migrated with no downtime and were able to re-use almost all of the automation/management code we’d developed internally over the past couple of years.

We attribute a big part of our success to having been a DevOps shop for the past few years. When we first built out our DevOps department, we knew that we needed to be as flexible as possible. From day one, we had a set of goals that would drive our decisions as a team, and which technologies we would use. Those goals have proved themselves as they have held up over time, and more recently allowed us to seamlessly migrate our platform from AWS to GCP and Google Compute Engine.

Here were our goals. Some you’ll recognize as common DevOps mantras, others were more specific to our organization:

  • Automate everything, and its corollary, “everything is code”
  • Treat servers as cattle, not pets
  • Scale our DevOps team sublinearly in relation to the number of servers and services we support
  • Don’t be tied into one vendor/cloud ecosystem. Flexibility matters, as we also ship our entire stack and install it on-prem at our customers’ sites

Our first goal was well defined and simple. We wanted all operation tasks to be fully automated. Full stop. Though we would have to build our own tooling in some cases, for the most part there’s a very rich set of open source tools out there that can solve 95% of our automation problems with very little effort. And by defining everything as code, we could easily review each change and version everything in git.

Treating servers as cattle, not pets is core to the DevOps philosophy. Server “pets” have names like postgres-master, and require you to maintain them by hand. That is, you run commands on it via a shell and upgrade settings and packages yourself. Instead, we wanted to focus on primitives like the amount of cores and RAM that our services need to run. We also wanted to kill any server in the cluster at any time without having to notify anyone. This makes doing maintenance much easier and streamlined, as we would be able to do rolling restarts of every server in our fleet. It also ties into our first goal of automating everything.

We also wanted to keep our DevOps team in check. We knew from the get-go that to be successful, we would be running our platform across large fleets of servers. Doing things by hand requires us to hire and train a large number of operators just to run through set runbooks. By automating everything and investing in tooling we can scale the number of systems we maintain without having to hire as many people.

Finally, we didn’t want to get tied into one single vendor cloud ecosystem, for both business reasons—we deploy our stack at customer sites—and because we didn’t want to be held hostage by any one cloud provider. To avoid getting locked into a cloud’s proprietary services, we would have to run most things ourselves on our own set of servers. While you may choose to use equivalent services from their cloud provider, we like the independence of this go-it-alone approach.

Our DevOps toolbox

1. Server/Configuration management: Ansible 

Picking a configuration management system should be the very first thing you do when building out your DevOps toolbox, because you’ll be using it on every server that you have. For configuration management, we chose to use Ansible; it’s one of the simpler tools to get started with, and you can use it on just about any Linux machine.

You can use Ansible in many different ways: as a scripting language, as a parallel ssh client, and as a traditional configuration management tool. We opted to use it as a configuration management tool and set up our code base following Ansible best practices. In addition to the best practices laid out in the documentation, we went one step further and made all of our Ansible code fully idempotent—that is, we expect to be able to run Ansible at any time and, as long as everything is already up to date, for it to not have to make any changes. We also try to make sure that any package upgrades in Ansible have the correct handlers to ensure a zero-downtime deployment.

We were able to use our entire Ansible code base in both the AWS and GCP environments without having to change any of our actual code. The only things that we needed to change were our dynamic inventory scripts, which are just Python scripts that Ansible executes to find the machines in your environment. Ansible playbooks allow you to use multiple of these dynamic inventory scripts simultaneously, allowing us to run Ansible across both clouds at once.

That said, Ansible might not be the right fit for everyone. It can be rather slow for some things and isn’t always ideal in an autoscaling environment, as it’s a push-based system, not pull-based (like Puppet and Chef). Some alternatives to Ansible are the aforementioned Puppet and Chef, as well as Salt. They all solve the same general problem (automatic configuration of servers) but are optimized for specific use cases.

2. Infrastructure configuration: Terraform

When it comes to setting up infrastructure such as VPCs, DNS and load balancers, administrators sometimes set up cloud services by hand, then forget they are there, or how they configured them. (I’m guilty of this myself.) The story goes like this: we need a couple of machines to test an integration with a vendor. The vendor wants shell access to the machines to walk us through problems and requests an isolated environment. A month or two goes by and everything is running smoothly, and it’s time to set up a production environment based on the development environment. Do you remember what you did to set it up? What settings you customized? That is where infrastructure-as-code configuration tools can be a lifesaver.

Terraform allows you to codify the settings and infrastructure in your cloud environments using its domain specific language (DSL). It handles everything for you (cloud integrations, and ordering of operations for creating resources) and allows you to provision resources across multiple cloud platforms. For example, in Terraform, you can create DNS records in Google DNS that reference a resource in AWS. This allows you to easily link resources across multiple environments and provision complex networking environments as code. Most cloud providers have a tool for managing resources as code: AWS has CloudFormation, Google has Cloud Deployment Manager, and Openstack has Heat Orchestration Templates. Terraform effectively acts as a superset of all these tools and provides a universal format across all platforms.

3. Server imaging: Packer 

One of the basic building blocks of a cloud environment is a Virtual Machine (VM) image. In AWS, there’s a marketplace with AMI images for just about anything, but we often needed to install tools onto our servers beyond the basic services included in the AMI. For example, think Threatstack agents that monitor the activity on the server and scan packages on the server for CVEs. As a result, it was often easier to just build our own images. We also build custom images for our customers and need to share them into their various cloud accounts. These images need to be available to different regions, as do our own base images that we use internally as the basis for our VMs. Having a consistent way to build images independent of a specific cloud provider and a region is a huge benefit.

We use Packer, in conjunction with our Ansible code base, to build all of our images. Packer provides the framework to spin up machines, run our Ansible code, and then save a snapshot of the machine into our account. Because Packer is integrated with configuration management tools, it allowed us to define everything in the AMIs as source code. This allows us to easily version images and have confidence that we know exactly what’s in our images. It made reproducing problems that customers had with our images trivial, and allowed us to easily generate changelogs for images.

The bigger benefit that we experienced was that when we switched to Compute Engine, we were able to reuse everything we had in AWS. All we needed to change was a couple of lines in Packer to tell it to use Compute Engine instead of AWS. We didn’t have to change anything to our base images that developers use day-to-day or the base images that we use in our compute clusters.

4. Containers: Docker

When we first started building out our infrastructure at Tamr, we knew that we wanted to use containers as I had used them at my previous company and seen how powerful and useful they can be at scale. Internally we have standardized on Docker as our primary container format. It allows us to build a single shippable artifact for a service that we can run on any Linux system. This gives us portability between Linux operating systems without significant effort. In fact, we’ve been able to Dockerize most of our system dependencies throughout the stack, to simplify bootstrapping from a vanilla Linux system.

5 and 6. Container and service orchestration: Mesos + Marathon

Containers by themselves don’t inherently provide scale or high availability. Docker itself is just a piece of the puzzle. To fully leverage containers, you need something to manage them and provide management hooks. This is where container orchestration comes in. It allows you to link together your containers and use them to build up services in a consistent, fault-tolerant way.

For our stack we use Apache Mesos as the basis of our compute clusters. Mesos is basically a distributed kernel for scheduling tasks on servers. It acts as a broker for requests from frameworks to resources (CPU, memory, disk, GPUs) available on machines in the Mesos cluster. One of the most common frameworks for Mesos is Marathon, which ships as part of Mesosphere’s commercial DC/OS (Data Center Operating System) and is the main interface for launching tasks onto a Mesos cluster. Internally we deploy all of our services and dependencies on top of a custom Mesos cluster. We spent a fair amount of time building our own deployment/packaging tool on top of Marathon for shipping releases and handling deployments. (Down the road we hope to open source this tool, in addition to writing a few blog posts about it.)

The Mesos + Marathon approach for hosting services is so flexible that during our migration from AWS to GCP, we were able to span our primary cluster across both clouds. As a result, we were able to slowly switch services running on the cluster from one cloud to another using Marathon constraints. As we were switching over, we simply spun up more Compute Engine machines and then deprecated machines on the AWS side. After a couple of days, all of our services were running on Compute Engine machines, and off of AWS.

However, if we were building our infrastructure from scratch today, we would heavily consider building on top of Kubernetes rather than Mesos. Kubernetes has come a long way since we started building out our infrastructure, but it just wasn’t ready at the time. I highly recommend Google Kubernetes Engine as a starting point for organizations starting to dip their toes into the container orchestration waters. Even though it’s a managed service, the fact that it’s based on open-source Kubernetes minimizes the risk of cloud lock-in.

7. User management: JumpCloud 

One of the first problems we dealt with in our AWS environment was how to provide ssh access to our servers for our development team. Before we automated server provisioning, developers often created a new root key every time they spun up an instance. We soon consolidated to one shared key. Then we upgraded to running an internal LDAP instance. As the organization grew, managing that LDAP server became a pain—we were definitely treating it as a pet. So we went looking for a hosted LDAP/Active Directory offering, which led us to JumpCloud. After working with them, we ended up using their agent on our servers instead of an LDAP connector, even though they have a hosted LDAP endpoint that we do use for other things. The JumpCloud agent syncs with JumpCloud and provisions users, groups, and ssh keys onto the server automatically for us. JumpCloud also provides a self-service portal for developers to update their ssh keys. This means that we now spend almost no time actually managing access to our servers; it’s all fully automated.

It’s worth noting that access to machines on Compute Engine is completely different than on AWS. With GCP, users can use the gcloud command line interface (CLI) to gain access to a machine. The CLI generates an ssh key, provisions it onto the server, and creates a user account on the machine (for example: `gcloud compute --project "gce-project" ssh --zone "us-east1-b" "my-machine-name"`). In addition, users can upload their ssh key/user pairs in the console, and new machines will have those user accounts set up on launch. In other words, the problem of how to provide ssh access to developers that we ran into on AWS doesn’t exist on Compute Engine.

JumpCloud solved a specific problem with AWS, but provides a portable solution across both GCP and AWS. Using it with GCP works great; however, if you’re 100% on GCP, you don’t need to rely on an additional external service such as JumpCloud to manage your users.

8. Storage: RexRay 

Given that we run a large number of services on top of a Mesos cluster, we needed a way to provide persistent storage to the Docker containers running there. Since we treat servers as cattle, not pets (we expect to be able to kill any one server at any time), using Mesos local persistent storage wasn’t an option for us. We ended up using RexRay as an interface for provisioning/mounting disks into containers. RexRay acts as the bridge on a server between disks and a remote storage provider. Its main interface is a Docker storage driver plugin that can make API calls to a wide variety of sources (AWS, GCP, EMC, Digital Ocean and many more) and mount the provisioned storage into a Docker container. In our case, we were using EBS volumes on AWS and persistent disks on Compute Engine. Because RexRay is implemented as a Docker plugin, the only thing we had to change between the environments was the config file with the Compute Engine vs. AWS settings. We didn’t have to change any of our upstream invocations for disk resources.


DevOps = Freedom 

From the viewpoint of our DevOps team, these tools enabled a smooth migration, without much manual effort. Most things only required updating a couple of config files to be able to talk to Compute Engine APIs. At the top layers in our stack that our developers use, we were able to switch to Compute Engine with no development workflow changes, and zero downtime. Going forward, we see being able to span across and between clouds at will as a competitive advantage, and this would not be possible without the investment we made into our tooling.

Love our list of essential DevOps tools? Hate it? Leave us a note in the comments—we’d love to hear from you. To learn more about Tamr and our data unification service, visit our website.

EU copyright proposal could be massive issue for code sharing sites

The content below is taken from the original ( EU copyright proposal could be massive issue for code sharing sites), to continue reading please visit the site. Remember to respect the Author & Copyright.

GitHub et al would be required to act as copyright police

Code repository GitHub has raised the alarm about a pending European copyright proposal that could force it to implement automated filtering systems – referred to by detractors as “censorship machines” – that would hinder developers working with free and open source software.…

Toshiba shows off an AR headset running on Windows 10

The content below is taken from the original ( Toshiba shows off an AR headset running on Windows 10), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today, Toshiba showed off an AR headset for the enterprise world called the dynaEdge. The unsexy headset is pretty unremarkable in most regards; what distinguishes it is that there’s actually a full-blown Windows 10 Professional PC attached to it. Coming in a $1,899 package, the product essentially weds a Google Glass-like heads-up display from Vuzix and a tethered… Read More

IBM thinks Notes and Domino can rise again

The content below is taken from the original ( IBM thinks Notes and Domino can rise again), to continue reading please visit the site. Remember to respect the Author & Copyright.

But first, the big catchup: adding proper mail, scalability, mobility and JavaScript for devs

IBM and HCL have outlined their plans for the Notes/Domino portfolio that the former offloaded to the latter last year.…

How to get real ROI from your move to the cloud

The content below is taken from the original ( How to get real ROI from your move to the cloud), to continue reading please visit the site. Remember to respect the Author & Copyright.

The ROI of cloud computing is confusing. We were talking opex versus capex years ago, and then noticed that there were agility and time-to-market advantages as well. As cloud value metrics evolve, I’ve noted another key value indicator: the commitment to cloud computing.

If you’re an enterprise that is going to the cloud in fits and starts, you’re not likely to find the value in cloud computing. Indeed, moving to cloud computing that way means that you’re only moving some workloads to a public cloud; others that could be moved won’t be moved. And I mean workloads that are a good fit for the cloud aren’t being moved; there are, of course, always workloads that don’t belong in the cloud.

To read this article in full, please click here

Monitoring the Removal of Office 365 Groups (and Teams)

The content below is taken from the original ( Monitoring the Removal of Office 365 Groups (and Teams)), to continue reading please visit the site. Remember to respect the Author & Copyright.

Group Soft-deleted Recovery

Preventing Office 365 Group Owners Doing the Wrong Thing

I recently received a question from a reader asking if there was any way to prevent the owners of Office 365 Groups from being able to remove groups. The fear is that someone will go ahead and remove a group that holds important information.

The answer is that you can do nothing to prevent an owner from removing a group, including all the resources associated with the group – mailbox, team site, team, plan, notebook, and so on. Office 365 gives tenant administrators the tools to restrict group creation but offers nothing to stop group removal. Owners are all-powerful when it comes to their group.

Different in the Cloud

If you worked with SharePoint on-premises, granting this kind of authority to group owners might seem excessive. In the world of SharePoint on-premises, it’s a big deal to create a site collection and those who have control over site collections tend to be people who know their way around SharePoint permissions and administrative functions.

Things are a lot more democratic in the cloud, at least in this respect. Every Office 365 group (or team) has its own site collection. And every group or team has its own set of owners who have all the control in the world over the site collection, due to the radically simplified membership model used by Office 365 Groups. Remember, there are only two types of permissions – owners and members – and members enjoy the right to access any resource available to the group, up to and including the right to remove content.

Moving from the structured, controlled, and permission-tight on-premises world to Office 365 needs a cultural shift on the part of tenant administrators. New apps like Groups and Teams bring new ways of working that do not sit well with some, but it’s indicative of the transition of applications like SharePoint and Exchange from being the centers of their own on-premises universes to becoming providers of functionality in the cloud. SharePoint Online makes document management functionality available to Office 365 apps; Exchange does likewise for mailbox and calendar functionality. It’s a big step change.

Options to Control Group Deletion

To return to the original question, is there anything that you can do to check deletions of groups and their associated resources? Well, there’s nothing available in the Office 365 Admin Center to address the problem, so you must create your own solution. And because neither the cmdlets used to work with Office 365 Groups nor those used for Teams support the Exchange RBAC model, we cannot make the Remove-UnifiedGroup or Remove-Team cmdlets unavailable to team owners. But here are a few options for you to consider.

First, Paul Cunningham has a potential solution on Practical365.com. In this case, you run PowerShell scripts to track changes made to Office 365 Groups in a tenant, including deleted groups. You can then review the list of deleted groups and decide whether you should recover any of the groups.

Second, you could apply Office 365 classification labels as the default label for the SharePoint document libraries used by groups. When a library has a default label, SharePoint stamps all the existing documents in the library with the label and applies the label to new documents upon creation. This approach will not stop owners removing groups, but it does make sure that you will not lose important documents if you forget to recover a deleted group.
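If you prefer to script the default label rather than set it through the SharePoint UI, here is a minimal sketch using the SharePoint PnP PowerShell module; the site URL, library name, and label name are placeholders.

# Connect to the group's team site and apply a default classification label to its Documents library
Connect-PnPOnline -Url "https://contoso.sharepoint.com/sites/ProjectX" -UseWebLogin
# SyncToItems stamps the label onto documents already in the library as well as new ones
Set-PnPLabel -List "Documents" -Label "Business Important" -SyncToItems $true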

Third, you could write your own solution in PowerShell, exploiting the fact that Office 365 keeps deleted groups in a soft-deleted state for 30 days. All deleted groups go through this stage whether an owner or administrator removes a group through OWA, PowerShell, Teams, Planner, the Exchange Admin Center, the Office 365 Admin Center, or a mobile app. During this time, you can recover a deleted group and restore it to full health.

What you might want to do is create a script to run daily to:

  • Use the Get-AzureADMSDeletedGroup cmdlet to create a list of soft-deleted groups. For example, this command lists all soft-deleted groups in ascending date order, so that the groups approaching the end of their soft-deleted period appear first.
Get-AzureADMSDeletedGroup | Sort DeletedDateTime | Format-Table Id, DisplayName, DeletedDateTime, Description -AutoSize

  • Email the list to tenant administrators for review, potentially highlighting groups due for permanent deletion in the next five days. You could use something like this to focus on groups that need attention (a sketch of the email step follows the script below).
$CheckDate = (Get-Date).AddDays(-25)   # Groups deleted 25 or more days ago have 5 or fewer days left
$Today = (Get-Date)
$Grp = (Get-AzureADMSDeletedGroup | Sort DeletedDateTime | Select Id, DisplayName, DeletedDateTime, Description)
ForEach ($G in $Grp) {
    If ($G.DeletedDateTime -le $CheckDate) {
       $TimeToGo = ($G.DeletedDateTime).AddDays(30) - $Today
       $Line = $G.DisplayName + " is due for permanent removal on " + ($G.DeletedDateTime).AddDays(30) + ". You have " + $TimeToGo.Days + " days and about " + $TimeToGo.Hours + " hours to recover the group."
       Write-Host $Line -ForegroundColor Red
    }
}
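To turn the console output into the daily email for administrators, you could extend the script with something like the following sketch, which reuses $Grp and $CheckDate from above; the addresses and SMTP settings are placeholders and assume your tenant accepts authenticated SMTP submission.

# Build one report line per group that is close to permanent removal
$Report = @()
ForEach ($G in $Grp) {
    If ($G.DeletedDateTime -le $CheckDate) {
       $Report += ($G.DisplayName + " is due for permanent removal on " + ($G.DeletedDateTime).AddDays(30))
    }
}
# Mail the report to the tenant administrators if anything needs attention
If ($Report) {
    Send-MailMessage -From "O365Reports@contoso.com" -To "TenantAdmins@contoso.com" `
      -Subject "Soft-deleted groups awaiting review" -Body ($Report -join "`n") `
      -SmtpServer "smtp.office365.com" -Port 587 -UseSsl -Credential (Get-Credential)
}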

Recovery Can Take Time

Recovery of a soft-deleted group is a matter of running the Restore-AzureADMSDeletedDirectoryObject cmdlet (see this article). Behind the scenes, Office 365 synchronizes details of the restored group to the different workloads to make applications aware that the group is back in business. Synchronization to application directories like EXODS (for Exchange Online) or SPODS (SharePoint Online) is rapid. It takes a little longer for all the resources managed by the workloads to become available.
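As a quick sketch, restoring a specific soft-deleted group only needs the object identifier returned by Get-AzureADMSDeletedGroup; the group name below is a placeholder.

# Find the soft-deleted group by name and bring it back
$Deleted = Get-AzureADMSDeletedGroup | Where-Object {$_.DisplayName -eq "Project X Team"}
Restore-AzureADMSDeletedDirectoryObject -Id $Deleted.Id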

Teams is invariably the last workload to complete, and it can take up to 24 hours following recovery before a restored team appears in clients. There is no obvious reason why Teams should be so slow except that the background processes to reconnect chats and media from the underlying Azure services take their own good time.

Cultural Transitions

This is only one example of how people who expect the cloud to work the same way things worked on-premises are invariably disappointed. Office 365 used to be composed of thinly-disguised versions of on-premises applications. It’s a very different place now, and if you don’t change your thinking and evolve to keep pace with the cloud, you’re not going to be happy.

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros”, the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle.

The post Monitoring the Removal of Office 365 Groups (and Teams) appeared first on Petri.

OneLogin Becomes First Identity-as-a-Service (IDaaS) Provider to Unify Access for On-Premise and Cloud Applications

The content below is taken from the original ( OneLogin Becomes First Identity-as-a-Service (IDaaS) Provider to Unify Access for On-Premise and Cloud Applications), to continue reading please visit the site. Remember to respect the Author & Copyright.

OneLogin, the industry leader in Unified Access Management, today announced OneLogin Access – a new product that for the first time allows companies… Read more at VMblog.com.

Amazon is reportedly bringing Alexa to businesses soon

The content below is taken from the original ( Amazon is reportedly bringing Alexa to businesses soon), to continue reading please visit the site. Remember to respect the Author & Copyright.

We all know that Amazon is constantly looking for ways to grow and expand. That's why it really doesn't come as a surprise that Ina Fried at Axios is reporting that Amazon is getting into the business sector with Alexa. Alexa for Business will be aim…

Digging Into A Couple of the Hybrid Cloud Best Business Practices

The content below is taken from the original ( Digging Into A Couple of the Hybrid Cloud Best Business Practices), to continue reading please visit the site. Remember to respect the Author & Copyright.

The hybrid cloud has been widely adopted by businesses of all sizes, and adoption is expected to continue to grow. According to a study conducted by MarketsandMarkets, a B2B research firm, the hybrid cloud market is expanding at a compound annual growth rate of 22.5%. There’s no doubt that the hybrid cloud can be, and is being, used in lots of different ways. However, following some best practices can allow your business to get more out of the hybrid cloud. Let’s take a closer look at some of the best hybrid cloud practices that businesses are adopting today.

Focus on security

Security is rarely an exciting topic, but in these days of increasing cloud adoption, high-profile exploits, and growing threats like ransomware, security has been pushed squarely to the forefront of most businesses’ IT priorities. Increasing utilization of the hybrid cloud requires a strong focus on security, as the cloud can be used to store sensitive information and potentially provides near-global access to that data.

One of the most important best practices for effective hybrid cloud security is the use of federated identity management, such as Azure AD or AWS Directory Service for Microsoft Active Directory. Federated identity enables you to integrate your on-premise AD with the cloud, streamlining user access with single sign-on capabilities. Utilizing multi-factor authentication can also help boost the security of your cloud resources – especially for connectivity from today’s devices like phones and tablets. Multi-factor authentication adds an extra layer of protection, requiring proof from a trusted device or biometric data on top of your username and password. Microsoft offers the Azure Multi-Factor Authentication (MFA) service and Amazon offers AWS Multi-Factor Authentication to let you implement multi-factor authentication for your cloud resources.
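As an illustration of the Azure side, here is a minimal sketch that enforces MFA for a single account using the MSOnline PowerShell module; the user principal name is a placeholder, and AWS offers equivalent controls through IAM policies.

# Connect to Azure AD and build an MFA requirement object
Connect-MsolService
$mfa = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
$mfa.RelyingParty = "*"
$mfa.State = "Enforced"
# Apply the requirement to the user so every sign-in demands a second factor
Set-MsolUser -UserPrincipalName "jane.doe@contoso.com" -StrongAuthenticationRequirements @($mfa)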

Another important security best practice is protecting the data you store in the cloud. While many businesses don’t encrypt their local data, encryption becomes far more important in the cloud. Data in the cloud can potentially be accessed far more easily than local data sitting behind your firewalls and VLANs, making data encryption in the cloud a necessity.
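For example, if you push data to AWS S3 with the AWS Tools for PowerShell, you can request server-side encryption at upload time; this is only a sketch, and the bucket and file names are placeholders (Azure Storage, for its part, encrypts data at rest by default).

# Upload a file to S3 and ask S3 to encrypt it at rest with AES-256
Write-S3Object -BucketName "contoso-sensitive-data" -File "C:\Exports\finance.xlsx" `
  -Key "exports/finance.xlsx" -ServerSideEncryption AES256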

Ramp up development and testing efficiency

Using the cloud as a development resource is another best practice that many organizations have adopted. The hybrid cloud can be a huge time saver for developers and testers. Today’s development and testing processes often require spinning up and deleting VMs on a regular basis – sometimes many times a day. Taking advantage of the hybrid cloud for development and testing can free administrators from the need to continually allocate and deallocate VMs and other resources for developers. Developers can leverage the cloud’s self-service capabilities to provision their own VMs and other resources in the cloud without needing to involve IT.
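A minimal sketch of that self-service flow with the Azure Az module is shown below; the resource group, VM name, and image are placeholders, and a real deployment would normally be wrapped in a template or pipeline.

# Spin up a short-lived test VM using the simplified New-AzVM syntax
New-AzVM -ResourceGroupName "DevTest" -Name "testvm01" -Image "Win2016Datacenter" `
  -Credential (Get-Credential) -OpenPorts 3389
# ...and remove it again when the test run is finished
Remove-AzVM -ResourceGroupName "DevTest" -Name "testvm01" -Force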

Leveraging the hybrid cloud for help desk support is another important best practice employed by many organizations. Helpdesk personnel often need to replicate, test and then dispose of many different desktops, platforms and environments. The cloud can provide an efficient environment to store and reuse many different end-user images.

Leverage the cloud for on-premise backups

Many businesses have adopted the cloud as an offsite backup location for their on-premise data. Using the cloud as a backup target can solve two of an organization’s biggest storage issues. The first is dealing with the incredibly rapid growth of data: many research firms have pointed out that data is growing at a very rapid rate, with IDC, for instance, estimating that data is doubling every year. Dealing with that data not only requires continually adding more local storage for applications, it also means that additional storage will be required for backups. Leveraging the hybrid cloud for your backups enables you to take advantage of low-cost cloud storage while freeing high-performance local storage for your business-critical applications.

In addition, using the cloud for backups enables you to better meet the 3-2-1 rule of backup protection, improving your ability to recover your critical data in the event of a failure or a malware or ransomware attack. The hybrid cloud provides an additional type of backup media as well as separate, “air-gapped” offsite backup storage. Many restore operations fail due to media errors, and the cloud gives you an additional backup media type to restore from. Cloud backups can also be stored separately and accessed with different security credentials from your online applications. This separation provides an extra level of protection from malware and ransomware attacks.
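As a simple sketch, copying a local backup file to low-cost blob storage with the Az storage cmdlets looks like this; the storage account, key, container, and file paths are placeholders.

# Authenticate to the backup storage account (key shown as a placeholder)
$Ctx = New-AzStorageContext -StorageAccountName "contosobackups" -StorageAccountKey "<storage-account-key>"
# Copy last night's backup to an offsite container
Set-AzStorageBlobContent -File "D:\Backups\SQL-Full-20180314.bak" `
  -Container "offsite-backups" -Blob "sql/SQL-Full-20180314.bak" -Context $Ctx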

The post Digging Into A Couple of the Hybrid Cloud Best Business Practices appeared first on Petri.

Let’s Encrypt updates certificate automation, adds splats

The content below is taken from the original ( Let’s Encrypt updates certificate automation, adds splats), to continue reading please visit the site. Remember to respect the Author & Copyright.

ACME v2 and Wildcard Certificates now live

Let’s Encrypt has updated its certificate automation support and added Wildcard Certificates to its system.…

Raspberry Pi Gets Faster CPU and Better Networking in the New Model 3 B+

The content below is taken from the original ( Raspberry Pi Gets Faster CPU and Better Networking in the New Model 3 B+), to continue reading please visit the site. Remember to respect the Author & Copyright.

While the Raspberry Pi’s birthday (and the traditional release date for the newest and best Pi) was a few weeks ago, Pi Day is a fitting enough date for the introduction of the best Pi to date. The Raspberry Pi 3 Model B+ is the latest from the Raspberry Pi foundation. It’s faster, it has better networking, and most interestingly, the Pi 3 Model B+ comes with modular compliance certification, allowing anyone to put the Pi into a product with vastly reduced compliance testing.

A Small Speed Boost For The CPU, A Huge Leap For The LAN

When the Raspberry Pi was first announced, it was heralded as a legitimate desktop computer, capable of everything from word processing to web browsing, all for less than $40. The first batch of Pis sold like hotcakes, but using this computer as a desktop replacement was a slightly frustrating experience. With the release of the Raspberry Pi 3 in 2016, this changed. The Pi was fast enough and the software was good enough that, yes, this was a capable computer suitable for light web work and even a few computationally expensive tasks. Add onboard wireless, and the Pi 3 Model B was a great computer.

The newest member of the Raspberry Pi family remains a great computer, but don’t expect a truly massive speedup from this upgrade. The processor is still the Broadcom BCM2837 found in the Raspberry Pi 3, a quad-core A53, 64-bit CPU. There is a slight upgrade over the Raspberry Pi 3; thanks to improved power integrity, thermal design, and possibly a metal can over the CPU, the Raspberry Pi 3 Model B+ now runs at 1.4 GHz, instead of the 1.2 GHz of its predecessor.

The most visually striking difference between the old Pi 3 and the Pi 3 Model B+ is the embossed metal shield over the RF guts of the board. This houses the new, dual-band 2.4 and 5GHz wireless LAN, and Bluetooth 4.2/BLE. The Pi 3 used a BCM43438, which only supported 2.4GHz WiFi, whereas the new wireless chipset is significantly more capable and able to work with 5GHz networks.

But that metal shield covering the new wireless chipset isn’t just for decoration. The Raspberry Pi 3 Model B+ comes with modular compliance certification. This allows the Pi 3 Model B+ to be used in products with significantly reduced compliance testing.

Much Better Wired Networking

The new LAN7515 USB and Ethernet controller

While these are welcome changes, this isn’t the biggest reveal for the Pi 3 Model B+. Before the introduction of wireless on the Pi 3, the Ethernet was severely constrained by the LAN9514 USB hub and Ethernet controller. This chip provided the four USB ports and Ethernet to the Pi’s SoC, but networking was limited to 100 Mbps in the best case, and somewhere around 80 Mbps in real-world usage.

The Pi 3 Model B+ changes this by replacing the USB and Ethernet controller with a LAN7515. It’s still a USB 2.0 hub and Ethernet controller, but this one gives the Pi 3 Model B+ Gigabit Ethernet, with real-world throughput of roughly 300 Mbps because the controller still sits on the USB 2.0 bus. It’s a great feature if you’re using a Pi as a home server, or just want to send a lot of data to a Pi over a wired network.

So What’s In The Can?

Being the first Raspberry Pi featuring an RF shield, there is the obvious question of what’s under the can?. The bad news is, removing that RF shield will void any warranty, allow the Pi to spew RF everywhere, and there will be no hope of meeting compliance. The good news is that there are some really cool components under there.

The chip responsible for all the wireless functionality is a CYW43455, a Cypress (formerly Broadcom) part capable of 802.11ac with support for 2.4 and 5 GHz WiFi and Bluetooth 4.2. The Raspberry Pi 3 Model B — last year’s model — featured a BCM43438, which did not include support for 5 GHz radios or Bluetooth 4.2.

It’s a welcome addition, but the real story here is the RF shield that helped secure this board’s modular compliance certification. Now you can use this board in a product and won’t have to pay for the expensive intentional radiator testing required of all new products featuring their own home-spun radios.

Power over Ethernet (PoE) Header

Designing a new Pi hat? Make sure to take these headers into account.

Although I wouldn’t necessarily call it a failing of the latest Pi, there is something you might want to watch out for. The addition of Power over Ethernet (with an add-on hat), may get in the way of other Pi hats.

The PoE header is placed next to the USB ports, right under one of the Pi’s mounting holes and next to the 40-pin header. These pins are the same height as the 40-pin header, and I can easily envision a situation where already existing Pi hats will interfere with the PoE header.

This is also right where the ‘Run’ header was placed in the Pi 3 Model B, and I’m sure there are a few products out there that make mechanical use of this header designed for reset buttons. Is it terribly broken? No, but it will ruin somebody’s day eventually.

While it’s not a Raspberry Pi with SATA or PCIe or whatever people with unrealistic expectations are clamoring for, the Raspberry Pi 3 Model B+ is a capable and desirable upgrade for what is now the most popular computer on the planet for varying definitions of ‘computer’.

IBM partners with Cloudflare to launch new security and DDoS protection features

The content below is taken from the original ( IBM partners with Cloudflare to launch new security and DDoS protection features), to continue reading please visit the site. Remember to respect the Author & Copyright.

Over the course of the last few years, Cloudflare built a global network of data center locations and partnerships to expand its DDoS protection, security tools and website acceleration services. That kind of expertise is hard to beat, so maybe it doesn’t come as a surprise that even a global technology juggernaut like IBM today announced […]

RIP Stephen Hawking: 10 of the Physicist’s Greatest Quotes

The content below is taken from the original ( RIP Stephen Hawking: 10 of the Physicist’s Greatest Quotes), to continue reading please visit the site. Remember to respect the Author & Copyright.

World-renowned British physicist Stephen Hawking has died at age 76. Diagnosed with motor neurone disease at age 21, doctors gave Hawking only a few years to live. But the cosmologist and author, famed […]

The post RIP Stephen Hawking: 10 of the Physicist’s Greatest Quotes appeared first on Geek.com.