Review: The RC2014 Micro Single-Board Z80 Retrocomputer

The content below is taken from the original ( Review: The RC2014 Micro Single-Board Z80 Retrocomputer), to continue reading please visit the site. Remember to respect the Author & Copyright.

At the end of August I made the trip to Hebden Bridge to give a talk at OSHCamp 2019, a weekend of interesting stuff in the Yorkshire Dales. Instead of a badge, this event gives each attendee an electronic kit provided by a sponsor, and this year’s one was particularly interesting. The RC2014 Micro is the latest iteration of the RC2014 Z80-based retrocomputer, and it’s a single-board computer that strips the RC2014 down to a bare minimum. Time to spend an evening in the hackerspace assembling it, to take a look!

It’s An SBC, But Not As You Know It!

The kit contents

The kit arrives in a very compact heat-sealed anti-static packet, which on opening reveals the PCB, a piece of foam carrying the integrated circuits, a few passives, and a very simple getting-started and assembly guide. The simplicity of the design is obvious from the chip count: there’s the Z80 itself, a 6850 UART, a 27C512 ROM, a 62256 RAM, a 74HCT04 for clock generation, and a 74HCT32 for address decoding. The quick-start guide is adequate, but there is also a set of more comprehensive online instructions (PDF) available.

I added chip sockets and jumpers to my kit.

Assembly of a through-hole kit is hardly challenging, though this one is about as densely-packed as it’s possible to make a through-hole kit with DIP integrated circuits. As with most through-hole projects, the order you pick is everything: resistors first, then capacitors, reset button and crystal, followed by integrated circuits.

I’m always a bit shy about soldering ICs directly to a circuit board so I supplemented my kit with sockets and jumpers. The jumpers are used to select an FTDI power source and ROM addresses for Grant Searle’s ROM BASIC distribution or Steve Cousins’ SCM 1.0 machine code monitor, and the kit instructions recommended hard-wiring them with cut-off resistor wires. There was no row of pins for the expansion bus because this kit was supplied without the backplane that’s a feature of the larger RC2014 kits, but it did have a set of right-angle pins for an FTDI serial cable.

Your Arduino Doesn’t Have A Development Environment On Board!

Having assembled my RC2014 Micro and given it a visual inspection, it was time to power it up and see whether it worked. Installing the jumper for FTDI power, I attached my serial cable and plugged it into a USB port.

A really nice touch is that the Micro has the colours for the serial cable wires on the reverse side of the PCB, taking away the worry of getting it the wrong way round. A quick screen /dev/ttyUSB0 115200 to get a serial terminal from a bash prompt, hit the reset button, and I was rewarded with a BASIC interpreter. My RC2014 Micro worked first time, and I could straight away give it BASIC commands such as PRINT "Hello World!" and be rewarded with the expected output.

The SCM ROM monitor.

So I’ve built a little Z80 single-board computer, with considerably less work than that required for the fully modular version of the RC2014. Its creator Spencer tells me that the Micro was originally designed as a bargain-basement RC2014 to be bought in bulk for workshops and similar activities; it is very similar to his RC2014 Mini board but omits the provision for a Pi Zero terminal and a few other components. It lacks the extra hardware required for a more comprehensive operating system such as CP/M, so I’m left with about as minimal an 8-bit computer as it’s possible to build using parts available in 2019. My question then is this: what can I do with it?

So. What Can I Do With An 8-bit SBC?

My first computer was a Sinclair ZX81, so how could this small kit, a giveaway at a conference, possibly compare? Although the Sinclair included a black-and-white TV display interface, a tape backup interface, and a keyboard, its core computing power was not too far removed from that of this RC2014 Micro; after all, it’s the same processor chip. It was the platform that introduced a much younger me to computing, and straight away I devoured Sinclair BASIC and then went on to write machine code on it. It became a general-purpose calculation and computing scratchpad for repetitive homework thanks to the ease of BASIC programming, and with my Maplin 8255 I/O port card I was able to use it in the way a modern tech-aware kid might use an Arduino.

The RC2014 Micro is well placed to fill all of those functions as a BASIC and machine code learning platform on which to get down to the hardware in a way you simply can’t on most modern computers. Though an Arduino is a far more sensible choice for hardware interfacing, there is also an RC2014 backplane and I/O board available for the Micro’s expansion bus should you wish to have a go. Will I use it for these things? It’s certainly much more convenient than its full-sized sibling, so it’s quite likely I’ll be getting my hands dirty with a little bit of Z80 code. It’s astounding how much you can forget in 35 years!

The RC2014 Micro can be bought from Spencer’s Tindie store, with substantial bulk discounts for workshop customers. If you want the full retrocomputer experience it’s a good choice, as it provides about as simple a way into Z80 hardware and software as possible. The cost of that simplicity is the lack of non-volatile storage and of the hardware needed to run CP/M, but bear in mind that this is the bottom of the RC2014 range. For comparison you can read our review of the original RC2014, over which we’d say the chief advantage of the Micro is its relative ease of construction.

Free Dynamic DNS based on Cloudflare

The content below is taken from the original ( in /r/ selfhosted), to continue reading please visit the site. Remember to respect the Author & Copyright.

http://bit.ly/3372Yrs

Hate loud office radio as much as my team does? We took matters into our own hands. [rant]

The content below is taken from the original ( in /r/ sysadmin), to continue reading please visit the site. Remember to respect the Author & Copyright.

This is going to sound petty, but when the person in question doesn't listen to multiple requests to resolve the situation, you have to do something about it. Sorry in advance for the rant.

TL;DR I set up a script to monitor SONOS volume across the network and aggressively turn it down when necessary.

A couple of years ago, one of the C-levels, who is in the office maybe once every 2 months, decided it would be a great idea to spend money on a few SONOS speakers dotted around the office. Speaker locations include the main (open-plan) office, as well as the break room. Nobody asked for this; 90% of the office was happy in a nice quiet environment. To make it worse, multiple requests for "Can we get rid of the SONOS" have been met with "No."

There's a few problems with the situation:

  1. We aren't allowed to change the station. It's playing 70s/80s hits, and the average demographic in the office is a 25 y.o. male.
  2. Someone not in my team keeps turning the volume up to a level that is absolutely unacceptable for an open plan office. People are trying to make support calls and just get shit done in general.
  3. No one wants to listen to Marcia Hines on their lunch break – we just want to chat about video games and football.

After coming into the office and hearing ABBA blasting across the room for the 50th time, I'd had enough. Surely someone has written a script that can communicate with the SONOS across the network so I don't have to keep whipping my phone out to set volume levels to something acceptable??

Cue this gem – https://gallery.technet.microsoft.com/SONOS-PowerShell-500c9878

I used this script as a base, and edited it to support the 'GetVolume' command, as well as supporting multiple devices read from an external CSV file.

Each device has a Name, IP, and VolumeLimit. When you run the script with the -auto switch, it polls the devices on the network every 10 seconds, and checks if the device is above the volume level set in the CSV file. If the volume is above the specified maximum, it immediately hits the endpoint to set the volume back down to the limit. I've set a limit for the main office to "just above background noise", and for the break room the limit is zero.
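
To make the approach concrete, here is a minimal sketch of that polling loop (my own illustration, not the poster’s actual script, which is linked below). It assumes the CSV has Name, IP, and VolumeLimit columns as described, and talks to each speaker’s standard UPnP RenderingControl endpoint on port 1400; the CSV path and function name are placeholders.

# Rough sketch: poll each SONOS device's volume over UPnP and clamp it to the CSV limit.
# CSV format assumed: Name,IP,VolumeLimit  (the path below is a placeholder)
$soapTemplate = @'
<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:{0} xmlns:u="urn:schemas-upnp-org:service:RenderingControl:1">
      <InstanceID>0</InstanceID><Channel>Master</Channel>{1}
    </u:{0}>
  </s:Body>
</s:Envelope>
'@

function Invoke-SonosRenderingControl($ip, $action, $extraXml = '') {
    # Every SONOS speaker exposes the RenderingControl service on port 1400
    Invoke-WebRequest -Uri "http://$($ip):1400/MediaRenderer/RenderingControl/Control" `
        -Method Post -ContentType 'text/xml; charset="utf-8"' `
        -Headers @{ SOAPACTION = "`"urn:schemas-upnp-org:service:RenderingControl:1#$action`"" } `
        -Body ($soapTemplate -f $action, $extraXml) -UseBasicParsing
}

$devices = Import-Csv -Path .\sonos-devices.csv

while ($true) {
    foreach ($d in $devices) {
        $resp = Invoke-SonosRenderingControl $d.IP 'GetVolume'
        if ($resp.Content -match '<CurrentVolume>(\d+)</CurrentVolume>') {
            if ([int]$Matches[1] -gt [int]$d.VolumeLimit) {
                # Too loud: push it straight back down to the configured limit
                Invoke-SonosRenderingControl $d.IP 'SetVolume' "<DesiredVolume>$($d.VolumeLimit)</DesiredVolume>" | Out-Null
            }
        }
    }
    Start-Sleep -Seconds 10   # the 10-second poll interval described above
}

Checking GetVolume first means SetVolume is only called when a speaker is actually over its limit, so the devices aren’t hammered with redundant requests.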

The script is working perfectly and has already saved us from bleeding ears a few times this week. Whoever has been turning it up hasn't mentioned that the volume control doesn't seem to work anymore…

If anyone wants the script, happy to chuck it on paste bin or something.

EDIT: script here https://pastebin.com/hmc3QA8z

Leaving Fitbit, Export years of data to Apple Health?

The content below is taken from the original ( in /r/ fitbit), to continue reading please visit the site. Remember to respect the Author & Copyright.

Is there a way to export my Fitbit data to Apple health?

I’ve been recommended Sync Solver, but the description says it only does daily syncs.

I am looking for historical sync then plan to abandon the Fitbit.

Happy Cray Day HPE Completes the Acquisition

The content below is taken from the original ( Happy Cray Day HPE Completes the Acquisition), to continue reading please visit the site. Remember to respect the Author & Copyright.

HPE completed the acquisition of the high-performance computing powerhouse Cray. Now the work begins to deliver on synergies

The post Happy Cray Day HPE Completes the Acquisition appeared first on ServeTheHome.

Datadog Announces General Availability of Support for Alibaba Cloud

The content below is taken from the original ( Datadog Announces General Availability of Support for Alibaba Cloud), to continue reading please visit the site. Remember to respect the Author & Copyright.

Datadog, the monitoring and analytics platform for developers, IT operations teams and business users in the cloud age, today announced the general… Read more at VMblog.com.

Digging into the Back Up Myths

The content below is taken from the original ( Digging into the Back Up Myths), to continue reading please visit the site. Remember to respect the Author & Copyright.


Backup is the foundation of all disaster recovery (DR) plans. If you encounter some type of system or site failure, backups are your final line of defense and can make the difference between being able to restore your IT services and extended downtime. Even though everyone performs backups, there are still a number of myths that persist about the process. Let’s take a closer look at some of the main backup myths.

You only need to back up your data

Some smaller businesses, in particular, make the mistake of thinking that they only need to back up the data that changes, like their files and documents. Smaller businesses often don’t have a lot of excess storage capacity, and reducing what you back up to just your documents and other frequently changing files can be appealing. It reduces the amount of storage required as well as the backup time. However, in the event of a system failure, it drastically increases the recovery time, because you would essentially need to build a replacement system, reinstall the OS and any required applications, and redo the configuration. Having a full system or VM backup enables you to be up and running much faster.

Doing a local backup is all you really need

While performing a local on-premises backup is certainly a requirement, it does not on its own provide you with complete DR protection. It’s true that your local backup will be the one that you use for most restore operations. However, it does not provide protection from site failure. The 3-2-1 rule of backup states that you should have at least three copies of your data, stored on two different types of media, with at least one backup copy kept offsite. The 3-2-1 rule protects you from media failure as well as site failure.

If your backup succeeds you’re protected

Backup success is certainly better than a backup failure. However, just having a successful backup isn’t enough to ensure that you can restore it. There can be a number of reasons that you can’t restore a successful backup. Even if you use backup verification as a part of your backup process, that’s no guarantee that your backups can be restored. For example, a media failure or a device failure could prevent your backups from being restored. A recent Dell EMC study showed that 27 percent of the respondents had cases where they were unable to recover data using their existing data protection solutions. The only way to be sure that your backups can be restored is to actually test the restore process – at least periodically. Some backup products have the ability to automatically test restores for you.

Your backups are safe from ransomware and malware

Backups are certainly one of the best protections from ransomware and other malware attacks. However, just having a backup doesn’t mean that you’re protected against ransomware. Ransomware has evolved, and several strains of today’s ransomware are capable of actually targeting backups. These strains can use worms to move through your networks, and once they are established they can potentially corrupt any local backup. Using air-gapped backups – where there is a physical separation of the backups from your network – is the best way to secure your backups against potential ransomware attacks. It’s also recommended that you use different authentication for your air-gapped backups as an additional layer of security.

The cloud is too slow to use as a backup target

Today, the cloud has become a very viable backup target. However, some people think that the cloud is too slow for the job. The fact is that cloud storage itself can be very fast. However, the overall backup speed is governed by your bandwidth to the cloud, and many times the downstream bandwidth is significantly higher than the upstream bandwidth. This can be adjusted depending on the agreement with your ISP. For higher speed requirements there are direct cloud connections like Azure ExpressRoute and AWS Direct Connect. In addition, if you’re considering using the cloud as a backup target, remember that it’s essential to encrypt the data in transit.

Backing up to tape is outdated

Tapes have been around forever, and they certainly aren’t the most advanced backup technology available these days. However, that doesn’t mean that they can’t be a part of your DR strategy. In fact, because of threats like ransomware, tape backups have made something of a recent comeback. While not as fast as disk or cloud backups, tapes have very high capacity, they are stored offline, and today many businesses are using them for long-term archiving and data protection. Offline tape storage can provide protection from ransomware and can also help meet certain regulatory requirements.

The post Digging into the Back Up Myths appeared first on Petri.

Megabots Pulls the Plug, Puts Eagle Prime on eBay

The content below is taken from the original ( Megabots Pulls the Plug, Puts Eagle Prime on eBay), to continue reading please visit the site. Remember to respect the Author & Copyright.

Megabots had an inspiring run, starting as part of a giant robot arm demo’d at Comic-Con, turning into a number of compelling crowdfunding and viral video campaigns, and leading up to a much anticipated but somewhat disappointing bout between their massive, $2.5 million robot and a Japanese counterpart. Now, despite […]


The post Megabots Pulls the Plug, Puts Eagle Prime on eBay appeared first on Make: DIY Projects and Ideas for Makers.

My Little Bromium: HP Inc inks security deal to slurp micro-VM slinger

The content below is taken from the original ( My Little Bromium: HP Inc inks security deal to slurp micro-VM slinger), to continue reading please visit the site. Remember to respect the Author & Copyright.

Tech runs browsers in sandbox to humiliate malware

Ink seller HP is buying endpoint security company Bromium, which already comes bundled with some HP computers under the Sure Click brand.…

Samsung introduces SSDs it claims will ‘never die’

The content below is taken from the original ( Samsung introduces SSDs it claims will ‘never die’), to continue reading please visit the site. Remember to respect the Author & Copyright.

Solid-state drives (SSDs) operate by writing to cells within the chip, and after so many writes, the cell eventually dies off and can no longer be written to. For that reason, SSDs have more actual capacity than listed. A 1TB drive, for example, has about 1.2TB of capacity, and as chips die off from repeated writes, new ones are brought online to keep the 1TB capacity.

But that’s for gradual wear. Sometimes SSDs just up and die completely and without warning, when a whole chip fails rather than just a few cells. So Samsung is trying to address that with a new generation of SSD memory chips and a technology it calls fail-in-place (FIP).


Free compute instance on Oracle Cloud

The content below is taken from the original ( in /r/ selfhosted), to continue reading please visit the site. Remember to respect the Author & Copyright.

Didn't see this posted, but you can get two free compute instances on Oracle Cloud for an unlimited amount of time.

https://www.oracle.com/cloud/free/

Best way to manage 100-200 Raspberry Pi?

The content below is taken from the original ( in /r/ sysadmin), to continue reading please visit the site. Remember to respect the Author & Copyright.

Hey r/sysadmin,

I am in the process of configuring a fleet of Raspberry Pis and I am not sure how I want to remotely manage them. The Raspberry Pis will run CentOS 7 and will be rolled out to multiple different customers, and will therefore all be in different networks outside of my control.

I have no experience managing that many boxes – could you point me in the right direction? The OS does not have to be CentOS, it just seemed like the most stable OS to use. The Raspberry Pis will run a Node.js app that sends data from connected devices that we sell to our web server, so customers can view it.

Thanks in advance! 🙂

IBM will soon launch a 53-qubit quantum computer

The content below is taken from the original ( IBM will soon launch a 53-qubit quantum computer), to continue reading please visit the site. Remember to respect the Author & Copyright.

IBM continues to push its quantum computing efforts forward and today announced that it will soon make a 53-qubit quantum computer available to clients of its IBM Q Network. The new system, which is scheduled to go online in the middle of next month, will be the largest universal quantum computer available for external use yet.

The new machine will be part of IBM’s new Quantum Computation Center in New York State, which the company also announced today. The new center, which is essentially a data center for IBM’s quantum machines, will also feature five 20-qubit machines, but that number will grow to 14 within the next month. IBM promises a 95 percent service availability for its quantum machines.

IBM notes that the new 53-qubit system introduces a number of new techniques that enable the company to launch larger, more reliable systems for cloud deployments. It features more compact custom electronics for improved scaling and lower error rates, as well as a new processor design.


“Our global momentum has been extraordinary since we put the very first quantum computer on the cloud in 2016, with the goal of moving quantum computing beyond isolated lab experiments that only a handful of organizations could do, into the hands of tens of thousands of users,” said Dario Gil, the director of IBM Research. “The single goal of this passionate community is to achieve what we call Quantum Advantage, producing powerful quantum systems that can ultimately solve real problems facing our clients that are not viable using today’s classical methods alone, and by making even more IBM Quantum systems available we believe that goal is achievable.”

The fact that IBM is now opening this Quantum Computation Center itself is, of course, a pretty good indication of how serious the company is about its quantum efforts. The company’s quantum program also now supports 80 partnerships with commercial clients, academic institutions and research laboratories. Some of these have started to use the available machines to work on real-world problems, though the current state of the art in quantum computing is still not quite ready for solving anything but toy problems and testing basic algorithms.

GymCam Knows Exactly What You’ve Been Doing In The Gym

The content below is taken from the original ( GymCam Knows Exactly What You’ve Been Doing In The Gym), to continue reading please visit the site. Remember to respect the Author & Copyright.

Getting exact statistics on one’s physical activities at the gym is not an easy feat. While most people these days are familiar with, or even regularly use, one of those motion-based trackers on their wrist, there’s a big question as to their accuracy. After all, it’s all based on the motion of just one’s wrist, which as we know leads to amusing results in the tracker app when one does things like waving or clapping one’s hands, and cannot track leg exercises at the gym.

To get around the issue of limited sensor data, researchers at Carnegie Mellon University (Pittsburgh, USA) developed a system based around a camera and machine vision algorithms. While other camera solutions that attempt this suffer from occlusion while trying to track individual people as accurately as possible, this new system instead doesn’t try to track people’s joints, but merely motion at specific exercise machines by looking for repetitive motion in the scene.

The basic concept is that repetitive motion usually indicates forms of exercise, and that no two people at the same type of machine will ever be fully in sync with their motions, so that merely a handful of pixels suffice to track motion at that machine by a single person. This also negates many privacy issues, as the resolution doesn’t have to be high enough to see faces or track joints with any degree of accuracy.

In experiments at the university’s gym, the researchers evaluated the accuracy of their system over 5 days and 42 hours of video. Detecting exercise activities in the scene was 99.6% accurate, disambiguating between simultaneous activities was 84.6% accurate, and recognizing exercise types was 93.6% accurate. Ultimately, repetition counts for specific exercises were within 1.7 counts of the true number.

Maybe an extended version of this would be a flying drone capturing one’s outdoor activities, finally giving that 100% accurate exercise account while jogging?

Thanks to [Qes] for sending this one in!

Most Useful PowerShell Cmdlets for Managing and Securing Active Directory

The content below is taken from the original ( Most Useful PowerShell Cmdlets for Managing and Securing Active Directory), to continue reading please visit the site. Remember to respect the Author & Copyright.


In this article, I show you how to manage and secure Active Directory using PowerShell. I’ll look at the most useful PowerShell cmdlets and give examples of how to use them.

Create New Active Directory Users

The New-ADUser cmdlet is for creating new AD users. You can optionally specify where to create new users with the -Path parameter. In the example below, the new user will be created in the Accounts Organizational Unit (OU). The -Server parameter is also optional. It is used to determine on which domain controller (DC) the new user will be created. Note that you cannot specify a password in plaintext in the -AccountPassword parameter. You must convert it to a secure string using the ConvertTo-SecureString cmdlet.

New-ADUser -DisplayName:"Russell Smith" -GivenName:"Russell" -Name:"Russell Smith" -Path:"OU=Accounts,DC=ad,DC=contoso,DC=com" -SamAccountName:"russellsmith" -Server:"dc1.ad.contoso.com" -Surname:"Smith" -Type:"user" -AccountPassword (ConvertTo-SecureString 'Pas$W0rd!!11' -AsPlainText -Force) -Enabled $true

Create Active Directory Groups

Adding groups to AD is easy with the New-ADGroup cmdlet. The -Server and -Path parameters are both optional.

New-ADGroup -GroupCategory:"Security" -GroupScope:"Global" -Name:"Netwrix" -Path:"OU=Accounts,DC=ad,DC=contoso,DC=com" -SamAccountName:"Netwrix" -Server:"dc1.ad.contoso.com"

Add Users to Groups

Once you have some users and groups in your domain, you can add users to groups with the Add-ADGroupMember cmdlet.

Add-ADGroupMember -Identity Netwrix -Members russellsmith,bob.trent

Create New Organizational Units

Use the New-ADOrganizationalUnit cmdlet to create new Organizational Units (OU) in AD. Note that the -ProtectedFromAccidentalDeletion flag is optional. When set to $true, you can’t delete the OU without first changing the status of the flag to $false.

New-ADOrganizationalUnit -Name:"Sensitive" -Path:"OU=Accounts,DC=ad,DC=contoso,DC=com" -ProtectedFromAccidentalDeletion:$true -Server:"dc1.ad.contoso.com"

Deleting Active Directory Objects

The ‘Remove’ verb is used in AD cmdlets to delete objects. Remove-ADUser and Remove-ADGroup are used respectively to delete users and groups.

Remove-ADUser -Identity russellsmith
Remove-ADGroup -Identity Netwrix

Before you can delete an OU, you need to set the accidental deletion flag to false using Set-ADObject.

Set-ADObject -Identity:"OU=Sensitive,OU=Accounts,DC=ad,DC=contoso,DC=com" -ProtectedFromAccidentalDeletion:$false -Server:"dc1.ad.contoso.com"

Remove-ADOrganizationalUnit -Identity "OU=Sensitive,OU=Accounts,DC=ad,DC=contoso,DC=com"

Import Users from a CSV File

PowerShell makes it easy to automate tasks. In the script below, I use a comma-delimited (CSV) text file to create two users with the Import-Csv and New-ADUser cmdlets. The only trickery involved is splitting the first field of each entry in the text file so that I can separate the first name and surname for the -GivenName and -Surname parameters of the New-ADUser cmdlet.

Import-Csv -Path c:\temp\users.csv | ForEach-Object {

    $givenName = $_.name.split()[0]

    $surname = $_.name.split()[1]

    New-ADUser -Name $_.name -Enabled $true -GivenName $givenName -Surname $surname -AccountPassword (ConvertTo-SecureString $_.password -AsPlainText -Force) -ChangePasswordAtLogon $true -SamAccountName $_.samaccountname -UserPrincipalName ($_.samaccountname+"@ad.contoso.com") -City $_.city -Department $_.department

}

The first line of the text file contains the field names. You can add as many users as you want.

Name,samAccountName,Password,City,Department
Russell Smith,smithrussell,PassW0rd!!11,London,IT
David Jones,jonesdavid,4SHH$$#AAAHh,New York,Accounts

Move AD Objects

The Move-ADObject cmdlet is for moving AD objects. In the example below, I move a user account from the Accounts OU to the Users container.

Move-ADObject -Identity "CN=Russell Smith,OU=Accounts,DC=ad,DC=contoso,DC=com" -TargetPath "CN=Users,DC=ad,DC=contoso,DC=com"

Link a Group Policy Object

While PowerShell can’t be used to edit the settings inside Group Policy Objects (GPOs), it can be used to perform many other tasks related to Group Policy. The New-GPLink cmdlet is used to link existing GPOs to OUs. In the example below, I link a GPO called Firewall Settings to the Accounts OU.

New-GPLink -Name "Firewall Settings" -Target "OU=Accounts,DC=ad,DC=contoso,DC=com" -LinkEnabled Yes -Enforced Yes

Active Directory Reporting

The Get-ADObject cmdlet can be used to filter the directory and display information about objects. In the example below, I use a filter to find the Accounts OU and then pipe the results to the Get-GPInheritance cmdlet. Select-Object is then used to extract information about the GPOs linked to the OU.

Get-ADObject -Filter {name -like "Accounts*"} | Get-GPInheritance | Select-Object -Expand gpolinks | ForEach-Object {Get-GPO -Guid $_.gpoid}

One of the most useful cmdlets for AD admins is the Search-ADAccount cmdlet. In the example below, I search the domain for locked out user accounts and automatically unlock them using Unlock-ADAccount.

Search-ADAccount –LockedOut | Unlock-ADAccount

Get-ADObject can be used with complex filters. Here I list all objects created after the specified date ($Date).

$Date = [Datetime]"02/07/2019"
Get-ADObject -Filter 'WhenCreated -GT $Date'

Filters can get quite complex. In the next command, I list all deleted objects where the change attribute is later than the specified date, and that can be restored, excluding the Deleted Objects container.

Get-ADObject -Filter 'whenChanged -gt $Date -and isDeleted -eq $True -and name -ne "Deleted Objects"' -IncludeDeletedObjects

Finally, I use Get-EventLog to search the event logs on each DC for login event ID 4624. Note the use of Get-ADDomainController to return all the DCs in the domain. Once I’ve retrieved the necessary information, I use Write-Host to write the output to the terminal window, with information separated by tabs to make it easier to read.

$DCs = Get-ADDomainController -Filter *

$startDate = (Get-Date).AddDays(-1)

# Collect logon events (ID 4624) from every DC, accumulating the results
$slogonevents = @()
foreach ($DC in $DCs){
    $slogonevents += Get-EventLog -LogName Security -ComputerName $DC.Hostname -After $startDate | Where-Object {$_.EventID -eq 4624}
}

foreach ($e in $slogonevents){

    # Logon type 2 = interactive (local) logon
    if (($e.EventID -eq 4624) -and ($e.ReplacementStrings[8] -eq 2)){
        Write-Host "Type: Local Logon`tDate: "$e.TimeGenerated "`tStatus: Success`tUser: "$e.ReplacementStrings[5] "`tWorkstation: "$e.ReplacementStrings[11]
    }

    # Logon type 10 = remote interactive (RDP) logon
    if (($e.EventID -eq 4624) -and ($e.ReplacementStrings[8] -eq 10)){
        Write-Host "Type: Remote Logon`tDate: "$e.TimeGenerated "`tStatus: Success`tUser: "$e.ReplacementStrings[5] "`tWorkstation: "$e.ReplacementStrings[11] "`tIP Address: "$e.ReplacementStrings[18]
    }
}

 

The post Most Useful PowerShell Cmdlets for Managing and Securing Active Directory appeared first on Petri.

The mainframe business is alive and well, as IBM announces new z15

The content below is taken from the original ( The mainframe business is alive and well, as IBM announces new z15), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s easy to think about mainframes as some technology dinosaur, but the fact is these machines remain a key component of many large organizations’ computing strategies. Today, IBM announced the latest in their line of mainframe computers, the z15.

For starters, as you would probably expect, these are big and powerful machines capable of handling enormous workloads. For example, this baby can process up to 1 trillion web transactions a day and handle 2.4 million Docker containers, while offering unparalleled security to go with that performance. This includes the ability to encrypt data once, and it stays encrypted, even when it leaves the system, a huge advantage for companies with a hybrid strategy.

Speaking of which, you may recall that IBM bought Red Hat last year for $34 billion. That deal closed in July and the companies have been working to incorporate Red Hat technology across the IBM business including the z line of mainframes.

IBM announced last month that it was making OpenShift, Red Hat’s Kubernetes-based cloud-native tools, available on the mainframe running Linux. This should enable developers who have been working with OpenShift on other systems to move seamlessly to the mainframe without special training.

IBM sees the mainframe as a bridge for hybrid computing environments, offering a highly secure place for data that when combined with Red Hat’s tools, can enable companies to have a single control plane for applications and data wherever it lives.

While it could be tough to justify the cost of these machines in the age of cloud computing, Ray Wang, founder and principal analyst at Constellation Research, says it could be more cost-effective than the cloud for certain customers. “If you are a new customer, and currently in the cloud and develop on Linux, then in the long run the economics are there to be cheaper than public cloud if you have a lot of IO, and need to get to a high degree of encryption and security,” he said.

He added, “The main point is that if you are worried about being held hostage by public cloud vendors on pricing, in the long run the z is a cost-effective and secure option for owning compute power and working in a multi-cloud, hybrid cloud world.”

Companies like airlines and financial services companies continue to use mainframes, and while they need the power these massive machines provide, they need to do so in a more modern context. The z15 is designed to provide that link to the future, while giving these companies the power they need.

Expanded Azure Maps coverage, preview of Azure Maps feedback site, and more

The content below is taken from the original ( Expanded Azure Maps coverage, preview of Azure Maps feedback site, and more), to continue reading please visit the site. Remember to respect the Author & Copyright.

This blog post was co-authored by Ricky Brundritt, Principal Technical Program Manager, Azure Maps.

Azure Maps services continue to expand our support for Microsoft enterprise customers’ needs in Azure. And, we’ve been busy expanding our capabilities. Today we’re announcing Azure Maps is now available in Argentina, India, Morocco, and Pakistan. We have also launched a new Azure Maps data feedback site that is now in preview. In addition, we’re also introducing several enhancements that are available via our Representational state transfer (REST) services and Azure Maps web and Android SDKs.

Here is a run-down of the new features:

Azure Maps is available in new countries and regions

Azure Maps is now available in Argentina, India, Morocco, and Pakistan and these regions require specific consideration for using maps. Azure Maps will now empower our customers to use the appropriate map views in these regions. To learn more about how to request data via our REST services and SDKs for the new regions and countries listed above, please see our Azure Maps localization page.

Introducing preview of Azure Maps data feedback site

To serve map data that is as fresh as possible to our customers, and to provide an easy way to give map data feedback, we’re introducing the Azure Maps data feedback site. The new site empowers our customers to provide direct data feedback, especially on business points of interest and residential addresses. The feedback goes directly to our data providers and their map editors, who can quickly evaluate and incorporate it into our mapping products. To learn how to provide different types of feedback using the Azure Maps feedback site, please see our How-to guide.

Azure Maps Feedback Site

REST service enhancements

Point of interest data updates

When requesting point of interest data, you might want to restrict the results to specific brands. For example, your scenario is to only show gas stations under a specific brand to your end users. To support this, we’ve added the capability to include one or multiple brands in your request to limit the search results. To learn more, please see our How-to Guide article where we share useful tips to call data via Azure Maps search services.

In addition, Azure Maps now returns hours of operation for points of interest like business listings. We return the opening hours for the next week, starting with the current day in the local time of the point of interest. This information can be used to better optimize your planned routes, and for example, show end users store locations that are open during a specific timeframe.
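
As a rough illustration of how a brand-restricted POI query with opening hours might look against the Search POI REST API (the brandSet and openingHours parameter names and the response shape reflect my reading of this announcement, not something confirmed here; check the current API reference before relying on them):

# Hedged sketch: query the Search POI API for branded results, including opening hours.
$key = '<your-Azure-Maps-subscription-key>'
$uri = 'https://atlas.microsoft.com/search/poi/json?api-version=1.0' +
       '&subscription-key=' + $key +
       '&query=gas%20station&lat=47.606&lon=-122.332' +
       '&brandSet=Shell' +
       '&openingHours=nextSevenDays'

# Print each result's name and brand list
(Invoke-RestMethod -Uri $uri).results | ForEach-Object {
    '{0} ({1})' -f $_.poi.name, ($_.poi.brands.name -join ', ')
}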

Sunset and sunrise times

According to a recent report from the Global Alliance for Buildings and Construction, buildings construction and operations account for 36 percent of global final energy use and nearly 40 percent of energy-related carbon dioxide emissions when upstream power generation is considered. To create impact with IoT, help combat climate change, and optimize buildings for energy efficiency, the Get Timezone by Coordinates API now returns sunset and sunrise times for a given coordinate location. Developers can automate device messages in their IoT solutions, for example, by building rules to schedule heating and cooling using sunrise and sunset times combined with telemetry messages from a variety of devices and sensors.
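
A similar sketch for pulling sunrise and sunset times from the Timezone by Coordinates API (the ReferenceTime.Sunrise and ReferenceTime.Sunset field names are an assumption based on this announcement; verify them against the API reference):

# Hedged sketch: fetch sunrise/sunset for a coordinate from the Timezone by Coordinates API.
$key = '<your-Azure-Maps-subscription-key>'
$tz  = Invoke-RestMethod -Uri ('https://atlas.microsoft.com/timezone/byCoordinates/json' +
       '?api-version=1.0&options=all&query=47.606,-122.332&subscription-key=' + $key)

# Sunrise and Sunset are assumed to come back on the reference time of the matched time zone
$tz.TimeZones[0].ReferenceTime | Select-Object Sunrise, Sunset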

Cartography and styling updates

Point of interest data rendering

To provide richer and more informative map data content, we’ve pushed up certain point of interest data so that certain categories appear at higher levels. As a result, airport icons are rendered at zoom levels 10 to 22.


Point of interest icons for important tourist attractions like museums, and railway and metro stations are displayed on zoom levels 12 to 22. In addition, universities, colleges, and schools are shown on zoom levels 13 to 22.


State boundaries and abbreviated state names

To improve usability and give more detailed views, state boundaries are pushed up in the data so that they already appear at zoom level 3. Abbreviated state names are also now shown at lower zoom levels.

Azure Maps State Boundaries Update

Blank map styles in web SDK

Often it is useful to be able to visualize data on top of a blank canvas or to replace the base maps with custom tile layers. With this in mind, the Azure Maps web SDK now supports two new map styles: blank and blank_accessible. The blank map style will not render any base map data, nor will it update the screen reader on where the map is centered. The blank_accessible style will continue to provide screen reader updates with location details of where the map is located, even though the base map is not displayed. Please note, you can change the background color of the web SDK by using the CSS background-color style of the map DIV element.

Web SDK enhancements

The Azure Maps team has made many additions and improvements to the web SDK. Below is a closer look at some of the key improvements.

Cluster aggregates

Clustering of point data based on zoom level can be done to reduce the visual clutter on the map and make it easier to make sense of the data. Often clusters are represented using a symbol with the number of points that are within the cluster; however, sometimes you may want to further customize the style of clusters based on a metric like the total revenue of all points within a cluster. With cluster aggregates, custom properties can be created and populated using an aggregate expression. To learn more, please see our Azure Maps documentation.


Aggregating data in clusters

Image templates

The Azure Maps web SDK uses WebGL for rendering most data on the map. Symbol layers can be used to render points on the map with an image, line layers can have images rendered along them, and polygon layers can be rendered with a fill pattern image. In order to ensure good performance, these images need to be loaded into the map image sprite resource before rendering. The web SDK already provides a couple of marker images in a handful of colors; however, there is an infinite number of color combinations that developers may want to use. With this in mind we have ported the SVG template functionality for HTML markers over to the image sprite and have added 42 image templates, 27 symbol icons, and 15 polygon fill patterns. You can easily define a primary and secondary color as well as a scale for each template when loading it into the map image sprite. These templates can also be used with HTML markers. Check out our documentation and see our Try it now tool to learn more.


Images can be used with HTML markers and various layers within the Azure Maps Web SDK

Additional notable improvements to the web SDK:

  • Accessibility improvements – The team has spent a lot of time improving accessibility in the web SDK and ensuring that every user is able to use the map. A major part of this consisted of leveraging the vector tiles of the base map so that we can provide highly accurate descriptions of what the map is rendering.
  • Limit spinning of the globe – By default the map mimics a globe by allowing the user to infinitely scroll the map west or east. When the user is zoomed out, sometimes the map will render additional copies of the globe to fill in the blank space. This is great for most scenarios, but some developers prefer having a single copy of the globe that doesn’t scroll infinitely. Now this can be configured using the new renderWorldCopies map option.
  • Easily show all map styles in style picker – Up until now, if you wanted to show all map styles in the style picker control you had to list them all in an array in the mapStyles option. Now you simply set this option to “all.”
  • Image overlay georeferencing tools – When georeferencing an image to overlay on the map, sometimes all you have is some reference points (i.e. pixels to positions) which might not be the corners of the image. We added some functions which can be used to correctly georeference the image. We also added tools for reprojecting between pixels and positions relative to the image. For example, if you have an image of a floor plan displayed on the map, you can take any map position and determine its pixel coordinate on the original image and vice versa.
  • New spatial math functions – Several new spatial math functions have been added. One of the new spatial math functions we added will calculate the closest point to a location that falls on the edge of another geometry object. This has a lot of use cases, such as basic snapping of points to lines or simply knowing how far off the path something is.
  • Pitch touch support – You can now pitch the map using touch, with two-finger drag up/down.
  • Popup customizations – Up until now you could only have a popup with a white background and pointer arrow. Now you can set the color of the popup and optionally hide the pointer arrow. Popups can also be made draggable now too!
  • Shape and Data source events – New events for tracking changes to shapes and data sources.

Tile layers in the Android SDK

The Azure Maps team released an Android SDK into preview earlier this year. It is able to render point, line, and polygon data. The team has now added support for rendering tile layers. Tile layers are a great way to visualize large data sets on the map. Not only can a tile layer be generated from an image, but vector data can also be rendered as a tile layer. By rendering vector data as a tile layer, the map control only needs to load the tiles, which can be much smaller in file size than the vector data they represent. This technique is used by many who need to render millions of rows of data on the map.

Azure Maps Tile Layers in the Android SDK

Rendering tile layers within the Azure Maps Android SDK

We want to hear from you!

We are always working to grow and improve the Azure Maps platform and want to hear from you. We’re here to help and want to make sure you get the most out of the Azure Maps platform.

  • Have a feature request? Add it or vote up the request on our feedback site.
  • Having an issue getting your code to work? Have a topic you would like us to cover on the Azure blog? Ask us on the Azure Maps forums.
  • Looking for code samples or wrote a great one you want to share? Join us on GitHub.
  • To learn more, read the Azure Maps documentation.

IBM z15 mainframe, amps-up cloud, security features

The content below is taken from the original ( IBM z15 mainframe, amps-up cloud, security features), to continue reading please visit the site. Remember to respect the Author & Copyright.

IBM has rolled out a new generation of mainframes – the z15 – that not only bolsters the speed and power of the Big Iron but promises to integrate hybrid cloud, data privacy and security controls for modern workloads.

On the hardware side, the z15 mainframe systems ramp up performance and efficiency. For example, IBM claims 14 percent more performance per core, 25 percent more system capacity, 25 percent more memory, and 20 percent more I/O connectivity than the previous iteration, the z14 system.

IBM also says the system can save customers 50 percent of costs over operating x86-based servers and use 40 percent less power than a comparable x86 server farm. And the z15 has the capacity to handle scalable environments such as supporting 2.4 million Docker containers on a single system.


Time for another cuppa then? Tea-drinkers have better brains, say boffins with even better brains

The content below is taken from the original ( Time for another cuppa then? Tea-drinkers have better brains, say boffins with even better brains), to continue reading please visit the site. Remember to respect the Author & Copyright.

Mine’s a pint of oolong, please, love

Researchers from the National University of Singapore have found that drinking tea regularly really is good for you, especially your brain. They say they have also discovered why.…

Microsoft releases its first preview of PowerToys for Windows 10

The content below is taken from the original ( Microsoft releases its first preview of PowerToys for Windows 10), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you’ve been a PC user since the days of Windows 95 and Windows XP, then you may recognize the name PowerToys from a set of Microsoft-developed system utilities. After a few generations on the shelf, the concept has returned and now the first preview…

Satellite connectivity expands reach of Azure ExpressRoute across the globe

The content below is taken from the original ( Satellite connectivity expands reach of Azure ExpressRoute across the globe), to continue reading please visit the site. Remember to respect the Author & Copyright.

Staying connected to access and ingest data in today’s highly distributed application environments is paramount for any enterprise. Many businesses need to operate in and across highly unpredictable and challenging conditions. For example, energy, farming, mining, and shipping often need to operate in remote, rural, or other isolated locations with poor network connectivity.

With the cloud now the de facto and primary target for the bulk of application and infrastructure migrations, access from remote and rural locations becomes even more important. The path to realizing the value of the cloud starts with a hybrid environment that can access resources over dedicated and private connectivity.

Network performance for these hybrid scenarios from rural and remote sites becomes increasingly critical. With globally connected organizations and an explosive number of connected devices and data in the cloud, everything from emerging areas such as autonomous driving to traditional remote locations such as cruise ships is directly affected by connectivity performance. Other examples requiring highly available, fast, and predictable network service include managing supply chain systems from remote farms or transferring data to optimize equipment maintenance in aerospace.

Today, I want to share the progress we have made to help customers address and solve these issues. Satellite connectivity addresses challenges of operating in remote locations.

Microsoft cloud services can be accessed with Azure ExpressRoute using satellite connectivity. With commercial satellite constellations becoming widely available, new solution architectures offer improved and affordable performance for accessing Microsoft.

Infographic of High level architecture of ExpressRoute and satellite integration

Microsoft Azure ExpressRoute, with one of the largest networking ecosystems in the public cloud, now includes satellite connectivity partners, bringing new options and coverage.

SES will provide dedicated, private network connectivity from any vessel, airplane, enterprise, energy or government site in the world to the Microsoft Azure cloud platform via its unique multi-orbit satellite systems. As an ExpressRoute partner, SES will provide global reach and fibre-like high-performance to Azure customers via its complete portfolio of Geostationary Earth Orbit (GEO) satellites, Medium Earth Orbit (MEO) O3b constellation, global gateway network, and core terrestrial network infrastructure around the world.

Intelsat’s customers are the global telecommunications service providers and multinational enterprises that rely on our services to power businesses and communities wherever their needs take them. Now they have a powerful new tool in their solutions toolkit. With the ability to rapidly expand the reach of cloud-based enterprises, accelerate customer adoption of cloud services, and deliver additional resiliency to existing cloud-connected networks, the benefits of cloud services are no longer limited to only a subset of users and geographies. Intelsat is excited to bring our global reach and reliability to this partnership with Microsoft, providing the connectivity that is essential to delivering on the expectations and promises of the cloud.

Viasat, a provider of high-speed, high-quality satellite broadband solutions to businesses and commercial entities around the world, is introducing Direct Cloud Connect service to give customers expanded options for accessing enterprise-grade cloud services. Azure ExpressRoute will be the first cloud service offered to enable customers to optimize their network infrastructure and cloud investments through a secure, dedicated network connection to Azure’s intelligent cloud services.

Microsoft wants to help accelerate scenarios by optimizing the connectivity through Microsoft’s global network, one of the largest and most innovative in the world.

ExpressRoute for satellites directly connects our partners’ ground stations to our global network using a dedicated private link. But what, more specifically, does this mean for our customers?

  • Using satellite connectivity with ExpressRoute provides dedicated and highly available, private access directly to Azure and Azure Government clouds.
  • ExpressRoute provides predictable latency through well-connected ground stations, and, as always, maintains all traffic privately on our network – no traversing of the Internet.
  • Customers and partners can harness Microsoft’s global network to rapidly deliver data to where it’s needed or augment routing to best optimize for their specific need.
  • Satellite and a wide selection of service providers will enable rich solution portfolios for cloud and hybrid networking solutions centered around Azure networking services.
  • With some of the world’s leading broadband satellite providers as partners, customers can select the best solution based on their needs. Each of the partners brings different strengths, for example, choices between Geostationary (GEO), Medium Earth Orbit (MEO) and, in the future, Low Earth Orbit (LEO) satellites, geographical presence, pricing, technology differentiation, bandwidth, and others.
  • ExpressRoute over satellite creates new channels and reach for satellite broadband providers, through a growing base of enterprises, organizations and public sector customers.

With this addition to the ExpressRoute partner ecosystem, Azure customers in industries like aviation, oil and gas, government, peacekeeping, and remote manufacturing can deploy new use cases and projects that increase the value of their cloud investments and strategy.

As always, we are very interested in your feedback and suggestions as we continue to enhance our networking services, so I encourage you to share your experiences and suggestions with us.

You can follow these links to learn more about our partners Intelsat, SES, and Viasat, and learn more about Azure ExpressRoute from our website and our detailed documentation.

Microsoft takes ExpressRoute to orbit to sling Azure services at backwaters via satellite

The content below is taken from the original ( Microsoft takes ExpressRoute to orbit to sling Azure services at backwaters via satellite), to continue reading please visit the site. Remember to respect the Author & Copyright.

Your planet is one of those scheduled for demolition, er, we mean Redmond pals up with SES, Intelsat and Viasat

Microsoft has buddied up with a trio of satellite operators to hook up its Azure cloud to locations lacking connectivity.…

Sony’s headphone app will soon analyze your ears for 360 audio

The content below is taken from the original ( Sony’s headphone app will soon analyze your ears for 360 audio), to continue reading please visit the site. Remember to respect the Author & Copyright.

Back at CES, Sony unveiled 360 Reality Audio, a new standard/format/ecosystem for immersive sound on headphones and speakers. The headphone demo at CES was very technical and quite sensitive as calibrating sound profiles to your ears requir…

Southampton meeting – 10th September

The content below is taken from the original ( Southampton meeting – 10th September), to continue reading please visit the site. Remember to respect the Author & Copyright.

RISC OS users in the Southampton area will have their next opportunity to meet up and discuss the platform on Tuesday, 10th September from 7:00pm until 9:00pm at: The Sports Centre of Itchen College, Deacon Road, Southampton. There is no admission fee, so anyone with an interest is welcome to come along – and is welcome to […]

USB4 gets final approval, offers Ethernet-like speed

The content below is taken from the original ( USB4 gets final approval, offers Ethernet-like speed), to continue reading please visit the site. Remember to respect the Author & Copyright.

The USB Implementers Forum (USB-IF), the industry consortium behind the development of the Universal Serial Bus (USB) specification, announced this week it has finalized the technical specifications for USB4, the next generation of the spec.

One of the most important aspects of USB4 (they have dispensed with the space between the acronym and the version number with this release) is that it merges USB with Thunderbolt 3, an Intel-designed interface that hasn’t really caught on outside of laptops despite its potential. For that reason, Intel gave the Thunderbolt spec to the USB consortium.

Unfortunately, Thunderbolt 3 is listed as an option for USB4 devices, so some will have it and some won’t. This will undoubtedly cause headaches, and hopefully all device makers will include Thunderbolt 3.
