Turn Your Motorola Android Phone Into a Raspberry Pi

The content below is taken from the original (Turn Your Motorola Android Phone Into a Raspberry Pi), to continue reading please visit the site. Remember to respect the Author & Copyright.

In the surest sign that hardware hacking is the new hotness, Motorola and Farnell/Element 14 have developed an add-on board and SDK that will let you connect virtually anything to your mobile phone. Motorola is calling it the “Moto Mods” system, and it looks like it’s going to be a dedicated microcontroller that interfaces with the computer inside the phone and provides everything from GPIOs to DSI (video). Naturally, I2C, I2S, SPI, UART, and even two flavors of USB are in the mix.

The official SDK, ahem Mods Development Kit (MDK), is based on the open Greybus protocol stack (part of Google’s Project Ara open phone project) and it’s running on an ARM Cortex-M4F chip. It’s likely to be itself fairly hackable, and even if the suggested US $125 price is probably worth it for the convenience, we suspect that it’ll be replicable with just a few dollars in parts and the right firmware. (Yes, that’s a challenge.)

The initial four adapter boards range from a simple breadboard to a Raspberry-Pi-hat adapter (hence the title). It’s no secret that cell phones now rival the supercomputers of a bygone era, but they’ve always lacked peripheral interfaces. We wish that all of the old smartphones in our junk box had similar capabilities. What do you say? What would you build with a cellphone if you could break out all sorts of useful comms?

Via HackerBoards, and thanks to [Tom] for the tip!

Software-defined storage hits the bargain rack

The content below is taken from the original (Software-defined storage hits the bargain rack), to continue reading please visit the site. Remember to respect the Author & Copyright.

Some small and medium-sized businesses need fast, flexible storage gear as much as large enterprises. The need to quickly spin up new applications, even without a storage specialist on staff, can drive those demands. The gear for doing so is gradually getting more affordable.

On Monday, Hewlett Packard Enterprise extended two of its storage product lines into more affordable territory, in one case adopting an ARM processor to help cut the cost of a system.

HPE says the new systems give smaller organizations a way in on two of the hottest trends in enterprise storage: software-defined storage and flash. The former helps to line up the right storage for each application, even as a company’s demands quickly change, while the latter can give a speed boost to any type of storage arrangement.

To put storage under software control, HPE launched its StoreVirtual arrays in 2014. They are now in about 200,000 deployments worldwide, the company says. StoreVirtual systems can provide shared storage capacity alongside HPE ProLiant servers and hyperconverged appliances, using the company’s Synergy software.

Up to now, typical StoreVirtual systems have been multi-terabyte systems costing tens of thousands of dollars. On Monday, HPE introduced the StoreVirtual 3200 Storage, with capacities starting at 1.2TB and a street price starting at US$6,055.

Hewlett Packard Enterprise introduced the HPE StoreVirtual 3200 storage array on Aug. 15, 2016.

HPE says the 3200 is a way for SMBs to get a foot in the door with software-defined storage, consolidate workloads and gradually migrate to the new type of infrastructure over time. Beyond that base price and configuration, the new system stays cheaper even in a more typical arrangement like a two-node system with 14TB of capacity, HPE says. That system would be less than half the cost of the current StoreVirtual 4000 model with a similar configuration, the company says.

Part of the reason is that the processor at the heart of the 3200 uses an ARM microarchitecture rather than the x86 technology used in most other enterprise data-center gear. The ARM chip, which comes from AppliedMicro, delivered the computing power the company needed at a lower price than an x86 processor, said Brad Parks, HPE’s director of go-to-market strategy for storage. It might be the first of many used in HPE storage gear, though the company is only beginning to explore this approach, he said.

Hewlett Packard Enterprise introduced the HPE MSA 2042 storage array on Aug. 15, 2016.

Also on Monday, HPE introduced the MSA 2042, a new member of its MSA line of arrays that includes 800GB of SSD (solid-state drive) capacity and flash storage software as standard features. The flash can be used as a read cache accelerator or as a read and write performance tier, with automatic tiering software included. That hardware and software have been optional on MSA systems, at an additional cost of about $7,500, Parks said. In the 2042, which is priced starting at $9,877, they are included at no extra cost.

Both the StoreVirtual 3200 and the MSA 2042 are available immediately worldwide.

NVIDIA brings desktop-class graphics to laptops

The content below is taken from the original (NVIDIA brings desktop-class graphics to laptops), to continue reading please visit the site. Remember to respect the Author & Copyright.

With the GeForce GTX 1080, NVIDIA pushed the boundaries of what a $600 graphics card can do. That flagship card was joined by the GTX 1070 and GTX 1060, two lower-power cards based on the same 16nm Pascal architecture at a much more affordable price. Now, it’s bringing mobile versions of those cards that match their desktop counterparts in almost every area — including being VR ready.

That’s not hyperbole. The top-of-the-line 1080M has 2,560 CUDA cores and 8GB of 10Gbps GDDR5x memory. The desktop chip has the same. The only difference is clock speed: it’s set at 1,556MHz, while the desktop version is 1,607MHz. The two do share the same boost clock (1,733MHz) though, and both have access to all the new technology introduced for the Pascal architecture. That means simultaneous multi-projection, VRWorks, Ansel and the rest.

If you want an idea of what those specs translate to in real-world performance, how’s this: when paired with an i7-6700HQ (a quad-core 2.6GHz chip with 3.5GHz turbo), the 1080M hits 126FPS in Mirror’s Edge Catalyst; 147 in Overwatch; 145 in Doom; 130 in Metro Last Light; and 125 in Rise of the Tomb Raider. Those figures are for playing at 1080p with "ultra" settings at 120Hz. NVIDIA is really pushing 120Hz gaming, and many of the first crop of Pascal laptops will have 120Hz G-Sync displays.

4K gaming, too, is more than possible. At 4K with "high" settings the same setup can push 89FPS on Overwatch, 70FPS with Doom, and 62FPS with Metro Last Light (according to NVIDIA). Only Mirror’s Edge Catalyst and Rise of the Tomb Raider fall short of 60FPS, both clocking in at a very playable 52FPS. At the chip’s UK unveil, NVIDIA showed the new Gears of War playing in 4K in real-time, and there were absolutely no visible frame drops. With figures like that, it goes without saying that VR will be no problem for the 1080M. The desktop GTX 980 is the benchmark for both the HTC Vive and Oculus Rift, and the 1080M blows it away. If you’re looking for more performance, the 1080M supports overclocking of course — NVIDIA suggests as high as 300MHz — and you can expect laptops sporting two in an SLI configuration soon.

The major drawback for the 1080M is power. We don’t know its exact TDP yet, but given the near-identical desktop version runs at 180W, you’d imagine it’s got to be at least 150W. NVIDIA has tech that counters that heavy power load when you’re not plugged in, of course. Chief among these is BatteryBoost, which allows you to set a framerate cap (e.g., 30FPS) and downclocks the GPU appropriately to save power — if your card is capable of pushing 147FPS plugged in, that’s going to be a fair amount of power saved. Whatever battery savings are possible, though, they won’t change the fact that the 1080M is only going to slide into big laptops.

That’s fine for those already used to carrying around behemoths on the go, but plenty of gamers prefer something more portable. Enter the 1070M. NVIDIA says this chip will fit into any chassis that currently handles the 980M, which covers a lot of laptops.

Just like the 1080M, the 1070M matches its desktop sibling in many ways. You’ve actually got slightly more in the way of CUDA cores (2,048 vs. the desktop’s 1,920), but again they’re clocked slower (1,442MHz vs. 1,506MHz). Memory is the same — 8GB 8Gbps GDDR5 — and it too benefits from both the Pascal architecture itself and the new software features that come with it.

|                  | GTX 1080   | GTX 1080M  | GTX 1070  | GTX 1070M |
| CUDA cores       | 2,560      | 2,560      | 1,920     | 2,048     |
| Base clock       | 1,607MHz   | 1,556MHz   | 1,506MHz  | 1,442MHz  |
| Boost clock      | 1,733MHz   | 1,733MHz   | 1,683MHz  | 1,645MHz  |
| Memory           | 8GB GDDR5X | 8GB GDDR5X | 8GB GDDR5 | 8GB GDDR5 |
| Memory speed     | 10Gbps     | 10Gbps     | 8Gbps     | 8Gbps     |
| Memory Bandwidth | 320GB/sec  | 320GB/sec  | 256GB/sec | 256GB/sec |

When faced off against the desktop 1070, the 1070M holds its own. In nearly every test we saw, it got within a couple of percent of the desktop card. We’re talking 77FPS in The Witcher 3 (1080p maxed settings, no HairWorks) vs. 79.7FPS on the 1070; 76.2FPS in The Division (1080p ultra) vs. 76.6FPS; and 64.4FPS in Crysis 3 (1080p very high) vs. 66.4FPS. The one outlier was Grand Theft Auto V, which dropped down to 65.3FPS vs. 73.7FPS on the desktop 1070. 4K gaming is a stretch on the desktop 1070, and that carries over here, but this card is more than VR-ready. NVIDIA says that it’ll support factory overclocking on the 1070M soon, so you may see laptops offering a little more grunt "in a couple of months."

Rounding off the lineup is the 1060M, the mobile version of NVIDIA’s $249 "budget" VR-ready card. It’s something of an exception to the rule here. Yes, it offers 1,280 CUDA cores and 6GB 8Gbps GDDR5 memory, which is equal to the desktop 1060. But at the lower end of the range, the fact that they’re clocked lower (1,404MHz vs. 1,506MHz) hurts performance quite a bit more. In side-by-side comparisons, NVIDIA’s benchmarks suggest you’ll get within ten percent or so of the desktop card. That’s not to say that the 1060M is a slouch. For traditional gaming, you’re not going to hit 60FPS at 1080p in every game without thinking about settings, but if you can play it on a desktop GTX 980, it’s probably a safe bet that the 1060M can handle it. That’s insanely impressive when you consider that the 1060M will fit into the same chassis as the 970M — think "ultra portable" gaming laptops.

|                  | GTX 1060M  | GTX 1060  | GTX 980   |
| CUDA cores       | 1,280      | 1,280     | 2,048     |
| Base clock       | 1,404MHz   | 1,506MHz  | 1,126MHz  |
| Boost clock      | 1,670MHz   | 1,708MHz  | 1,216MHz  |
| Memory           | 6GB GDDR5* | 6GB GDDR5 | 4GB GDDR5 |
| Memory speed     | 8Gbps      | 8Gbps     | 7Gbps     |
| Memory Bandwidth | 192GB/sec  | 192GB/sec | 224GB/sec |

*Up to

In reality, the 10-percent gap between the 1060 and the 1060M probably makes it slightly slower than the GTX 980, but the difference is almost negligible. I wasn’t able to push the 1060M too hard on the "VR ready" promise — you can read about the demo and why the 1060M matters in a separate article — but the demo I had was solid. And really, being able to plug an Oculus into something as slim as a Razer Blade was unthinkable a few months ago, so it’s probably best not to complain.

Acer, Alienware, Asus, Clevo, EVGA, HP, Gigabyte, Lenovo, MSI, Origin, Razer, Sager and XMG are just some of the OEMs signed up to make laptops with the new Pascal chips. Many will announce updated and all-new models today, while some might hold off a while. But expect lots of super-powerful, VR-ready gaming laptops very soon.

Microsoft Merges Its Authenticator Apps Into One, Adds One-Button Approval

The content below is taken from the original (Microsoft Merges Its Authenticator Apps Into One, Adds One-Button Approval), to continue reading please visit the site. Remember to respect the Author & Copyright.

Android/iOS: Microsoft has had separate two-factor authenticator apps for its consumer and enterprise users for a while. Now, it’s combining the two into the new Microsoft Authenticator and adding a few new features while it’s at it.

The new combined app will feature one-button approval, one of the more convenient and safe ways to secure your account. If your device has a fingerprint sensor, you can also use that to approve an authentication request. The app is rolling out for consumers as an update to the Azure Authenticator packages below.

Microsoft Authenticator | Google Play Store via Android Police

Microsoft Authenticator | iTunes App Store

How to link Windows 10 license to Microsoft Account

The content below is taken from the original (How to link Windows 10 license to Microsoft Account), to continue reading please visit the site. Remember to respect the Author & Copyright.

HPE Buying Supercomputer Specialist SGI for $275M

The content below is taken from the original (HPE Buying Supercomputer Specialist SGI for $275M), to continue reading please visit the site. Remember to respect the Author & Copyright.

(Bloomberg) — Hewlett Packard Enterprise is buying Silicon Graphics International for about $275 million in cash, adding high-performance computing capabilities that improve data analytics.

HPE expects the deal to be neutral to earnings in the first full year and to add to profit thereafter, the companies said Thursday in a statement. SGI, whose machines helped create advanced computer-generated images for Hollywood movies in the 1990s, brings products that aid customers with computing, data analytics and data management.

“At HPE, we are focused on empowering data-driven organizations,” said Antonio Neri, the company’s executive VP and general manager for the enterprise group, in the statement. Its technology will “complement HPE’s proven data center solutions designed to create business insight.”

CEO Meg Whitman is investing in products that help customers crunch growing reams of data. The high-performance computing industry, an $11 billion market, is expected to grow at a compound annual rate of 6 percent to 8 percent over the next few years, according to industry researcher IDC, HPE said.

SGI’s operating assets were sold to Rackable Systems in 2009 for $42.5 million. Rackable took on SGI as its global name and brand. Shares in SGI surged 28 percent in late trading in New York to $7.65. They had fallen more than 70 percent from a peak in August 2013.

The agreement with HPE will support private and public sector customers, including US federal agencies as well as companies.

The two companies are complementary, said Jorge Titinger, CEO and president of SGI, in the statement. “This combination addresses today’s complex business problems that require applying data analytics and tools to securely process vast amounts of data.”

The deal, whose per-share price is $7.75, is expected to close in the first quarter of HPE’s fiscal year 2017.

See also: HPE Wants to Give Data Center Automation a Fresh Start

Testing PowerShell with Pester

The content below is taken from the original (Testing PowerShell with Pester), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you are an experienced PowerShell user, chances are you have heard of Pester. This is an open source project that Microsoft started shipping as part of Windows 10. I’m not going to try and teach Pester here, although it really isn’t that difficult to pick up. But I wanted to show you some ways to use Pester that you might not have considered.

Pester is designed primarily for software testing. You build a test script to run through different parts of your code and Pester validates it. This is a quick way to verify you haven’t broken something while introducing something new.

A traditional Pester test (Image Credit: Jeff Hicks)
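
If you haven’t seen one before, a traditional test exercises a function and asserts on its output. Here’s a minimal sketch along those lines; Get-Sum, its parameters, and the file layout are hypothetical stand-ins for whatever function you actually want to test.

# Get-Sum.Tests.ps1 -- a minimal, hypothetical example of a traditional Pester test
# assumes the function under test lives in Get-Sum.ps1 in the same folder
. "$PSScriptRoot\Get-Sum.ps1"

Describe "Get-Sum" {

    It "adds two positive numbers" {
        Get-Sum -First 2 -Second 3 | Should Be 5
    }

    It "treats a missing second value as zero" {
        Get-Sum -First 2 | Should Be 2
    }
}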

But there’s no reason we can’t use the Pester logic to test other things, such as the status of a critical server. The centerpiece of Pester is a logical assertion: evaluate some condition, and it should have some expected value. It’s not that difficult to write a test that says “the DNS service should be running.” Here’s a simple Pester test to validate the state of my primary Hyper-V server.

#requires -version 5.0

$computername = "CHI-P50"

Describe $Computername {

It "should have Hyper-V Feature installed" {
    (Get-WindowsFeature -Name Hyper-V -ComputerName $Computername).Installed | Should Be $True
}

It "Hyper-V service should be running" {
    $s = Get-Service -Name vmms -ComputerName $computername
    $s.status | Should Be "running"
}

It "DNS service should be running" {
    $s = Get-Service -Name dns -ComputerName $computername
    $s.status | Should Be "running"
}

It "Should have 25% free space on drive C:" {
    $c = Get-CimInstance -ClassName Win32_LogicalDisk -Filter "deviceid = 'c:'" -ComputerName $computername
    ($c.FreeSpace/$c.size)*100 | Should BeGreaterThan 25
}

It "Should have 10% free space on drive E:" {
    $e = Get-CimInstance -ClassName Win32_LogicalDisk -Filter "deviceid = 'e:'" -ComputerName $computername
    ($e.FreeSpace/$e.size)*100 | Should BeGreaterThan 10
}

}

I could run this as a regular PowerShell script. But I prefer to use the Invoke-Pester cmdlet.

Invoking a Pester test script (Image Credit: Jeff Hicks)

By using Invoke-Pester I can pass the results to the pipeline, output the results to XML, and even specify which tests to run if I’ve named any of my tests. The benefit of using Pester is that you can automate the process of running the tests and taking action should there be any failures.
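
For example, the calls below write NUnit-style XML for a build server and run only a named Describe block; note that on older Pester 3.x builds the XML parameter may be -OutputXml rather than -OutputFile and -OutputFormat.

# write results to NUnit-style XML for a build server or reporting tool
Invoke-Pester -Script C:\scripts\pester-chi-p50.ps1 -OutputFile C:\scripts\chi-p50-results.xml -OutputFormat NUnitXml

# run only the Describe block whose name matches (here the Describe name is the computer name) and keep the results object
$result = Invoke-Pester -Script C:\scripts\pester-chi-p50.ps1 -TestName CHI-P50 -PassThru
$result.FailedCount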

To test this, I’ll modify one of my tests so that it will result in a failure.

A Pester test failure (Image Credit: Jeff Hicks)

The failure is pretty easy to pick out. I also used the -Passthru parameter so you can see what kind of output to expect. I can then automate code like this, which will email me any failures.

# assumes -To, -From and -SmtpServer values are supplied via defaults
# (for example $PSDefaultParameterValues or $PSEmailServer) or added to the command
Invoke-Pester c:\scripts\pester-chi-p50.ps1 -PassThru | 
Select -ExpandProperty TestResult | Where {-not $_.passed} |
foreach {
    Send-MailMessage -Subject "$($_.Describe) Test Failure" -body ($_ | Out-String)
}

Failure email notice (Image Credit: Jeff Hicks)

I think that’s pretty slick. But it gets even better.

Since we’re talking PowerShell, you can use it to dynamically build your Pester tests. Here’s a test file that dynamically generates the same tests but with different expectations per server.

#use pester to validate servers or other infrastructure

<#
 Read in data to test per server. Could be read from an XML file

 Invoke-Pester <this file>
#>

$all = [pscustomobject]@{
    Computername = "CHI-P50"
    Services     = @{Running = "vmms","vmcompute"}, @{Stopped = "RemoteRegistry","Spooler"}
    Features     = @{Installed = "Hyper-V","Containers","Windows-Server-Backup"}, @{NotInstalled = "Direct-Play","Internet-Print-Client"}
    Versions     = @{PowerShell = 5; Windows = 2016}
},
[pscustomobject]@{
    Computername = "CHI-DC04"
    Services     = @{Running = "DNS","ADWS","KDC","NetLogon"}, @{Stopped = "RemoteRegistry","Spooler"}
    Features     = @{Installed = "DNS","AD-Domain-Services","Windows-Server-Backup"}, @{NotInstalled = "SMTP-Server","Internet-Print-Client"}
    Versions     = @{PowerShell = 5; Windows = 2012}
},
[pscustomobject]@{
    Computername = "CHI-HVR2"
    Services     = @{Running = "vmms"}, @{Stopped = "RemoteRegistry"}
    Features     = @{Installed = "Hyper-V","Windows-Server-Backup"}, @{NotInstalled = "SMTP-Server","Internet-Print-Client"}
    Versions     = @{PowerShell = 4; Windows = 2012}
}

foreach ($item in $all) {

    Describe $($item.Computername) -Tags $item.Computername {

        $computername = $($item.Computername)
        $ps = New-PSSession -ComputerName $computername
        $cs = New-CimSession -ComputerName $computername
    
        It "Should be pingable" {
            Test-Connection -ComputerName $computername -Count 2 -Quiet | Should Be $True
        }

        It "Should respond to Test-WSMan" {
            {Test-WSMan -ComputerName $computername -ErrorAction Stop} | Should Not Throw
        }

    Context Features {

        $installed = Get-WindowsFeature -ComputerName $computername | Where Installed

        $features = $($item.Features.Installed)
        foreach ($feature in $features) {

            It "Should have $feature installed" {
                $installed.Name -contains $feature | Should Be $True
            }
        }

        $notFeatures = $($item.Features.NotInstalled)
        foreach ($feature in $notFeatures) {

            It "Should NOT have $feature installed" {
                $installed.Name -contains $feature | Should Be $False
            }
        }
    } #features

    Context Services {

        $stopped = $($item.Services.Stopped)
        $running = $($item.Services.Running)

        # use distinct variable names here so the outer $all array and $item loop variable are not shadowed
        $actualServices = Invoke-Command { Get-Service } -session $ps

        foreach ($svc in $stopped) {

            It "Service $svc should be stopped" {
                $actualServices.where({$_.name -eq $svc}).status | Should Be "Stopped"
            }
        }

        foreach ($svc in $running) {

            It "Service $svc should be running" {
                $actualServices.where({$_.name -eq $svc}).status | Should Be "Running"
            }
        }
    } #services

    Context Versions {
        $winVer = $($item.versions.Windows)
        It "Should be running Windows Server $winVer" {
            (Get-CimInstance win32_operatingsystem -cimSession $cs).Caption | Should BeLike "*$winver*"
        }

        $psver = $($item.versions.powershell)
        It "Should be running PowerShell version $psver" {
            Invoke-Command { $PSVersionTable.psversion.major } -session $ps | Should be $psver
        }
    } #versions

    Context Other {
        It "Security event log should be at least 16MB in size" {
            ($cs | Get-CimInstance -ClassName win32_NTEVentlogFile -filter "LogFileName = 'Security'").FileSize | Should beGreaterThan 16MB
        }
        
        It "Should have C:\Temp folder" {
            Invoke-Command {Test-Path C:\Temp} -session $ps | Should Be $True
        }    
    } #other

    $ps | Remove-PSSession
    $cs | Remove-CimSession

    } #describe

} #foreach

I’ve hard coded the input values, but you could just as easily input them from an external source such as XML.
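
For instance, the hard-coded $all array above could be replaced with a couple of lines that read the same structure from a file; the paths here are placeholders.

# read the per-server expectations from a JSON file that contains the same
# Computername/Services/Features/Versions properties used above
$all = Get-Content -Path C:\Scripts\ServerExpectations.json -Raw | ConvertFrom-Json

# or, if the objects were saved with Export-Clixml, re-hydrate them instead:
# $all = Import-Clixml -Path C:\Scripts\ServerExpectations.xml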

Testing server infrastructure with Pester (Image Credit: Jeff Hicks)

And, of course, I could build a response script to take remedial action on failures; a rough sketch follows below. Depending on what you are testing, if you configured the server with DSC, you have similar testing options, although with Pester I can also check less tangible items like free disk space or free memory.
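
Here is that rough sketch, assuming the Describe-per-computer layout and the "Service <name> should be running" test names used above; the script path is a placeholder.

# re-run the infrastructure tests and collect the results
$results = Invoke-Pester -Script C:\Scripts\pester-infrastructure.ps1 -PassThru

foreach ($failure in ($results.TestResult | Where-Object { -not $_.Passed })) {
    # the Describe name is the computer name and the It name follows the
    # "Service <name> should be running" pattern used in the tests above
    if ($failure.Name -match '^Service (\S+) should be running$') {
        $svcName = $Matches[1]
        Invoke-Command -ComputerName $failure.Describe -ScriptBlock {
            param($name) Start-Service -Name $name
        } -ArgumentList $svcName
    }
}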

Or, here’s a simple Pester file for testing Active Directory and my domain controllers.

#requires -version 5.0
#requires -Module ActiveDirectory, DNSClient


<#

Use Pester to test Active Directory

Last updated: July 5, 2016

#>


$myDomain = Get-ADDomain
$DomainControllers = $myDomain.ReplicaDirectoryServers
$GlobalCatalogServers = (Get-ADForest).GlobalCatalogs

Write-Host "Testing Domain $($myDomain.Name)" -ForegroundColor Cyan
Foreach ($DC in $DomainControllers) {

    Describe $DC {

        Context Network {
            It "Should respond to a ping" {
                Test-Connection -ComputerName $DC -Count 2 -Quiet | Should Be $True
            }

            #ports
            $ports = 53,389,445,5985,9389
            foreach ($port in $ports) {
                It "Port $port should be open" {
                #timeout is 2 seconds
                [system.net.sockets.tcpclient]::new().ConnectAsync($DC,$port).Wait(2000) | Should Be $True
                }
            }

            #test for GC if necessary
            if ($GlobalCatalogServers -contains $DC) {
                It "Should be a global catalog server" {
                    [system.net.sockets.tcpclient]::new().ConnectAsync($DC,3268).Wait(2000) | Should Be $True
                }
            }
            
            #DNS name should resolve to same number of domain controllers
            It "should resolve the domain name" {
             (Resolve-DnsName -Name globomantics.local -DnsOnly -NoHostsFile | Measure-Object).Count | Should Be $DomainControllers.count
            }
        } #context
    
        Context Services {
            $services = "ADWS","DNS","Netlogon","KDC"
            foreach ($service in $services) {
                It "$Service service should be running" {
                    (Get-Service -Name $Service -ComputerName $DC).Status | Should Be 'Running'
                }
            }

        } #services

        Context Disk {
            $disk = Get-WmiObject -Class Win32_logicaldisk -filter "DeviceID='c:'" -ComputerName $DC
            It "Should have at least 20% free space on C:" {
                ($disk.freespace/$disk.size)*100 | Should BeGreaterThan 20
            }
            $log = Get-WmiObject -Class win32_nteventlogfile -filter "logfilename = 'security'" -ComputerName $DC
            It "Should have at least 10% free space in Security log" {
                ($log.filesize/$log.maxfilesize)*100 | Should BeLessThan 90
            }
        }
    } #describe

} #foreach

Describe "Active Directory" {

    It "Domain Admins should have 5 members" {
        (Get-ADGroupMember -Identity "Domain Admins" | Measure-Object).Count | Should Be 5
    }
    
    It "Enterprise Admins should have 1 member" {
        (Get-ADGroupMember -Identity "Enterprise Admins" | Measure-Object).Count | Should Be 1
    }

    It "The Administrator account should be enabled" {
        (Get-ADUser -Identity Administrator).Enabled | Should Be $True
    }

    It "The PDC emulator should be $($myDomain.PDCEmulator)" {
      (Get-WMIObject -Class Win32_ComputerSystem -ComputerName $myDomain.PDCEmulator).Roles -contains "Primary_Domain_Controller" | Should Be $True
    }
}

Testing Active Directory with Pester (Image Credit: Jeff Hicks)

As you can see, I have some disk space issues to sort out.

Pester is a skill I think every PowerShell professional needs to begin developing. Start with simple tests for your modules. Once you gain a better understanding of how to construct effective tests, you’ll realize there are many things you can test and your investment in learning PowerShell continues to pay off.

The post Testing PowerShell with Pester appeared first on Petri.

A computer program that can replicate your handwriting

The content below is taken from the original (A computer program that can replicate your handwriting), to continue reading please visit the site. Remember to respect the Author & Copyright.

Handwriting is a skill that feels personal and unique to all of us. Everyone has a slightly different style — a weird quirk or a seemingly illegible scrawl — that’s nearly impossible for a computer to replicate, especially as our own penmanship fluctuates from one line to the next. A team at University College London (UCL) is getting pretty close, however, with a new system it’s calling "My Text in Your Handwriting." A custom algorithm is able to scan what you’ve written on a piece of paper and then reproduce your style, to an impressive degree, using whatever words you wish.

To capture your scrawl, the team will ask you to write on four A4-sized sheets of paper (as little as one paragraph can deliver passable results, however). The text is then scanned and converted into a thin, skeletal line. It’s broken down by a computer and a human moderator, assigning letters and their position within a word. They also look for "splits," where the line changes from a letter into a "ligature" — the extra bits you need for joined-up handwriting. Finally, there are "links," which indicate that two separate marks are part of the same letter, for instance when crossing a "t."

The algorithm then works to replicate your handwriting style by referencing and adapting your previously scanned examples. You will have written the same letter on a number of different occasions, so the computer will look for the one that works best for the word or phrase it’s trying to sketch out. A degree of randomness is then applied to ensure that the same letters and combinations aren’t used more than once (an easy way for humans to figure out if a computer has written something).

Once your written examples or "glyphs" have been selected, the computer will figure out the appropriate spacing in between each letter. The height of each character and where it sits on the line is also taken into consideration. Finally, the "ligatures" are added to the computer-generated piece, along with some basic texturing to mimic the pen and ink you were using.

The results are fairly believable. As an experiment, the team asked a group to decide which envelopes — all seemingly handwritten — were produced by a computer. They chose incorrectly 40 percent of the time.

"Up until now, the only way to produce computer-generated text that resembles a specific person’s handwriting would be to use a relevant font," Dr Oisin Mac Aodha, a member of the UCL team said. "The problem with such fonts is that it is often clear that the text has not been penned by hand, which loses the character and personal touch of a handwritten piece of text. What we’ve developed removes this problem and so could be used in a wide variety of commercial and personal circumstances."

The ability to scan and interpret handwriting isn’t new — plenty of apps let you sketch with a stylus or finger, and then convert this into text. Similarly, it’s possible for software to reproduce digital text in a variety of seemingly human, handwritten styles. But the ability to reproduce your personal penmanship — with words and sentences you might not have shown the computer — is unprecedented. It could be used to help elderly people who are starting to lose their writing ability, or translate handwritten text into new languages while keeping the personality of the author.

If you’re wondering if this sort of technology could be used to forge signatures and documents, the answer is yes, it’s possible. The team at UCL has stressed, however, that their system works both ways, meaning it could be used by law enforcers to spot computer-aided forgeries too. Still, it’s best to be wary the next time someone tries to sell you an autograph.

Via: BBC

Source: UCL, My Text In Your Handwriting (Paper)

Microsoft to drop Azure RemoteApp in favor of Citrix virtualization technologies

The content below is taken from the original (Microsoft to drop Azure RemoteApp in favor of Citrix virtualization technologies), to continue reading please visit the site. Remember to respect the Author & Copyright.

http://zd.net/2bpgjWs

HP Enterprise bought SGI. RIP SGI

The content below is taken from the original (HP Enterprise bought SGI. RIP SGI), to continue reading please visit the site. Remember to respect the Author & Copyright.

http://bit.ly/2bpgyAN

New – AWS Application Load Balancer

The content below is taken from the original (New – AWS Application Load Balancer), to continue reading please visit the site. Remember to respect the Author & Copyright.

We launched Elastic Load Balancing (ELB) for AWS in the spring of 2009 (see New Features for Amazon EC2: Elastic Load Balancing, Auto Scaling, and Amazon CloudWatch to see just how far AWS has come since then). Elastic Load Balancing has become a key architectural component for many AWS-powered applications. In conjunction with Auto Scaling, Elastic Load Balancing greatly simplifies the task of building applications that scale up and down while maintaining high availability.

On the Level
Per the well-known OSI model, load balancers generally run at Layer 4 (transport) or Layer 7 (application).

A Layer 4 load balancer works at the network protocol level and does not look inside of the actual network packets, remaining unaware of the specifics of HTTP and HTTPS. In other words, it balances the load without necessarily knowing a whole lot about it.

A Layer 7 load balancer is more sophisticated and more powerful. It inspects packets, has access to HTTP and HTTPS headers, and (armed with more information) can do a more intelligent job of spreading the load out to the target.

Application Load Balancing for AWS
Today we are launching a new Application Load Balancer option for ELB. This option runs at Layer 7 and supports a number of advanced features. The original option (now called a Classic Load Balancer) is still available to you and continues to offer Layer 4 and Layer 7 functionality.

Application Load Balancers support content-based routing and support applications that run in containers. They support a pair of industry-standard protocols (WebSocket and HTTP/2) and also provide additional visibility into the health of the target instances and containers. Web sites and mobile apps, running in containers or on EC2 instances, will benefit from the use of Application Load Balancers.

Let’s take a closer look at each of these features and then create a new Application Load Balancer of our very own!

Content-Based Routing
An Application Load Balancer has access to HTTP headers and allows you to route requests to different backend services accordingly. For example, you might want to send requests that include /api in the URL path to one group of servers (we call these target groups) and requests that include /mobile to another. Routing requests in this fashion allows you to build applications that are composed of multiple microservices that can run and be scaled independently.

As you will see in a moment, each Application Load Balancer allows you to define up to 10 URL-based rules to route requests to target groups. Over time, we plan to give you access to other routing methods.
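
As a rough sketch of that idea outside the console (every name, ID, and ARN below is a placeholder, and it assumes the AWS CLI is installed and configured), creating a target group for an API service and routing /api requests to it looks something like this from a PowerShell prompt:

# create a target group for the API microservice (placeholder name and VPC ID)
aws elbv2 create-target-group --name api-targets --protocol HTTP --port 80 --vpc-id vpc-0123456789abcdef0

# add a rule to an existing listener: requests whose path matches /api/* are forwarded to that group
aws elbv2 create-rule `
    --listener-arn arn:aws:elasticloadbalancing:us-west-2:111122223333:listener/app/MyALB/0123456789abcdef/0123456789abcdef `
    --priority 10 `
    --conditions "Field=path-pattern,Values=/api/*" `
    --actions "Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-west-2:111122223333:targetgroup/api-targets/0123456789abcdef"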

Support for Container-Based Applications
Many AWS customers are packaging up their microservices into containers and hosting them on Amazon EC2 Container Service. This allows a single EC2 instance to run one or more services, but can present some interesting challenges for traditional load balancing with respect to port mapping and health checks.

The Application Load Balancer understands and supports container-based applications. It allows one instance to host several containers that listen on multiple ports behind the same target group and also performs fine-grained, port-level health checks.

Better Metrics
Application Load Balancers can perform and report on health checks on a per-port basis. The health checks can specify a range of acceptable HTTP responses, and are accompanied by detailed error codes.

As a byproduct of the content-based routing, you also have the opportunity to collect metrics on each of your microservices. This is a really nice side effect: each of the microservices can run in its own target group, on a specific set of EC2 instances. This increased visibility will allow you to do a better job of scaling up and down in response to the load on individual services.

The Application Load Balancer provides several new CloudWatch metrics including overall traffic (in GB), number of active connections, and the connection rate per hour.
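
If you want to pull those numbers yourself, the sketch below does it from PowerShell with the AWS CLI. The AWS/ApplicationELB namespace and the ActiveConnectionCount and ProcessedBytes metric names are my assumptions about what the service publishes (confirm them in the CloudWatch console), and the LoadBalancer dimension value is a placeholder.

# sum of active connections for one hour, in 5-minute buckets (placeholder load balancer dimension)
aws cloudwatch get-metric-statistics `
    --namespace AWS/ApplicationELB `
    --metric-name ActiveConnectionCount `
    --dimensions Name=LoadBalancer,Value=app/MyALB/0123456789abcdef `
    --start-time 2016-08-11T00:00:00Z --end-time 2016-08-11T01:00:00Z `
    --period 300 --statistics Sum

# swap the metric name for ProcessedBytes or NewConnectionCount to chart traffic or connection rate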

Support for Additional Protocols & Workloads
The Application Load Balancer supports two additional protocols: WebSocket and HTTP/2 (which evolved from SPDY).

WebSocket allows you to set up long-standing TCP connections between your client and your server. This is a more efficient alternative to the old-school method which involved HTTP connections that were held open with a “heartbeat” for very long periods of time. WebSocket is great for mobile devices and can be used to deliver stock quotes, sports scores, and other dynamic data while minimizing power consumption. ALB provides native support for WebSocket via the ws:// and wss:// protocols.

HTTP/2 is a significant enhancement of the original HTTP 1.1 protocol. The newer protocol supports multiplexed requests across a single connection. This reduces network traffic, as does the binary nature of the protocol.

The Application Load Balancer is designed to handle streaming, real-time, and WebSocket workloads in an optimized fashion. Instead of buffering requests and responses, it handles them in streaming fashion. This reduces latency and increases the perceived performance of your application.

Creating an ALB
Let’s create an Application Load Balancer and get it all set up to process some traffic!

The Elastic Load Balancing Console lets me create either type of load balancer:

I click on Application load balancer, enter a name (MyALB), and choose internet-facing. Then I add an HTTPS listener:

On the same screen, I choose my VPC (this is a VPC-only feature) and one subnet in each desired Availability Zone, tag my Application Load Balancer, and proceed to Configure Security Settings:

Because I created an HTTPS listener, my Application Load Balancer needs a certificate. I can choose an existing certificate that’s already in IAM or AWS Certificate Manager (ACM),  upload a local certificate, or request a new one:

Moving right along, I set up my security group. In this case I decided to create a new one. I could have used one of my existing VPC or EC2 security groups just as easily:

The next step is to create my first target group (main) and to set up its health checks (I’ll take the defaults):

Now I am ready to choose the targets—the set of EC2 instances that will receive traffic through my Application Load Balancer. Here, I chose the targets that are listening on port 80:

The final step is to review my choices and to Create my ALB:

After I click on Create the Application Load Balancer is provisioned and becomes active within a minute or so:

I can create additional target groups:

And then I can add a new rule that routes /api requests to that target:

Application Load Balancers work with multiple AWS services including Auto Scaling, Amazon ECS, AWS CloudFormation, AWS CodeDeploy, and AWS Certificate Manager (ACM).

Moving on Up
If you are currently using a Classic Load Balancer and would like to migrate to an Application Load Balancer, take a look at our new Load Balancer Copy Utility. This Python tool will help you to create an Application Load Balancer with the same configuration as an existing Classic Load Balancer. It can also register your existing EC2 instances with the new load balancer.

Availability & Pricing
The Application Load Balancer is available now in all commercial AWS regions and you can start using it today!

The hourly rate for the use of an Application Load Balancer is 10% lower than the cost of a Classic Load Balancer.

When you use an Application Load Balancer, you will be billed by the hour and for the use of Load Balancer Capacity Units, also known as LCUs. An LCU measures the number of new connections per second, the number of active connections, and data transfer. We measure on all three dimensions, but bill based on the highest one. One LCU is enough to support either:

  • 25 connections/second with a 2 KB certificate, 3,000 active connections, and 2.22 Mbps of data transfer or
  • 5 connections/second with a 4 KB certificate, 3,000 active connections, and 2.22 Mbps of data transfer.

Billing for LCU usage is fractional, and is charged at $0.008 per LCU per hour. Based on our calculations, we believe that virtually all of our customers can obtain a net reduction in their load balancer costs by switching from a Classic Load Balancer to an Application Load Balancer.
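
As a rough worked example of that model (my own numbers, assuming the 25 connections/second allowance that goes with a 2 KB certificate): suppose an hour peaks at 50 new connections/second, 4,500 active connections, and 4.44 Mbps of data transfer. Each dimension is divided by its per-LCU allowance, and the largest ratio is what gets billed.

# per-LCU allowances from the pricing description above
$newConnPerLcu = 25      # new connections/second (2 KB certificate)
$activePerLcu  = 3000    # active connections
$mbpsPerLcu    = 2.22    # Mbps of data transfer

# a hypothetical hour of traffic
$newConn = 50; $active = 4500; $mbps = 4.44

# billing is based on the highest of the three dimensions
$lcus = ($newConn / $newConnPerLcu), ($active / $activePerLcu), ($mbps / $mbpsPerLcu) |
    Measure-Object -Maximum | Select-Object -ExpandProperty Maximum

$lcus * 0.008   # LCU charge for that hour at $0.008 per LCU-hour

Here the new-connection and data-transfer dimensions both work out to 2 LCUs, so the LCU portion of the bill is 2 x $0.008 = $0.016 for that hour, on top of the hourly ALB rate.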

Jeff;

Now Available – IPv6 Support for Amazon S3

The content below is taken from the original (Now Available – IPv6 Support for Amazon S3), to continue reading please visit the site. Remember to respect the Author & Copyright.

As you probably know, every server and device that is connected to the Internet must have a unique IP address. Way back in 1981, RFC 791 (“Internet Protocol”) defined an IP address as a 32-bit entity, with three distinct network and subnet sizes (Classes A, B, and C – essentially large, medium, and small) designed for organizations with requirements for different numbers of IP addresses. In time, this format came to be seen as wasteful and the more flexible CIDR (Classless Inter-Domain Routing) format was standardized and put into use. The 32-bit entity (commonly known as an IPv4 address) has served the world well, but the continued growth of the Internet means that all available IPv4 addresses will ultimately be assigned and put to use.

In order to accommodate this growth and to pave the way for future developments, networks, devices, and service providers are now in the process of moving to IPv6. With 128 bits per IP address, IPv6 has plenty of address space (according to my rough calculation, 128 bits is enough to give 3.5 billion IP addresses to every one of the 100 octillion or so stars in the universe). While the huge address space is the most obvious benefit of IPv6, there are other more subtle benefits as well. These include extensibility, better support for dynamic address allocation, and additional built-in support for security.

Today I am happy to announce that objects in Amazon S3 buckets are now accessible via IPv6 addresses via new “dual-stack” endpoints. When a DNS lookup is performed on an endpoint of this type, it returns an “A” record with an IPv4 address and an “AAAA” record with an IPv6 address. In most cases the network stack in the client environment will automatically prefer the AAAA record and make a connection using the IPv6 address.

Accessing S3 Content via IPv6
In order to start accessing your content via IPv6, you need to switch to new dual-stack endpoints that look like this:

http://bit.ly/2bkgphX

or this:

http://bit.ly/2aZJIkR
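
In general, the new dual-stack endpoints follow the s3.dualstack.<region>.amazonaws.com naming pattern (the links above are shortened). A quick way to confirm that an endpoint really answers on both protocols is to query each record type; the region below is just an example.

# request the IPv4 (A) and IPv6 (AAAA) records for a dual-stack S3 endpoint
Resolve-DnsName -Name s3.dualstack.us-west-2.amazonaws.com -Type A
Resolve-DnsName -Name s3.dualstack.us-west-2.amazonaws.com -Type AAAA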

If you are using the AWS Command Line Interface (CLI) or the AWS Tools for Windows PowerShell you can use the --enabledualstack flag to switch to the dual-stack endpoints.

We are currently updating the AWS SDKs to support the use_dualstack_endpoint setting and expect to push them out to production by the middle of next week. Until then, refer to the developer guide for your SDK to learn how to enable this feature.

Things to Know
Here are some things that you need to know in order to make a smooth transition to IPv6:

Bucket and IAM Policies – If you use policies to grant or restrict access via IP address, update them to include the desired IPv6 ranges before you switch to the new endpoints. If you don’t do this, clients may incorrectly gain or lose access to the AWS resources. Update any policies that exclude access from certain IPv4 addresses by adding the corresponding IPv6 addresses.
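
As a sketch of what that update can look like, the policy below allows reads only from specific source ranges and lists both address families. The bucket name and both CIDR ranges are placeholders (2001:DB8::/32 is the reserved documentation prefix), and the policy is applied here with the AWS Tools for Windows PowerShell.

# placeholder bucket name and address ranges -- substitute your own
$policy = @'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowFromKnownRanges",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::example-bucket/*",
    "Condition": {
      "IpAddress": { "aws:SourceIp": ["192.0.2.0/24", "2001:DB8:1234::/48"] }
    }
  }]
}
'@

Write-S3BucketPolicy -BucketName example-bucket -Policy $policy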

IPv6 Connectivity – Because the network stack will prefer an IPv6 address to an IPv4 address, an unusual situation can arise under certain circumstances. The client system can be configured for IPv6 but connected to a network that is not configured to route IPv6 packets to the Internet. Be sure to test for end-to-end connectivity before you switch to the dual-stack endpoints.

Log Entries – Log entries will include the IPv4 or IPv6 address, as appropriate. If you analyze your log files using internal or third-party applications, you should ensure that they are able to recognize and process entries that include an IPv6 address.

S3 Feature Support – IPv6 support is available for all S3 features with the exception of Website Hosting, S3 Transfer Acceleration, and access via BitTorrent.

Region Support – IPv6 support is available in all commercial AWS Regions and in AWS GovCloud (US). It is not available in the China (Beijing) Region.

Jeff;

New – Bring Your Own Keys with AWS Key Management Service

The content below is taken from the original (New – Bring Your Own Keys with AWS Key Management Service), to continue reading please visit the site. Remember to respect the Author & Copyright.

AWS Key Management Service (KMS) provides you with seamless, centralized control over your encryption keys. Our customers have told us that they love this fully managed service because it automatically handles all of the availability, scalability, physical security, and hardware maintenance for the underlying Key Management Infrastructure (KMI). It also centralizes key management, with one dashboard that offers creation, rotation, and lifecycle management functions. With no up-front cost and usage-based pricing that starts at $1 per Customer Master Key (CMK) per month, KMS makes it easy for you to encrypt data stored in S3, EBS, RDS, Redshift, and any other AWS service that’s integrated with KMS.

Many AWS customers use KMS to create and manage their keys. A few, however, would like to maintain local control over their keys while still taking advantage of the other features offered by KMS. Our customers tell us that local control over the generation and storage of keys would help them meet their security and compliance requirements in order to run their most sensitive workloads in the cloud.

Bring Your Own Keys
In order to support this important use case, I am happy to announce that you can now bring your own keys to KMS. This allows you to protect extremely sensitive workloads and to maintain a secure copy of the keys outside of AWS. This new feature allows you to import keys from any key management and HSM (Hardware Security Module) solution that supports the RSA PKCS #1 standard, and use them with AWS services and your own applications. It also works in concert with AWS CloudTrail to provide you with detailed auditing information. Putting it all together, you get greater control over the lifecycle and durability of your keys while you use AWS to provide high availability. Most key management solutions in use today use an HSM in the back end, but not all HSMs provide a key management solution.

The import process can be initiated from the AWS Management Console, AWS Command Line Interface (CLI), or by making calls to the KMS API. Because you never want to transmit secret keys in the open, the import process requires you to wrap the key in your KMI beforehand with a public key provided by KMS that is unique to your account. You can use the PKCS #1 scheme of your choice to wrap the key.

Following the directions (Importing Key Material in AWS Key Management Service), I started out by clicking on Create key in the KMS Console:

I entered an Alias and a Description, selected External, and checked the “I understand…” checkbox:

Then I picked the set of IAM users that have permission to use the KMS APIs to administer the key (this step applies to both KMS and External keys, as does the next one):

Then I picked the set of IAM users that can use the key to encrypt and decrypt data:

I verified the key policy, and then I downloaded my wrapping key and my import token. The wrapping key is the 2048-bit RSA public key that I’ll use to encrypt the 256-bit secret key I want to import into KMS. The import token contains metadata to ensure that my exported key can be imported into KMS correctly.

I opened up the ZIP file and put the wrapping key into a directory on my EC2 instance. Then I used the openssl command twice: once to generate my secret key and a second time to wrap the secret key with the wrapping key. Note that I used openssl as a convenient way to generate a 256-bit key and prepare it for import. For production data, you should use a more secure method (preferably a commercial key management or HSM solution) of generating and storing the local copy of your keys.

# generate a random 256-bit (32-byte) AES key
$ openssl rand -out plain_text_aes_key.bin 32
# wrap (encrypt) the key with the downloaded RSA public wrapping key, using OAEP padding
$ openssl rsautl -encrypt -in plain_text_aes_key.bin -oaep \
  -inkey wrappingKey_fcb572d3-6680-449c-91ab-ac3a5c07dc09_0804104355 \
  -pubin -keyform DER -out enc.aes.key

Finally, I brought it all together by checking “I am ready to upload…”  and clicking on Next, then specifying my key materials along with an expiration time for the key. Since the key will be unusable by AWS after the expiration date, you may want to choose the option where the key doesn’t expire until you better understand your requirements. You can always re-import the same key and reset the expiration time later.

I clicked on Finish and the key was Enabled and ready for me to use:

And that’s all I had to do!

Because I set an expiration date for the key, KMS automatically created a CloudWatch metric to track the remaining time until the key expires. I can create a CloudWatch Alarm for this metric as a reminder to re-import the key when it is about to expire. When the key expires, a CloudWatch Event will be generated; I can use this to take an action programmatically.
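
As a sketch of that reminder, the command below creates an alarm that fires when less than a week of key-material validity remains. The AWS/KMS namespace and the SecondsUntilKeyMaterialExpiration metric name are assumptions on my part (verify them in the CloudWatch console), and the key ID and SNS topic ARN are placeholders.

# alert via an existing SNS topic when fewer than 7 days (604,800 seconds) remain
aws cloudwatch put-metric-alarm `
    --alarm-name imported-key-expiring-soon `
    --namespace AWS/KMS `
    --metric-name SecondsUntilKeyMaterialExpiration `
    --dimensions Name=KeyId,Value=1234abcd-12ab-34cd-56ef-1234567890ab `
    --statistic Minimum `
    --period 86400 `
    --evaluation-periods 1 `
    --threshold 604800 `
    --comparison-operator LessThanThreshold `
    --alarm-actions arn:aws:sns:us-west-2:111122223333:key-expiry-alerts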

Available Now
This new feature is now available in AWS GovCloud (US) and all commercial AWS regions except for China (Beijing) and you can start using it today.

Jeff;

Autodesk Just Gave Every Fab Lab Access to $25,000 in Design Software

The content below is taken from the original (Autodesk Just Gave Every Fab Lab Access to $25,000 in Design Software), to continue reading please visit the site. Remember to respect the Author & Copyright.

Autodesk has just announced that it will be giving away almost $25,000 worth of CAD software licenses to registered Fab Labs.

Read more on MAKE

The post Autodesk Just Gave Every Fab Lab Access to $25,000 in Design Software appeared first on Make: DIY Projects and Ideas for Makers.

This Calculator Makes Sure You Always Get the Most Pizza For Your Money

The content below is taken from the original (This Calculator Makes Sure You Always Get the Most Pizza For Your Money), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you want to get really mathematical, this calculator will tell you exactly how much more pizza you’re getting with a larger size.

The 16 most pivotal events in Windows history

The content below is taken from the original (The 16 most pivotal events in Windows history), to continue reading please visit the site. Remember to respect the Author & Copyright.

Maybe you thought every pivotal Windows moment was a product release. Not so. As good as it was, Windows XP also unleashed Windows Genuine Advantage—or what we now refer to as “activation”—upon an unsuspecting world. It was the first step in evolving Windows from a “hobby” to what some would refer to as “Micro$oft.”

This attitude was nothing new. In 1976, Bill Gates penned “An Open Letter to Hobbyists,” where he complained that the amount of royalties paid by customers using its BASIC software amounted to about $2 per hour. “Most directly, the thing you do is theft,” Gates wrote, essentially equating sharing code with outright stealing.

Microsoft sought to curtail this activity with the release of Windows Genuine Advantage, which stealthily installed itself onto millions of PCs by way of a high-priority “update.” (Sound familiar?) Windows Genuine Advantage consisted of two parts, one to actually validate the OS and another to inform users whether they had an illegal installation: In 2006, Microsoft said it had found about 60 million illegal installations that failed validation.

Now? Virtually every standalone product Microsoft sells comes with its own software protections and licenses. If you want a “hobby” OS, you run Linux—which Microsoft also spent millions trying to discredit, to no avail.

Thousands of Amiga games now available to play in your web browser. Here are the best ones

The content below is taken from the original (Thousands of Amiga games now available to play in your web browser. Here are the best ones), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Internet Archive is at it again, this time uploading a massive collection of over 2,000 classic Amiga games for you to play. These are directly playable in your web browser, no emulator […]

Create a Self-Signed Certificate Using PowerShell

The content below is taken from the original (Create a Self-Signed Certificate Using PowerShell), to continue reading please visit the site. Remember to respect the Author & Copyright.

In today’s Ask the Admin, I’ll show you how to quickly create a self-signed certificate.

Self-signed certificates are not recommended for use in production environments, but come in handy for test scenarios where a certificate is a requirement but you don’t have the time or resources to either buy a certificate or deploy your own Public Key Infrastructure (PKI).

Create a self-signed certificate using PowerShell (Image Credit: Russell Smith)

But generating self-signed certificates in Windows has traditionally been a bit of a pain, at least if you didn’t have Visual Studio or IIS on hand, as both these products include the ability to generate self-signed certificates. The makecert command line tool was otherwise the “go to” tool, but was only available as part of the Windows SDK, which is a hefty product to download and install just for the sake of using makecert.

Starting in PowerShell version 4.0, Microsoft introduced the New-SelfSignedCertificate cmdlet, making it much easier to create self-signed certificates. To get started, you’ll need a Windows device running PowerShell 4.0 or higher.

  • Open a PowerShell prompt. In Windows 10, type powershell in the search dialog on the taskbar, right-click Windows PowerShell in the list of app results, select Run as administrator from the menu and then enter an administrator username and password. The New-SelfSignedCertificate cmdlet can only install certificates to the My certificate store, and that requires local administrator rights on the device.
  • If you’re running a different version of Windows, check the PowerShell version by running the code shown below.
$PSVersionTable.PSVersion

If you need to update PowerShell to version 5, you can download the Windows Management Framework for Windows 7 and Windows 8.1 here.

  • Now run the New-SelfSignedCertificate cmdlet as shown below to add a certificate to the local store on your PC, replacing testcert.petri.com with the fully qualified domain name (FQDN) that you’d like to use.
$cert = New-SelfSignedCertificate -certstorelocation cert:\localmachine\my -dnsname testcert.petri.com

The next step is to export a self-signed certificate. But first we’ll need to create a password as shown below:

$pwd = ConvertTo-SecureString -String 'passw0rd!' -Force -AsPlainText

Now we can export a self-signed certificate using the Export-PfxCertificate cmdlet. We’ll use the password ($pwd) created above, and create an additional string ($path), which specifies the path to the certificate created with New-SelfSignedCertificate cmdlet.

$path = 'cert:\localMachine\my\' + $cert.thumbprint
Export-PfxCertificate -cert $path -FilePath c:\temp\cert.pfx -Password $pwd

Note that the c:\temp directory, or whatever directory you specify in the -FilePath parameter, must already exist. You can now import the cert.pfx file to install the certificate.
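
If you want to complete the loop, importing the PFX into the local machine store on the target device looks like this, reusing the $pwd secure string created earlier (or recreating it on that machine):

# install the exported certificate on a target machine (run from an elevated prompt)
Import-PfxCertificate -FilePath C:\temp\cert.pfx -CertStoreLocation Cert:\LocalMachine\My -Password $pwd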

The post Create a Self-Signed Certificate Using PowerShell appeared first on Petri.

Alibaba Offers to Help Global Tech Companies Navigate China

The content below is taken from the original (Alibaba Offers to Help Global Tech Companies Navigate China), to continue reading please visit the site. Remember to respect the Author & Copyright.

(Bloomberg) — Alibaba is extending a hand to companies such as SAP keen on operating in China, proffering a window into a market that’s increasingly hostile to foreign technology.

China’s largest e-commerce company is aiming to help them comply with local regulations and sell their products, as it seeks new areas of growth to combat a slowing economy at home. Its new AliLaunch program makes use of its cloud computing platform and can help clients with joint ventures and marketing. Its biggest customer so far is Germany’s SAP, which will sell its Hana data-software and services on Alibaba’s cloud.

Securing an influential Chinese partner has become key to cracking the domestic market. China has championed homegrown services over foreign technology, after saying last year it will block software, servers and computing equipment. A tightening of regulations on everything from data to content has also threatened the ability of U.S. companies to participate in China’s $465 billion market for information products.

Alibaba Cloud “is able to help its overseas technology partners comply with data security laws in the country,” Alibaba Vice President Yu Sicheng told a conference in Beijing. The company said it aims to sign up 50 partners over the next 12 months.

Alibaba is betting on internet-based computing and big data to boost growth in the next decade. The company is exploring artificial intelligence to help provide real-time comments for basketball games, predict traffic or public sentiment. While the cloud division contributed just 4.7 percent of revenue in the March quarter, it’s Alibaba’s fastest-growing business and a primary driver of growth over the longer term.

See also: Top Cloud Providers Made $11B on IaaS in 2015, but It’s Only the Beginning

Office 365 training courses to increase your expertise

The content below is taken from the original (Office 365 training courses to increase your expertise), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft’s Office 365 subscription service makes the company’s most popular apps and business tools available via the cloud, which means the features change often. Office 365 users, including beginners and veteran IT professionals, have a wide range of options for related training tools, depending on their needs and levels of expertise. Many free, basic training classes are available to teach the ins and outs of Office 365, but IT professionals will likely benefit more from intensive, and sometimes costly, seminars that can help prepare for formal tech support certification.

Many IT pros, for example, must pass Microsoft Certified Solutions Associate (MCSA) exams to qualify as cloud applications administrators focused on managing Office 365. Microsoft’s Office 365 suite constantly changes as the company introduces new versions of apps, and Microsoft updates the MCSA exams accordingly.

The following Office 365 training classes cover everything from basic training and demonstrations to comprehensive studies for certification. 

Free Microsoft Office 365 training courses

Microsoft’s Office Training Center is the most obvious place for beginners to get started on their Office 365 journeys. People can learn the basics of Office 365 and access quick-start guides for the latest versions of Word, Excel, PowerPoint, Outlook and OneNote. Microsoft also provides scenario-based training to demonstrate Office 365’s team productivity features, such as how coworkers can save files to the cloud, and then share and co-edit them, and collaborate via Skype. 

The company also has a thorough offering of videos designed to help customers explore the key features of Office 365, group administration, security and compliance, identity management, Exchange, data loss prevention policies and archiving. And the Microsoft Virtual Academy provides a series of free Office 365 courses for IT pros, ranging from fundamentals to identity management, advanced services, and tools for administration.

[Related: Office 365 gets new Word, PowerPoint and Outlook features]

Lynda.com is another valuable online resource for learning the essentials of Office 365. The video-based training site, which LinkedIn acquired in April 2015, offers a collection of videos on the latest Office 365 features, as well as training overviews for Word, Outlook, Excel and PowerPoint.

Office 365 IT training and certifications

Microsoft supports a group of professional trainers and educators who can help prepare IT pros for MCSA certification. These training courses can be done in person or remotely via video, and the cost varies considerably. Some Office 365 training courses last for weeks, while other workshops last as long as individuals need them. The following courses are just a small sample of the many Office 365 training options currently available to help IT administrators prepare for MCSA certification for Office 365.

ONLC Training Centers offer a remote instructor-led IT training course called “Enabling and Managing Office 365,” for $2,795. The five-day course is aimed at IT professionals who will evaluate, plan, deploy and operate Office 365 services.

NetCom Learning offers training courses that range from a single hour each to 40 hours. The company’s Microsoft Office 365 courses are designed to help IT professionals learn about the platform’s infrastructure, how to manage identities and services, customize features, file management and collaboration.

QA currently offers nine courses for Office 365 administration, ranging from one to five days each. The professional training company provides curriculums focused on administration and deployment of Office 365 Exchange, SharePoint and Skype. And it also offers additional courses for developing with Office 365 APIs, designing for Office 365 infrastructure and a technical overview of Office 365 for IT professionals. 

Koenig Solutions offers a handful of Microsoft Office 365 training and certification classes. The company provides an introductory class and courses focused on messaging, infrastructure and end users support. A new five-day course from Koenig aims to help IT professionals plan, deploy, operate and evaluate Office 365 services, and it costs $1,990. 

CED Solutions offers a six-day training course for MCSA Office 365 certification. The course focuses on the skills required to set up and support Office 365 with the proper user identities and support technologies.

This story, “Office 365 training courses to increase your expertise” was originally published by CIO.

Windows Store for Business

The content below is taken from the original (Windows Store for Business), to continue reading please visit the site. Remember to respect the Author & Copyright.

Windows Store for Business home page [Credit: Microsoft]

In today’s Ask the Admin, I’ll take a look at Microsoft’s Windows Store for Business, which was launched at the end of 2015.

Windows Store for Business (Image Credit: Russell Smith)

If you’re familiar with the Windows Store, the curated app store for consumers that first appeared in Windows 8, then Windows Store for Business will feel familiar: it extends that model to enterprises in the form of a web-based portal and is available in 21 markets. Windows Store for Business can be accessed by anyone who’s signed up for the service, but businesses can also use it to create their own private portals for distributing purchased apps or apps developed in-house.

The Windows Store for Business makes managing volume licensing easier too, giving organizations control over purchasing administration and licensing through integration with Azure Active Directory (AAD). Instead of requiring a Microsoft ID, Windows Store for Business allows apps to be purchased under an organizational identity, and licenses can be revoked and reissued as required. For more information on AAD, see What is Azure Active Directory? on the Petri IT Knowledgebase.

Windows Store for Business basics

To get started with Windows Store for Business, you’ll need an AAD account that has Global Administrator permissions for your tenant. Employees who need access to the store will also need AAD accounts, unless you have infrastructure in place to distribute offline apps, which requires Microsoft System Center Configuration Manager (SCCM), Intune, or another Mobile Device Management (MDM)-compatible service.

Licensing apps in the Windows Store for Business (Image Credit: Microsoft)

Apps can be assigned to an organization’s private store, from which users can download apps manually, or apps can be assigned directly to users or teams. Only AAD Global and Billing Administrators can purchase and distribute apps. It’s worth noting that disconnected (offline) licensing, where apps are distributed using SCCM or another solution, is only available for purchased apps if the developer has enabled it.

Custom Line-of-Business apps

If you have custom line-of-business (LOB) apps to distribute, they can be added to your organization’s private portal in Windows Store for Business using the Windows Dev Center, and then distributed using MDM, SCCM or Intune. To publish LOB apps, you’ll need a developer account in the Windows Dev Center.

Private portal in the Windows Store for Business (Image Credit: Microsoft)

 

Although Windows Store for Business is only a version 1 product at this point, it’s a welcome development for organizations wanting to purchase or distribute their own Universal Windows Platform (UWP) apps or purchase in volume from the store. With the Anniversary Update for Windows 10 adding the ability to package Win32 desktop apps in a UWP wrapper for distribution via the Windows Store, the Windows Store for Business will become even more useful for enterprises that want to manage app distribution.

 

 

The post Windows Store for Business appeared first on Petri.

Cognitive computing: IBM uses phase-change material to model your brain’s neurons

The content below is taken from the original (Cognitive computing: IBM uses phase-change material to model your brain’s neurons), to continue reading please visit the site. Remember to respect the Author & Copyright.

IBM scientists claimed – for the first time – to have created artificial spiking neurons using a phase-change material, opening up the possibilities of building a neural network that could be used for AI.

The brain is the biggest inspiration for researchers working in cognitive computing: the exact mechanisms that describe how a brain learns remains a mystery, but the whitecoats know it operates much better than any computer.

To capture the essence of intelligence, researchers turned to mimicking the brain. IBM has built artificial neurons that can fire and carry an electric pulse, recreating the biological processes happening in grey matter.

Biology versus technology

In biological neurons, two thin lipid layers keep any electrical charge within the cell. If an impulse carried along by a dendrite – the long spines of the neuron – is large enough, it can excite the electrical potential in between the lipid layers and the neuron fires a zap of electricity.

In artificial neurons, however, the lipid layers are replaced with electrodes, with a layer of a chalcogenide-based phase-change material sandwiched in between. Input signals pass through the artificial neuron and increase the electric potential of the electrodes. If the voltage pulses are great enough, the electric current passing through will melt the phase-change material, increasing its conductivity.

The electrical current flowing through it increases. Once the conductance reaches a threshold level, the electric pulse becomes large enough to fire and the phase-change device resets. The chalcogenide-based material returns back to its crystalline phase.

The “integrate-and-fire” mechanism is consistent across a range of timescales and frequencies similar to the brain (10^8 nanoseconds, corresponding to a 10Hz update frequency) and beyond.

The ability for the phase-change material to reset means that the artificial neuron can be reused. The switching cycles between a crystalline and amorphous phase can be repeated 10^12 times, corresponding to over 300 years of operation if the artificial neuron was working at a frequency of 100Hz, Big Blue’s paper stated.

Another feature IBM has achieved when imitating biology is the randomness of artificial neurons – a feature known as stochasticity.

The positions of the atoms in the material are never the same after the artificial neuron goes through the integrate-and-fire process. The change in phase alters the thickness of the material after it is fired and reset every time, which means each firing event is slightly different.
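
To make the integrate-and-fire idea concrete, here is a toy software sketch of the behaviour described above. It is not IBM's phase-change device model; all names and values are illustrative. The potential integrates incoming signals, fires at a threshold, and resets to a slightly random value to mimic the stochasticity:

# Toy stochastic integrate-and-fire loop (illustrative only)
$rand = New-Object System.Random
$threshold = 1.0
$potential = 0.0

for ($t = 0; $t -lt 200; $t++) {
    $inputSignal = $rand.NextDouble() * 0.1           # stand-in for an incoming signal
    $potential = ($potential * 0.95) + $inputSignal   # leaky integration of inputs

    if ($potential -ge $threshold) {
        Write-Output "t=$t : fire"
        # Stochastic reset: the post-fire state is never exactly the same,
        # loosely mirroring the slightly different material state after each cycle
        $potential = $rand.NextDouble() * 0.05
    }
}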

This is where neuromorphic computing diverges from conventional computing, Tomas Tuma, lead author of the study and researcher working at IBM’s Zurich Research Laboratory, told The Register.

“Conventional computers are never perfect, but any randomness is suppressed. In neuromorphic computing, however, we don’t mind the randomness. Actually, this random behaviour is parallel to the brain. Not all the neurons in the brain work the same, some are dead or not as effective,” Tuma said.

Stochasticity is actually essential in harnessing the full power of neural networks, Evangelos Eleftheriou, co-author of the study and IBM Fellow at the Zurich Research Laboratory, added.

Machine intelligence

A single neuron is not as effective as a network of neurons for unsupervised learning – a type of machine learning used in AI.

The artificial neuron has the potential to detect correlations in large streams of data that act as the input signal. IBM used 1,000 streams of binary events, where 100 were mutually correlated as the input signal.

Initially, the neuron fired at a high rate as it tried to find the correlations among the signals. But over time the system evolves, and the feedback loop means uncorrelated signals are more likely to be depressed while correlated signals begin to take over.

The strength of correlation between signals results in a growing electric pulse which will eventually lead to a large spike once the 100 correlated signals have been singled out.

Neural networks have a greater ability to make sense of data quickly as they reach higher levels of computing power generated from many neurons. The input signal could be signals fed from other neurons, which would result in a more thorough and quicker search for correlations between data. The output signals that trickled down from the neurons would become increasingly refined as they passed through layers of neurons.

Diagram of how the artificial neuron works. Photo credit: Nature Nanotechnology and Tuma et al.

Pattern recognition is the main aim in machine learning, said Professor Leslie Smith, a researcher working in the Cognitive Computation research group at the University of Stirling, who was not involved in IBM’s research.

“You want neurons firing as it means that patterns can be spotted in real time,” Smith told The Register. “For example, you don’t want autonomous vehicles to analyse images one by one. It needs to be looking and analysing data all the time to adapt to its surroundings,” Smith said.

Neuromorphic computing is a relatively new area and is growing in popularity. “It grew in the 1960s with Marvin Minsky and Seymour Papert, then died down in the 1980s. But it’s coming back into vogue again,” Smith said.

The number of possible applications for neuromorphic computing stretches beyond AI. It could also be used in the Internet of Things craze: sensors that operate on cognitive computing could collect and analyze volumes of weather data at the edge for faster forecasts, IBM said.

The need to deal with huge sets of data quickly and at low power is becoming increasingly apparent. “We are in the cognitive era of computing,” Tuma told The Register. “In the future, neuromorphic chips could even be used as co-processors,” he claimed. ®

Build, buy or rent your IoT communications stack?

The content below is taken from the original (Build, buy or rent your IoT communications stack?), to continue reading please visit the site. Remember to respect the Author & Copyright.

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

The Internet of Things (IoT) is shepherding in the next communication revolution – machines communicating with other machines – at a scale and volume unfathomable until only very recently. The Internet comprises some 1 billion sites and around 5 billion devices. Predictions of growth for the next five years vary from the mundane 25 billion to the wild extremes of 50 or even 100 billion Internet connected devices.

To make IoT successful, developers will need to connect these billions of devices in a meaningful way, delivering truly distributed machine-to-machine (M2M) computing and device control.  However, the IoT M2M challenge is also about ensuring security, reliability, privacy and realtime communications on a plethora of devices hobbled by small, low power CPUs and limited memory.  And the question is, do you build or buy the IoT communications stack?

One of the biggest benefits gained by building your stack is you can design software that does exactly what you want, meeting every single need of your application, but that comes with both short- and long-term costs and a lot of unknowns.  As you investigate building your own IoT communication stack you should not only consider the obvious build and maintenance costs and time-to-market factors, but also the following:

  • Security—Security is emerging as the most important aspect of any digital system, from cars to homes, to wristwatches and phones. Security is even more important for broader IoT, as more and more devices interact with the physical world. Secure communications, security of the device and security of the application servers are just a few of the many aspects of IoT security. Once hackers break in, they can wreak havoc on your devices, your users and your business. Do you have up-to-date encryption and security expertise in your development team? Do you have the ability to continuously update your communication stack to keep up with the latest vulnerabilities? Will you be able to bake in security throughout your application, servers and devices?
  • Push vs. pull notifications—Implementing pull can be easy, but it can have major impacts on device battery life and the amount of data transmitted over potentially expensive networks (a minimal polling sketch follows this list). In a pull architecture, the polling interval must be frequent enough that your devices or servers get data in a timely fashion, which means running servers that handle a lot of empty requests, and that can be costly. Implementing push is much more difficult, especially if the application must support a wide variety of devices, operating systems and networks. To support live push notifications you must establish a secure open socket connection, which can open up security risks if not done correctly. However, the advantage lies in getting data to the right devices at the right time, instantly, anywhere in the world. Which do you need for your application?
  • Access management—How do you ensure your communication goes to the correct devices and users? For that matter, how do you identify your different users and devices? How do you enforce permissions? Do you need two factor authentication?
  • Mobility—Many IoT devices will be mobile, accessing the internet over Wi-Fi or cellular data networks. This means constantly changing device addresses, intermittent connectivity, long periods of communication failures and noisy, error-filled messages. Do you have the ability to limit data transfer on expensive or slow networks? Do you need message-level checksums? How do you handle missed messages and redelivery? Can you handle rapidly changing addresses? How well do you understand how mobility affects the intricacies of networking protocols?
  • Network Connectivity—Can your application withstand network connectivity issues including latency, jitter, slow links and intermittency? Do you need guaranteed quality of service (QoS)?
  • Presence detection—Can you detect when a user or device goes online or offline? If you lose connection with a device, does your application care if there was a network failure, a device failure or the user decided to exit the application?
  • Message storage and playback—Do you have the ability to store messages destined for an unreachable device and the ability to play back those messages when the device reconnects? Does your device have the ability to store and playback messages destined for the server or other devices during connectivity outages?
  • Realtime—What does realtime mean for your IoT application? Do messages need to arrive within a certain time frame? Do you need to acknowledge messages? What happens if messages arrive out of order?
  • Analytics—How do you measure your communications infrastructure to ensure reliability, security and efficiency?
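
As flagged in the push-vs-pull item above, here is a minimal polling (pull) sketch. It assumes a hypothetical REST endpoint and a fixed 30-second interval, purely to illustrate the trade-off: simple to write, but the loop runs, and usually returns nothing, whether or not new data exists.

# Minimal pull/polling loop against a hypothetical endpoint (illustrative only)
$endpoint = 'https://example.com/devices/42/messages'

while ($true) {
    try {
        $messages = Invoke-RestMethod -Uri $endpoint -Method Get -TimeoutSec 10
        foreach ($msg in $messages) {
            Write-Output ("Received: " + ($msg | ConvertTo-Json -Compress))
        }
    }
    catch {
        Write-Warning "Poll failed: $($_.Exception.Message)"   # e.g. device offline or network error
    }
    Start-Sleep -Seconds 30   # most polls on a quiet device are wasted round trips
}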

Once you have decided how you will address these critical IoT communication issues, you must next investigate the server-side infrastructure to support your IoT application. This leads to a different set of issues, including:

  • Scaling—Do you need to quickly scale up and down your communication capacity to support peak times or a rapidly growing business? Can you automate the scaling? Can you leverage third-party service providers to augment your internal infrastructure for rapid elasticity?
  • Security and privacy—How do you build in and maintain a secure infrastructure? What is your plan for maintaining and patching servers, storage, and routers? Is your infrastructure susceptible to hacking, phishing or social engineering? What about physical security? How do you handle regulatory requirements such as HIPAA and SOX? How do you handle law enforcement issues?
  • Geophysical presence—Do you need a geophysical presence to support adequate network latencies or to fulfill regulatory compliance? Do your customers require non-US data storage?
  • Uptime and SLAs—What are your applications’ uptime requirements? Do you have service guarantees (service level agreements or SLAs)? How will you meet your requirements? Do you have disaster recovery plans and sites? Do you have the ability to test your infrastructure for resiliency?
  • Support—Does your infrastructure require 24x7x365 staffing and support?

Addressing all of these issues takes a highly specialized and technically proficient engineering team. You must also invest in the development and testing efforts. Most importantly, building your own robust, secure, reliable IoT communication stack takes a significant amount of time, which can impact your application time-to-market. Engineering talent, time and capital are all challenges for large organizations, and may put developing a custom solution out of reach for small teams and startups.

Buy or rent an IoT communication stack?

Instead of building your own custom stack, you can acquire and run your own. There are numerous open source and commercial communication solutions available that address many of the IoT M2M issues. However, open source software often lacks adequate documentation and support. And feature roadmaps, schedules and bug fixes are at the whim and mercy of the stack maintainers.

Commercial offerings may be a better option as they typically provide support and bug fixes for both client libraries and server side components.  However, teams tend to underestimate the time and costs associated with deploying and maintaining a 24/7 distributed infrastructure needed to support real-time applications.  

Instead of building your own, or acquiring the entire stack, you can “rent” the software stack and server side infrastructure. Software-as-a-service (SaaS) companies are now providing end-to-end IoT communications environments, combining software for virtually any device with a complete server side communication infrastructure. These service providers solve most, if not all, of the communication and server issues.

SaaS providers have already invested the time, money, and resources to become the authorities in IoT communications. They have dedicated experts in security, networking, communications and operations. They have built scalable, secure, reliable and resilient server side infrastructures to support large and rapidly expanding IoT applications.

With multiple providers, you have a choice of features, functionality and services. Some vendors choose to be generalists, providing a comprehensive suite of services to meet most basic IoT requirements. Other providers focus on becoming specialists, providing unique features such as minimum worldwide latency, global redundancy and uptime guarantees.

SaaS providers offer multiple pricing models, from per-device or per-node charges to charges based on transactions or data volume. Other factors that affect pricing include SLAs, QoS, geophysical presence and per-feature charges. This gives you flexibility and the ability to control your costs based on your specific application. It also allows you to start small and grow as your application grows, limiting your capital investment.

SaaS solutions are designed to be quick and easy to integrate into your application. In fact, some solutions require only a few lines of code. This allows you to focus on your application, ignoring all of the complexities of IoT communications. The speed and simplicity of SaaS solutions lend themselves to the fail-early, fail-fast and fail-often development model, ensuring that you rapidly innovate toward the most valuable solutions, and once you launch, the service can scale with your business.

In conclusion, as with most other business decisions, the build versus buy decision boils down to time versus money, CapEx versus OpEx, and Total Cost of Ownership. Building your own IoT communication stack, with or without commercial or open source software, ensures that you have every feature you need. However, it takes a significant investment of time, capital and resources and requires highly specialized technical expertise. 

You should only build your own solution if you can a) create a competitive advantage with your custom software, b) build a big enough business to spread the cost of your proprietary system over a large number of clients, minimizing the per-client cost of the effort, c) afford the long time to market, and d) have the in-house expertise needed to build and maintain a complex distributed computing environment.

Given that the IoT is still in its early stages, the more practical and pragmatic strategy is to use a SaaS provider, enabling you to develop robust, secure and reliable IoT applications while drastically shrinking your capital investment and development time.

OpenStack Developer Mailing List Digest July 23 to August 5

The content below is taken from the original (OpenStack Developer Mailing List Digest July 23 to August 5), to continue reading please visit the site. Remember to respect the Author & Copyright.

Equal Chances For All Projects

  • A proposal [1] in the OpenStack governance repository that aims to have everything across OpenStack be plugin based, or allow all projects access to the same internal APIs.
  • Some projects have plugin interfaces, but also have project integrations in tree. Makes it difficult to see what a plugin can, and should do.
  • With the big tent, we wanted to move to a flatter model, removing the old integrated status.
  • Examples:
    • A standard command line interface or UI for setting quotas is hard to achieve for projects that aren’t Nova, Neutron or Cinder.
      • Quotas in Horizon for example are set in “admin → quotas”, but plugins can’t be in here.
      • OpenStack Client has “openstack quota set --instances 10” for example.
      • Steve Martinelli, who contributes to OpenStack Client, has verified that this is not by design, but a lack of contributor resources.
    • Tempest plugins using unstable resources (e.g. setting up users, projects for running tests on). Projects in tree have the benefit of any change having to pass gate before it merges.
      • Specification to work towards addressing this [2].
      • The stable interface still needs work in increasing what it exposes to plugins. This requires a bit of effort and is prioritized by the QA team.
        • All tests in Tempest consume the stable interface.
      • Since a lot of plugins use the unstable interfaces, the QA team is attempting to maintain backwards compatibility until a stable version is available, which is not always an option.
      • Tempest.lib [3] is what’s considered the “stable interface”
  • Given the amount of in progress work for the examples given, there doesn’t seem a disagreement with the overall goal to warrant a global rule or policy.
  • An existing policy exists [4] with how horizontal teams should work with all projects.
  • Full thread and continued thread

Establishing Project-wide Goals

  • An outcome from the leadership training session that members of the Technical Committee participated in was setting community-wide goals for accomplishing specific technical tasks to get projects synced up.
  • There is a change to the governance repository [5] that sets the expectations of what makes a good goal and how teams are meant to approach working on them.
  • Two goals proposed:
    • Support Python 3.5 [6]
    • Switch to Oslo libraries [7]
  • The Technical Committee wants to set a reasonable number of small goals for a release, not invasive top-down design mandates that teams would want to resist.
    • Teams could possibly have a good reason for not wanting or being able to fulfill a goal. It just needs to be documented and not result in being removed from the big tent.
  • Full thread

API Working Group News

  • Cinder is looking into exposing resource capabilities.
  • Guidelines under review:
    • Beginning set of guidelines for URIs [10]
    • Add description of pagination parameters [11]
  • Full thread

Big Tent?

  • Should we reconsider whether the big tent is the right approach, given some noticed downsides:
    • Projects not working together because of fear of adding extra dependencies.
    • Reimplementing features, badly, instead of standardizing.
    • More projects created due to politics, not technical reasons.
    • Less cross-project communication.
    • Operator pain in assembling loose projects.
    • Architectural decisions made at individual project level.
  • Specific examples:
    • Magnum trying not to use Barbican.
    • Horizon discussions at the summit about wanting to use Zaqar for updates instead of polling, but not being able to depend on a non-widely-deployed subsystem.
    • Incompatible virtual machine communication:
      • Sahara uses ssh, which doesn’t play well with tenant networks.
      • Trove uses rabbit for the guest agent to talk back to the controller.
  • The overall goal of big tent was to make the community more inclusive, and these issues pre-date big tent.
  • The only thing that can really force people to adopt a project is DefCore, but that comes with a major chicken and egg problem.
  • What’s not happening today is a common standard that everything can move towards. Clint Byrum’s proposal on an Architecture working group might be a way forward.
  • The Technical Committee is a balancing act of trying to provide this, but not interfere too much with a project in which members may not have specific experience with the project’s domain.
  • Sahara has had some success with integration with other projects.
    • Kilo/Liberty integrating with Heat for deploying clusters.
    • Liberty/Mitaka integrated Barbican.
    • Using Manila shares for datasources.
    • Liberty/Mitaka added Sahara support in OpenStack Client.
    • In progress, support with Designate.
  • Full thread

 

[1] – http://bit.ly/2b8ozH8

[2] – http://bit.ly/2b8oB4N

[3] – http://bit.ly/2b8nTBC

[4] – http://bit.ly/2b8pJ8w

[5] – http://bit.ly/2b8oSS2

[6] – http://bit.ly/2b8pk6a

[7] – http://bit.ly/2b8nG1l

[8] – http://bit.ly/2b8oNxN

[9] – http://bit.ly/2b8oC92

[10] – http://bit.ly/2b8oUt8

[11] – http://bit.ly/2b8pvyn

Raspberry Pi 3 Gets USB, Ethernet Boot

The content below is taken from the original (Raspberry Pi 3 Gets USB, Ethernet Boot), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Raspberry Pi is a great computer, even if it doesn’t have SATA. For those of us who have lost a few SD cards to the inevitable corruption that comes from not shutting a Pi down properly, here’s something for you: USB Mass Storage Booting for the Raspberry Pi 3.

For the Raspberry Pi 1, 2, Compute Module, and Zero, there are two boot modes – SD boot, and USB Device boot, with USB Device boot only found on the Compute Module. [Gordon] over at the Raspberry Pi foundation spent a lot of time working on the Broadcom 2837 used in the Raspberry Pi 3, and found enough space in 32 kB to include SD boot, eMMC boot, SPI boot, NAND flash, FAT filesystem, GUID and MBR partitions, USB device, USB host, Ethernet device, and mass storage device support. You can now boot the Raspberry Pi 3 from just about anything.

The documentation for these new boot modes goes over the process of putting an image on a USB thumb drive. It’s not too terribly different from putting an image on an SD card, and the process will be streamlined somewhat in the next release of rpi-update. Some USB thumb drives do not work, but as long as you stick with a SanDisk or Samsung, you should be okay.

More interesting than USB booting is the ability for the Pi 3 to boot over the network. Booting over a network is nothing new – the Apple II could do it uphill both ways in the snow, but the most common use for the Pi is a dumb media player that connects to all your movies on network storage. With network booting, you can easily throw a Pi on a second TV and play all that media in a second room. Check out the network booting tutorial here.

Filed under: Raspberry Pi, slider