See What Everyone Was Tweeting Ten Years Ago

The content below is taken from the original ( See What Everyone Was Tweeting Ten Years Ago), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you’re tired of the hell dimension that is present-day Twitter, internet renaissance man Andy Baio has the link for you: here’s what your Twitter feed would look like ten years ago today (if you followed all the people you follow now). Of course, you can only see tweets from people who were already on Twitter in…

Read more…

Sony shrinks its Digital Paper tablet down to a more manageable 10 inches

The content below is taken from the original ( Sony shrinks its Digital Paper tablet down to a more manageable 10 inches), to continue reading please visit the site. Remember to respect the Author & Copyright.

I had a great time last year with Sony’s catchily-named DPT-RP1, an e-paper tablet that’s perfect for reading PDFs and other big documents, but one of my main issues was simply how big the thing is. Light and thin but 13 inches across, the tablet was just unwieldy. Heeding (I assume) my advice, Sony is putting out a smaller version and I can’t wait to try it out.

At the time, I was comparing the RP1 with the reMarkable, a crowdfunded rival that offers fantastic writing ability but isn’t without its flaws. Watch this great video I made:

The 10-inch DPT-CP1 has a couple small differences from its larger sibling. The screen has a slightly lower resolution but should be the same PPI — it’s more of a cutout of the original screen than a miniaturization. And it’s considerably lighter: 240 grams to the 13-inch version’s 350. Considering the latter already felt almost alarmingly light, this one probably feels like it’ll float out of your hands and enter orbit.

More important are the software changes. There’s a new mobile app for iOS and Android that should make loading and sharing documents easier. A new screen-sharing mode sounds handy but a little cumbrous — you have to plug it into a PC and then plug the PC into a display. And PDF handling has been improved so that you can jump to pages, zoom and pan, and scan through thumbnails more easily. Limited interaction (think checkboxes) is also possible.

There’s nothing that addresses my main issue with both the RP1 and the reMarkable: that it’s a pain to do anything substantial on the devices, such as edit or highlight in a document, and if you do, it’s a pain to bring that work into other environments.

So for now it looks like the Digital Paper series will remain mostly focused on consuming content rather than creating or modifying it. That’s fine — I loved reading stuff on the device, and mainly just wished it were a bit smaller. Now that Sony has granted that wish, it can get to work on the rest.

IBM built a handheld counterfeit goods detector

The content below is taken from the original ( IBM built a handheld counterfeit goods detector), to continue reading please visit the site. Remember to respect the Author & Copyright.

Just a month after IBM announced it's leveraging the blockchain to guarantee the provenance of diamonds, the company has revealed new AI-based technology that aims to tackle the issue of counterfeiting — a problem that costs $1.2 trillion globally….

This Computer Is As Quiet As The Mouse

The content below is taken from the original ( This Computer Is As Quiet As The Mouse), to continue reading please visit the site. Remember to respect the Author & Copyright.

[Tim aka tp69] built a completely silent desktop computer. It can’t be heard – at all. The average desktop will have several fans whirring inside – cooling the CPU, GPU, SMPS, and probably one more for enclosure circulation – all of which end up making quite a racket, decibel wise. Liquid cooling might help make it quieter, but the pump would still be a source of noise. To completely eliminate noise, you have to get rid of all the rotating / moving parts and use passive cooling.

[Tim]’s computer is built from standard, off-the-shelf parts but what’s interesting for us is the detailed build log. Knowing what goes inside such a build, the decisions required while choosing the parts and the various gotchas that you need to be aware of, all make it an engaging read.

It all starts with a cubic aluminum chassis designed to hold a mini-ITX motherboard. The top and side walls are essentially huge extruded heat sinks designed to efficiently carry heat away from inside the case. The heat is extracted and channeled away to the side panels via heat sinks embedded with sealed copper tubing filled with coolant fluid. Every part, from the motherboard onwards, needs to be selected to fit within the mechanical and thermal constraints of the enclosure. Using an upgrade kit available as an enclosure accessory allows [Tim] to use CPUs rated for a power dissipation of almost 100 W. This not only lets him narrow down his choice of motherboards, but also provides enough overhead for future upgrades. The GPU gets a similar heat extractor kit in exchange for the fan cooling assembly. A fanless power supply, selected for its power capacity as well as high-efficiency even under low loads, keeps the computer humming quietly, figuratively.

Once the computer was up and running, he spent some time analysing the thermal profile of his system to check if it was really worth all the effort. The numbers and charts look very promising. At 100% load, the AMD Ryzen 5 1600 CPU levelled off at 60 ºC (40 ºC above ambient) with no effect on performance. And the outer enclosure temperature was 42 ºC — warm, but not dangerous. Of course, performance hinges on the ambient temperature, so you have to start getting careful when that goes up.

Getting such silence comes at a price – some may consider it quite steep. [Tim] spent about A$3000 building this whole system, thanks in part to high GPU prices driven by demand from bitcoin mining. But cost is a relative measure. He’s spent less on this system compared to several of his earlier projects and it lets him enjoy the sounds of nature instead of whiny cooling fans. Some would suggest a pair of ear buds would have been a super cheap solution, but he wanted a quiet computer, not something to cancel out every other sound in his surroundings.

22 essential security commands for Linux

The content below is taken from the original ( 22 essential security commands for Linux), to continue reading please visit the site. Remember to respect the Author & Copyright.

There are many aspects to security on Linux systems – from setting up accounts to ensuring that legitimate users have no more privilege than they need to do their jobs. This is a look at some of the most essential security commands for day-to-day work on Linux systems.

To read this article in full, please click here

(Insider Story)

The Kata Containers project launches version 1.0 of its lightweight VMs for containers

The content below is taken from the original ( The Kata Containers project launches version 1.0 of its lightweight VMs for containers), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Kata Containers project, the first non-OpenStack project hosted by the OpenStack Foundation, today launched version 1.0 of its system for running isolated container workloads. The idea behind Kata Containers, which is the result of the merger of two similar projects previously run by Intel and Hyper, is to offer developers a container-like experience with the same security and isolation features of a more traditional virtual machine.

To do this, Kata Containers implements a very lightweight virtual machine (VM) for every container. That means every container gets the same kind of hardware isolation that you would expect from a VM, but without the large overhead. But even though Kata Containers don’t fit the standard definition of a software container, they are still compatible with the Open Container Initiative specs and the container runtime interface of Kubernetes. While it’s hosted by the OpenStack Foundation, Kata Containers is meant to be platform- and architecture-agnostic.

Intel, Canonical and Red Hat have announced they are putting some financial support behind the project, and a large number of cloud vendors have announced additional support, too, including 99cloud, Google, Huawei, Mirantis, NetApp and SUSE.

With this version 1.0 release, the Kata community is signaling that the merger of the Intel and Hyper technology is complete and that the software is ready for production use.

Review: 55 BBC Micro Books on CD ROM

The content below is taken from the original ( Review: 55 BBC Micro Books on CD ROM), to continue reading please visit the site. Remember to respect the Author & Copyright.

Introduction Christopher Dewhurst, the Drag ‘n Drop Publications editor, has released an excellent compilation of 55 BBC Micro Books, all together on one CD ROM. For any RISC OS user who wants to have a go at BASIC programming these are an essential buy. Although biased towards the BBC Micro, quite a few of the […]

DNS in the cloud: Why and why not

The content below is taken from the original ( DNS in the cloud: Why and why not), to continue reading please visit the site. Remember to respect the Author & Copyright.

As enterprises consider outsourcing their IT infrastructure, they should consider moving their public authoritative DNS services to a cloud provider’s managed DNS service, but first they should understand the advantages and disadvantages.

To read this article in full, please click here

(Insider Story)

List of new options in Windows 10 Settings

The content below is taken from the original ( List of new options in Windows 10 Settings), to continue reading please visit the site. Remember to respect the Author & Copyright.


The most anticipated Windows 10 v1803 April 2018 Update was released recently and brought in a lot of new features. If you’ve been following the update, you might have already tried out a few of them. All Windows updates bring […]

This post List of new options in Windows 10 Settings is from TheWindowsClub.com.

Finnish university’s online AI course is open to everyone

The content below is taken from the original ( Finnish university’s online AI course is open to everyone), to continue reading please visit the site. Remember to respect the Author & Copyright.

Helsinki University in Finland has launched a course on artificial intelligence — one that's completely free and open to everyone around the world. Unlike Carnegie Mellon's new undergrad degree in AI, which the institution created to train future ex…

People with Dementia can DRESS Smarter

The content below is taken from the original ( People with Dementia can DRESS Smarter), to continue reading please visit the site. Remember to respect the Author & Copyright.

People with dementia have trouble with some of the things we take for granted, including dressing themselves. It can be a remarkably difficult task involving skills like balance, pattern recognition inside of other patterns, ordering, gross motor skill, and dexterity to name a few. Just because something is common, doesn’t mean it is easy. The good folks at NYU Rory Meyers College of Nursing, Arizona State University, and MGH Institute of Health Professions talked with a caregiver focus group to find a way for patients to regain their privacy and replace frustration with independence.

Although this is in the context of medical assistance, this represents one of the ways we can offload cognition or judgment to computers. The system works by detecting movement when someone approaches the dresser with five drawers. Vocal directions play and green lights on the top drawer come on when it is time to open the drawer and don the clothing inside. Once the system detects the article is being worn appropriately, the next drawer’s light comes on. A camera seeks a matrix code on each piece of clothing, and if it times out, a caregiver is notified. There is no need for an internet connection, nor should one be given.

Currently, the system has a good track record with identifying the clothing, but it is not proficient at detecting when it is worn correctly, which could lead to frustrating false alarms. Matrix codes seemed like a logical choice since they could adhere to any article of clothing and get washed repeatedly but there has to be a more reliable way. Perhaps IR reflective threads could be sewn into clothing with varying stitch lengths, so the inside and outside patterns are inverted to detect when clothing is inside-out. Perhaps a combination of IR reflective and absorbing material could make large codes without being visible to the human eye. How would you make a machine-washable, machine-readable visual code?

Helping people with dementia is not easy but we are not afraid to start, like this music player. If matrix codes and barcodes get you moving, check out this hacked scrap-store barcode scanner.

Thank you, [Qes] for the tip.

Detect malicious activity using Azure Security Center and Azure Log Analytics

The content below is taken from the original ( Detect malicious activity using Azure Security Center and Azure Log Analytics), to continue reading please visit the site. Remember to respect the Author & Copyright.

This blog post was authored by the Azure Security Center team.

We have heard from our customers that investigating malicious activity on their systems can be tedious and knowing where to start is challenging. Azure Security Center makes it simple for you to respond to detected threats. It uses built-in behavioral analytics and machine learning to detect threats and generates alerts for the attempted or successful attacks. As discussed in a previous post, you can explore the alerts of detected threats through the Investigation Path, which uses Azure Log Analytics to show the relationship between all the entities involved in the attack. Today, we are going to explain to you how Security Center’s ability to detect threats using machine learning and Azure Log Analytics can help you keep pace with rapidly evolving cyberattacks.

Investigate anomalies on your systems using Azure Log Analytics

One method is to look at the trends of processes, accounts, and computers to understand when anomalous or rare processes and accounts are run on computers, which can indicate potentially malicious or unwanted activity. Run the query below against your data and note that whatever comes up has been anomalous or rare over the last 30 days. The query shows the processes run by computers and account groups over a week to see what is new, and compares that to the behavior over the last 30 days. This technique can be applied to any of the logs provided in the Advanced Azure Log Analytics pane. In this example, I am using the SecurityEvent table.

Please note that the items in bold are an example of filtering your own results for noise and are not specifically required. The reason I have included them is to make it clear that there will be certain items, specific to your environment, that are not run often and show up as anomalous when using this or similar queries; these may need manual exclusion to help focus the investigation. Please build your own list of “known good” items to filter out based on your environment.

let T = SecurityEvent
| where TimeGenerated >= ago(30d)
| extend Date = startofday(TimeGenerated)
| extend Process = ProcessName
| where Process != ""
| where Process != "-"
| where Process !contains "\\Windows\\System"
| where Process !contains "\\Program Files\\Microsoft\\"
| where Process !contains "\\Program Files\\Microsoft Monitoring Agent\\"
| where Process !contains "\\ProgramData\\"
| where Process !contains "\\Windows\\WinSxS\\"
| where Process !contains "\\Windows\\SoftwareDistribution\\"
| where Process !contains "\\mpsigstub.exe"
| where Process !contains "\\WindowsAzure\\GuestAgent"
| where Process !contains "\\Windows\\Servicing\\TrustedInstaller.exe"
| where Process !contains "\\Windows\\Microsoft.Net\\"
| where Process !contains "\\Packages\\Plugins\\"
| project Date, Process, Computer, Account
| summarize count() by Date, Process, Computer, Account
| sort by count_ desc nulls last;
T
| evaluate activity_counts_metrics(Process, Date, startofday(ago(30d)), startofday(now()), 1d, Process, Computer, Account)
| extend WeekDate = startofweek(Date)
| project WeekDate, Date, Process, PotentialAnomalyCount = new_dcount, Account, Computer
| join kind= inner
(
    T
    | evaluate activity_engagement(Process, Date, startofday(ago(30d)), startofday(now()),1d, 7d)
    | extend WeekDate = startofweek(Date)
    | project WeekDate, Date, Distribution1day = dcount_activities_inner, Distribution7days = dcount_activities_outer, Ratio = activity_ratio*100
)
on WeekDate, Date
| where PotentialAnomalyCount == 1 and Ratio < 100
| project WeekDate, Date, Process, Account, Computer, PotentialAnomalyCount, Distribution1day, Distribution7days, Ratio
| render barchart kind=stacked

When the above query is run, you will receive a table similar to the one below, although the dates and referenced processes will be different. In this example, we can see when a specific process, computer, and account combination has not been seen before, based on week-over-week data for the last 30 days. Specifically, we can see that portping.exe showed up in the week of 4/15, on the specific date of 4/16, for the first time in 30 days.

Table 1

You can also view the results in chart mode and change the pivot of the bar chart as seen below. For example, use the drop-down and pivot on Computer instead of Process and see the computers that launched this process.


Hover to see the specific computer and how many processes showed up for the first time.

Potential Anomaly Count

In the query above, we look at the items that run across more than one day, which is indicated by a ratio of less than 100. This is a way to parse the data and more easily understand the scope of when a process runs on a given computer. By looking at rare items that have run across multiple days, you can potentially detect manual activity by an attacker who is probing your environment for information that will further increase their attack surface.

We can alternatively look at the processes that ran on only 1 day of the last 30 days, which can be done by choosing only a ratio of 100 in the above query; simply change the related line to this:

| where PotentialAnomalyCount == 1 and Ratio == 100

The above change to the query results in a different set of hits for rare processes and may indicate usage of a scripted attack to rapidly gather data from a single system or several systems, or may just indicate attacker activity on a single day.

Lastly, we see several interactive processes run, which indicate an interactive logon, for example the SQL Management Studio process Ssms.exe. Potentially, this is an unexpected logon to this system, and this query can help expose this type of anomaly in addition to unexpected processes.


Table 2
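
If you want to chase those interactive logons directly rather than inferring them from the processes they spawn, a query along the following lines can list them. This is a minimal sketch, not part of the original article, and it assumes the standard SecurityEvent schema, where EventID 4624 with LogonType 10 marks RemoteInteractive (RDP) logons:

SecurityEvent
| where TimeGenerated >= ago(30d)
// EventID 4624 = successful logon; LogonType 10 = RemoteInteractive (RDP)
| where EventID == 4624 and LogonType == 10
| project TimeGenerated, Computer, Account, IpAddress, LogonType
| sort by TimeGenerated desc nulls last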

Once you have identified a computer or account you want to investigate, you can then dig in further on the full data for that computer. This can be done by opening a secondary query window and filtering only on the computer or account that you are interested in. Examples of this would be as follows. At that point, you can see what occurred around the anomalous or rare process execution time. We will select the portping.exe process and narrow the scope of the dates to allow for a closer look.  From the table above, we can see the Date[UTC] circled below. This date is rounded to the nearest day for the query to work properly, but this along with the computer and account used should allow us to focus in on the timeframe of when this was run on the computer.


Table 3

To focus in on the timeframe, we will use that date to provide our single day range. We can pass the range into the query by using standard date formats indicated below. Click on the + highlighted in yellow and paste the below query into your window.

In the results, the distinct time is marked in red. We will use that in a subsequent query.

SecurityEvent
| where TimeGenerated >= datetime(2018-04-16 00:00:00.000) and TimeGenerated <= datetime(2018-04-16 23:59:59.999)
| where Computer contains "Contoso-2016" and Account contains "ContosoAdmin"
| where Process contains "portping.exe"
| project TimeGenerated, Computer, Account, Process, CommandLine


Now that we have the exact time, we can look at activity occurring with smaller time frames around that date. We usually use +5 minute and -5 minute blocks. For example:

SecurityEvent
| where TimeGenerated >= datetime(2018-04-16 19:10:00.000) and TimeGenerated <= datetime(2018-04-16 19:21:00.000)
| where Computer contains "Contoso-2016" and Account contains "ContosoAdmin"
//| where Process contains "portping.exe"
| project TimeGenerated, Computer, Account, Process, CommandLine

In the results below, we can easily see that someone was logged into the system via RDP. We know this because RDPClip.exe is being launched, which indicates they were copying and pasting between their host and the remote system.

Additionally, we see after the portping.exe activity that they are attempting to modify accounts or password functionality with the command netplwiz.exe or control userpasswords2.
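
To check whether those account or password changes actually landed, you can hunt for account-management events in the same window. The following is a minimal sketch under the assumption that the standard SecurityEvent schema is in use (4720 = user account created, 4724 = password reset attempt, 4738 = user account changed):

SecurityEvent
| where TimeGenerated >= datetime(2018-04-16 19:10:00.000) and TimeGenerated <= datetime(2018-04-16 19:21:00.000)
| where Computer contains "Contoso-2016"
// 4720 = account created, 4724 = password reset attempt, 4738 = account changed
| where EventID in (4720, 4724, 4738)
| project TimeGenerated, Computer, EventID, Activity, TargetAccount, SubjectAccount
| sort by TimeGenerated asc nulls last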

They are then running Procmon.exe to see what other processes are running on the system. Generally this is done to understand what is available to the attacker to further exploit.


At this point, this machine should be taken offline and investigated more deeply to understand the extent of the compromise.

Find hidden techniques commonly deployed by attackers using Azure Log Analytics

Most security experts have seen the techniques attackers use to hide the usage of commands on a system to avoid detection. While there are certainly methods to avoid even showing up on the command line, the obfuscation technique used below is regularly used by various levels of attackers.

Below we will decode a base64 encoded string in the command line data and look for common PowerShell methods that are used in attacks.

SecurityEvent
| where TimeGenerated >= ago(30d)
| where Process contains "powershell.exe" and CommandLine contains " -enc"
| extend b64 = extract("[A-Za-z0-9|+|=|/]{30,}", 0, CommandLine)
| extend utf8_decode = base64_decodestring(b64)
| extend decode = replace("\x00", "", utf8_decode)
| where decode contains 'Gzip' or decode contains 'IEX' or decode contains 'Invoke' or decode contains '.MemoryStream'
| summarize by Computer, Account, decode, CommandLine


Table 4

As you can see, the results provide you with details about what was in the encoded command line and potentially what an attacker was attempting to do.
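
The keyword list is easy to extend. As a minimal sketch (the extra keywords here are illustrative assumptions rather than part of the original query, so tune them to your environment), the same decode can cast a wider net for common download cradles:

SecurityEvent
| where TimeGenerated >= ago(30d)
| where Process contains "powershell.exe" and CommandLine contains " -enc"
| extend b64 = extract("[A-Za-z0-9|+|=|/]{30,}", 0, CommandLine)
| extend utf8_decode = base64_decodestring(b64)
| extend decode = replace("\x00", "", utf8_decode)
// wider net than the keywords above: common download-cradle markers
| where decode contains "DownloadString" or decode contains "Net.WebClient" or decode contains "http"
| summarize by Computer, Account, decode, CommandLine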

You can now use the details in the above query to see what was running during the same time by adding the time and computer to the same table. This allows you to easily connect it with other activity on the system; the process for doing so is described in detail just above. One thing to note is that you can add these filters automatically by expanding the event with the arrow in the first column of the row. Then hover over TimeGenerated and click the + button.


Time Generated

This will add in an entry like so into your query window:

| where TimeGenerated == todatetime('2018-04-24T02:00:00Z')

Modify the range of time like this:

SecurityEvent
| where TimeGenerated >= ago(30d)
| where Computer == "XXXXXXX"
| where TimeGenerated >= todatetime('2018-04-24T02:00:00Z')-5m and TimeGenerated <= todatetime('2018-04-24T02:00:00Z')+5m
| project TimeGenerated, Account, Computer, Process, CommandLine, ParentProcessName
| sort by TimeGenerated asc nulls last


Table 5

Lastly, connect this to your various alerts using the join to alerts from the last 30 days to see what alerts are associated:

SecurityEvent
| where TimeGenerated >= ago(30d)
| where Process contains "powershell.exe"  and CommandLine contains " -enc"
| extend b64 = extract( "[A-Za-z0-9|+|=|/]{30,}", 0,CommandLine)
| extend utf8_decode=base64_decodestring(b64)
| extend decode =  replace ("\x00","", utf8_decode)
| where decode contains 'Gzip' or decode contains 'IEX' or decode contains 'Invoke' or decode contains '.MemoryStream'
| summarize by TimeGenerated, Computer=toupper(Computer), Account, decode, CommandLine
| join kind= inner (
      SecurityAlert | where TimeGenerated >= ago(30d)
      | extend ExtProps = parsejson(ExtendedProperties)
      | extend Computer = toupper(tostring(ExtProps["Machine Name"]))
      | project Computer, AlertName, Description
) on Computer


Table 6

Security Center uses Azure Log Analytics to help you detect anomalies in your data as well as expose common hiding techniques used by attackers. By exploring more of your data through directed queries like the ones presented above, you may find anomalies that are either malicious or benign, but in doing so you will have made your environment more secure and gained a better understanding of the activity going on across the systems and resources in your subscription.

Learn more about Azure Security Center

To learn more about Azure Security Center’s detection capabilities, visit our threat detection documentation.

To learn more about Azure Advanced Threat Protection, visit our threat protection documentation.

To learn more about integration with Windows Defender Advanced Threat Protection, visit our threat protection integration documentation.

To stay up-to-date with the latest announcements on Azure Security Center, read and subscribe to our blog.

4 Best Practices to Get Your Cloud Deployments GDPR Ready

The content below is taken from the original ( 4 Best Practices to Get Your Cloud Deployments GDPR Ready), to continue reading please visit the site. Remember to respect the Author & Copyright.

Get Your Cloud Deployments GDPR Ready

With GDPR coming into force later this month, security and compliance will be the top-most priority for any cloud deployment that contains personal data of EU citizens.

While leading providers have moved to make their platforms and services compliant, ensuring compliance requires more than just technology. Companies will also need to invest time and resources to prepare internal cloud teams to correctly and effectively design secure, auditable, and traceable cloud solutions that also meet the demands of the business. Here are 4 steps to get your cloud deployments ready for GDPR compliance.

#1 Make sure your cloud partners are GDPR compliant

In the cloud, the entire security framework operates under a shared responsibility model between the provider and the customer.

From an infrastructure perspective, the cloud service provider is responsible for providing a secure cloud environment, from their physical presence to the underlying resources that provide compute, storage, database, and network services.

Customers who import data and utilize the provider’s services are responsible for using them to design and implement their own security mechanisms such as access control, firewalls (both at the instance and network levels), encryption, logging, and monitoring.

Under GDPR, both customers (as controllers who define how and why personal data is collected) and cloud providers (as processors who manage, process, or store personal data on behalf of the controller) must be compliant.

To date, AWS has announced its compliance, and Google Cloud and Microsoft Azure have announced their commitment to being GDPR compliant by the May 25 deadline.

Enterprises should make sure that their cloud partners and any third party that processes, manages, or stores personal data of EU citizens on their behalf have the proper compliance and controls in place.

#2 Audit your systems for personal data

Personally identifiable information (PII) as defined by GDPR includes a range of data types, from names, email addresses, and phone numbers, to photos, genetic data, and IP addresses. But how much of the personal data that you store is actually required for your business?

GDPR is an opportunity to take a critical look at the types of data you collect and why. Use cloud services like AWS’s Amazon Macie to audit and assess the type of data currently in your data stores and determine which ones will be impacted by GDPR. Do they contain data that is outdated or personal data that is unnecessary for your business? Take this opportunity to redefine your processes for the type of data that you will collect going forward.

#3 Put proactive security services in place

A cloud security breach is more than just the loss of data. Exposed S3 buckets and other high-profile breaches that left millions of pieces of PII exposed in 2017 could prove fatal for a business under the new regulations. Under GDPR, a breach that results in exposure of personal data could result in fines of up to 4% of annual turnover or €20 million.

GDPR is an opportunity for companies to implement broader, more comprehensive cloud security and data protection in your deployments at every level. Amazon Web Services, Microsoft Azure, and Google Cloud Platform each have a range of services in place to support your security and compliance requirements. These include:

  • Access: Identity and access management (IAM) mechanisms allow you to provide granular levels of permissions to any given user, group, and service. Multi-factor authentication should also be used for any user with an elevated set of permissions.
  • Encryption: Encryption should be used where possible for any data at rest and in transit. Encryption in transit should be used when transferring data to and from the cloud and when moving data between internal cloud services using protocols such as TLS (Transport Layer Security). The leading cloud service providers offer specific services that allow you to manage data encryption: AWS’s Key Management Service, Microsoft Azure’s Key Vault, and Google Cloud Platform’s Cloud Key Management Service.
  • Monitoring: Use monitoring services to identify changes in the environment, security loopholes, noncompliant resources, malicious activity, irregular trends, or brute force attacks. AWS has a range of services including CloudTrail and Amazon CloudWatch, Azure has Monitor and the Azure Security Center, while Google offers Stackdriver and Cloud Security Scanner.
  • Threat detection: Specific services that analyze log data—for data flows, events, DNS—are designed to identify threats. New “intelligent” services such as AWS GuardDuty assess log data against multiple security feeds to detect suspicious activity in traffic, malicious URLs, etc.

#4 Empower teams for compliance

A regulation as far-reaching as GDPR will impact your organization at the technology, process, and people levels. A shared understanding by your teams of the regulation and how it impacts your organization from the point of view of technology and the business will be an essential component of your compliance efforts.

  • Make sure your planning addresses your GDPR training needs for both the general concepts and the required skills and experience that teams will need to implement the appropriate levels of compliance and security in your cloud services.
  • Start by instilling a culture of transparency around adherence to security best practices in each organizational unit that touches any cloud initiative.
  • Identify any skill gaps and implement measurable, performance-driven training plans to keep skill development on track.
  • Create a continuous training strategy to ensure that team knowledge and skills stay ahead of the next disruption and that teams are up to date with the latest vendor releases, privacy policies, and best practices.

A best practices approach will be key to get your cloud deployments GDPR ready and to prepare for any security and compliance challenges that your business will face.

Google Maps Platform now integrated with the GCP Console

The content below is taken from the original ( Google Maps Platform now integrated with the GCP Console), to continue reading please visit the site. Remember to respect the Author & Copyright.

Thirteen years ago, the first Google Maps mashup combined Craigslist housing data on top of our map tiles—before there was even an API to access them. Today, Google Maps APIs are some of the most popular on the internet, powering millions of websites and apps generating billions of requests per day.

Earlier this month, we introduced the next generation of our Google Maps business—Google Maps Platform—that included a series of updates to help you take advantage of new location-based features and products. We simplified our APIs into three product categories—Maps, Routes and Places—to make it easier for you to find, explore and add new features to your apps and sites. In addition, we merged our pricing plans into one pay-as-you-go plan for our core products. With this new plan, you get the first $200 of monthly usage for free, so you can try the APIs risk-free.

In addition, Google Maps Platform includes simplified products, tighter integration with Google Cloud Platform (GCP) services and tools, as well as a single pay-as-you-go offering. By integrating with GCP, you can scale your business and utilize location services as you grow—we no longer enforce usage caps, just like any other GCP service.

You can also manage Google Maps Platform from Google Cloud Console—the same interface you already use to manage and monitor other GCP services. This integration provides a more tailored view to manage your Google Maps Platform implementation, so you can monitor individual API usage, establish usage quotas, configure alerts for more visibility and control, and access billing reports. All Google Maps Platform customers now receive free Google Maps Platform customer support, which you can also access through the GCP Console.

Check out the Google Maps Platform website where you can learn more about our products and also explore the guided onboarding wizard from the website to the console. We can’t wait to see how you will use Google Maps Platform with GCP to bring new innovative services to your customers.

How to Deploy An Azure Virtual Machine (May 2018)

The content below is taken from the original ( How to Deploy An Azure Virtual Machine (May 2018)), to continue reading please visit the site. Remember to respect the Author & Copyright.

This post will show you how you can quickly deploy an Azure virtual machine for evaluation purposes.

 

 

Before You Continue

It is actually very easy to next-next-next your way through the process of building a virtual machine in Azure. The “wizard” has been designed for newbies to get something up and running quickly. However, the results are not what anyone would recommend for production. Every next-next-next deployment will produce a virtual machine that has its own network security rules, public IP address with direct RDP/SSH access from the Internet, and so on.

In the training that I deliver, I strongly urge people to pre-create things such as their network, a diagnostics storage account, and remote/on-premises connectivity; then, when they create virtual machines in the Azure Portal, they tweak the wizard to use the already-created resources.

In this post, I will walk you through the default process at a high level. Note that Microsoft is constantly renaming and moving things around in the Azure Portal, so things might have changed since this post was written.

Starting Off

Log into the Azure Portal and click the button (highlighted below) in the top-right corner to make sure you are working in the correct customer tenant and Azure subscription.

Switch Customer Tenant and Azure Subscription in the Azure Portal [Image Credit: Aidan Finn]

Now you will start the process of creating a virtual machine. Click Create A Resource in the top-left corner to open the New blade. If you click Compute, the results are filtered for things that use processors in Azure, such as virtual machines and Service Fabric. You can search for an operating system image or an operating system/application image from the Azure Marketplace. You can also click See All to browse the Azure Marketplace. In my example, I am selecting Windows Server 2016 Datacenter.

Create Virtual Machine Blade

A Create Virtual Machine blade opens. Here you will go through a number of steps (blades) to deploy your new virtual machine; the actual blades will depend on what you selected to deploy. For example, a network virtualization appliance such as a Check Point firewall will have some configurations that are specific to it. A virtual machine running SQL Server might allow advanced configurations for the SQL Server workload. Typically, you will find the following blades:

  • Basics: Start the process of creating the virtual machine.
  • Size: Choose the Azure virtual machine series and size.
  • Settings: Configure storage, networking, and more of the virtual machine.
  • Summary: View a summary of your configuration and confirm the creation.
The Standard Create Virtual Machine Blade [Image Credit: Aidan Finn]

Basics Blade

In this blade, you will configure some naming and location settings, as well as setting up the default local administrator account. The following settings should be configured:

  • Name: This is the name of the Azure virtual machine. This will be the name of the Azure resource and the computer account name.
  • VM Disk Type: This can be HDD (Standard) or SSD (Premium) and configures the format of the OS disk.
  • User Name: This is the name of the default local administrator account. Note that you cannot use administrator, admin, root, and so on.
  • Password and Confirm Password: Enter the password, which must be between 12 and 123 characters long, and must have 3 of the following – 1 lowercase character, 1 uppercase character, 1 number and 1 special character (not \ or -). Linux gives you the option to use an SSH key instead.
  • Subscription: Confirm that you are creating the virtual machine in the correct subscription.
  • Resource Group: Either create a new resource group for the virtual machine (and all the resources that will be created) or select an existing one that you have rights to.
  • Location: Select the Azure region that the machine will be deployed into.

I want to highlight one setting with the title of Save Money. You can save up to 40 percent on the cost of a Windows virtual machine if you have the Software Assurance benefit of Hybrid Use Benefit (HUB). If you’re not sure about this, then please confirm with your administrators, resellers, distributor, or LSP/LAR that you have the rights to click the button. You don’t want to be hit by an auditor for misusing this button!

Click OK when you are ready to move onto the Size blade.

Creating a New Azure Virtual Machine — Basics Blade [Image Credit: Aidan Finn]

Size Blade

A blade called Choose A Size appears next; this blade recently went through an upgrade:

  • A search tool
  • Filtering based on Compute Type, Disk Type, and min-max vCPUs
Searching for and Selecting an Azure Virtual Machine Size [Image Credit: Aidan Finn]

Note that, at the time of writing, the Temporary Storage (temp drive) column was misleadingly labeled as Local SSD. The size of the OS drive is either 30GB or 128GB depending on which OS image you selected and has nothing to do with what you select here.

Search for and pick an image size. Click Select to continue to the Settings blade.

Settings Blade

The Settings blade is the most detailed one in the standard set of blades for creating a virtual machine. It is so detailed that it has a scroll bar to get you from top to bottom. This is also where a lot of things are dumbed down for you by supplying defaults; these are the defaults that I teach people to undo in my classes. We’ll start with some availability, storage, and networking stuff.

  • Availability Zone: If you want to spread virtual machines around different availability zones (1 or more data centers per zone), then you can choose this option if it is available in the selected Azure region (Location from the Basics blade).
  • Availability Set: If you want to spread your virtual machines around different fault domains and update domains of a compute cluster (in a single data center), then you can create or select an availability set. You cannot do this afterward without recreating the machine from its disk(s).
  • Use Managed Disks: Ideally, you will. However, note that it is difficult to move managed disks to another resource group or subscription. Otherwise, you will create unmanaged disks (fewer management features) in a general purpose storage account. Your previous choice of SSD/HDD will configure the disk tier. You will add any data disks to the virtual machine after it is created.
  • Virtual Network: By default, a new virtual network will be created. You can select an existing one.
  • Subnet: By default, the only subnet of the default (new) virtual network will be selected. You can choose a different virtual network/subnet combination.
  • Public IP Address: A new PIP will be created for connecting to this machine from the Internet.
  • Network Security Group (Firewall): A new NSG will be connected to the NIC of the virtual machine, providing Layer-4 security.
  • Extensions: None are added by default but you can add some extensions. Note that extensions can take quite some time to install and I have found that being too ambitious will cause the VM creation to timeout and fail.

My first note on this blade so far: Availability Zone and Availability Set are mutually exclusive. You can do one or the other, or not do either.

My second note: I normally:

  • Create the VNet and subnet myself and then select them.
  • Don’t associate an NSG with the NIC but associate it with the subnet, treating each subnet as a security zone.
Settings of a New Azure Virtual Machine — Part 1 [Image Credit: Aidan Finn]

If you scroll down, you’ll find more settings to configure:

  • Auto-Shutdown: This is a nice setting for demo/lab machines because shutting down (deallocating) a virtual machine when it’s not needed reduces the per-minute charges for virtual machines. In production systems, you should use Azure Automation instead. If you enable this setting, then you can select a time of day to shut down this machine (including time zone) and choose if you want to send a notification email before the shutdown (optional skip/delay actions).
  • Boot Diagnostics: This, enabled by default, captures a BMP screenshot of the machine’s console and stores it in a storage account. A serial log is generated from the virtual machine with some guest OS information. It also enables serial console access to the virtual machine without network connectivity. A general purpose storage account is created for you. However, I normally recommend creating one for each resource group beforehand and selecting it.
  • Guest OS Diagnostics: This is off by default but I recommend turning it on. It also requires a general purpose storage account to store performance metrics in Table storage. One will be created for you but I recommend using the diagnostics storage account (see previous).
  • Managed Service Identity: This is a new feature that tells Azure to maintain an account for the virtual machine in Azure AD. This account can be used to authorize access to other Azure resources.
  • Backup: This is disabled by default but it should always be turned on for any virtual machine that has any value to you. By default, when enabled, it will create a Recovery Services Vault for you but you can select one that you already have.

When you are ready, click OK to save your Settings configuration.

Settings of a New Azure Virtual Machine — Part 2 [Image Credit: Aidan Finn]

Summary/Create Blade

The final blade does two things:

  1. Validates that everything you have selected is possible – as much as it can be before a deployment.
  2. Provides you with a summary that you can check before continuing.

There are two things to note.

  • There is a checkbox to give Microsoft permission to use and share your contact information. If you want a call from their “Inside Sales” group, then check this box.
  • You can download a JSON/ARM template for recreating this machine without the wizard. To be honest, this template is unusable without considerable editing.

Click OK and your request to create a new virtual machine will be submitted to Azure. If you click the Notifications icon (a bell) in the top right, you can track the progress of the deployment job. This can take anywhere from 2-15 minutes, for simple Windows or Linux machines, depending on the requested configuration.

The post How to Deploy An Azure Virtual Machine (May 2018) appeared first on Petri.

A $400 Microsoft Surface may be on the way

The content below is taken from the original ( A $400 Microsoft Surface may be on the way), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft’s always taken a premium approach to its Surface line, showing users what its operating system can do when run on top-of-the-line hardware. It’s a model that makes sense for a company with so many ties to third-party hardware manufacturers. But the line that’s been so focused on the high-end needs of “creative professionals” may be getting a budget addition in the near future.

According to a new report from Bloomberg, Microsoft is eyeing the end of the year to release a $400 version of the Surface designed to compete more directly with Apple’s ubiquitous tablet. Of course, many have tried and largely failed to take on the iPad — including Microsoft itself.

The company launched the Surface RT half a decade ago, without making much of a splash. These days, the tablet herd has thinned a bit, and Microsoft has established itself as a maker of premium first-party hardware.

The new device is said to sport a 10-inch screen, putting it in direct competition with Apple’s lower-priced iPad. At $400, Microsoft’s entry would run $70 more than the budget iPad’s starting price, but would still run considerably less than the $799 Surface Pro. And this being Microsoft, there are expected to be multiple SKUs. The devices reportedly won’t ship with a keyboard cover — one of the Surface’s biggest selling points — though they’ll all sport a kickstand and feature a USB C port for charging.

Microsoft, naturally, won’t respond to queries about the device, which is reportedly set for a release in the second half of this year. Given the company’s recent push with Windows 10S, the product could certainly make sense as part of the company’s push into low priced devices for the education market.

Boffins build smallest drone to fly itself with AI

The content below is taken from the original ( Boffins build smallest drone to fly itself with AI), to continue reading please visit the site. Remember to respect the Author & Copyright.

Hand-sized quadrotor packs a neural network

A team of computer scientists have built the smallest completely autonomous nano-drone that can control itself without the need for human guidance.…

Windows for Workgroups 3.11 in 2018

The content below is taken from the original ( Windows for Workgroups 3.11 in 2018), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s been 25 years since Microsoft released Windows for Workgroups 3.11. To take a trip back to the end of the 16-bit era of operating systems, [Yeo Kheng Meng] got WFW 3.11 running on a modern Thinkpad.

To make things difficult, a few goals were set for the project. Obviously, this wouldn’t be much fun in a virtual machine, so those were banned. A video driver would be needed, since WFW 3.11 only supports resolutions up to 640×480 in software. Some basic support for sound would be desirable. Finally, TCP/IP networking is possible in WFW 3.11, so networking hardware would allow access to the modern internet.

[Yeo Kheng Meng] accomplished all of these goals on a 2009 Thinkpad T400 and thoroughly documented the process. Some interesting hacks were required, including the design of a custom parallel port sound card based on the Covox Speech Thing. Accessing HTTPS web servers required a man-in-the-middle attack to strip SSL, since the SSL support in WFW 3.11 is ancient and blocked by most web servers today.

If you want your own WFW 3.11 laptop, the detailed instructions will get you there. [Yeo Kheng Meng] has also provided the hardware design for the sound card. You can watch a talk on the process after the break.

How to speak Linux

The content below is taken from the original ( How to speak Linux), to continue reading please visit the site. Remember to respect the Author & Copyright.

I didn’t even stop to imagine that people pronounced Linux commands differently until many years ago when I heard a coworker use the word “vie” (as in “The teams will vie for the title”) for what I’d always pronounced “vee I”. It was a moment that I’ll never forget. Our homogenous and somewhat rebellious community of Unix/Linux advocates seemed to have descended into dialects – not just preferences for Solaris or Red Hat or Debian or some other variant (fewer back in those days than we have today), but different ways of referring to the commands we knew and used every day.

The “problem” has a number of causes. For one thing, our beloved man pages don’t include pronunciation guidelines like dictionaries do. For another, Unix commands evolved with a number of different pronunciation rules. The names of some commands (like “cat”) were derived from words (like “concatenate”) and were pronounced as if they were words too (some actually are). Others derived from phrases like “cpio” which pull together the idea of copying (cp) and I/O. Others are simply abbreviations like “cd” for “change directory”. And then we have tools like “awk” that go in an entirely different direction by being named for the surnames of its creators (Alfred Aho, Peter Weinberger, and Brian Kernighan). No wonder there are no consistent rules for how to pronounce commands!

To read this article in full, please click here

DIY Pi Zero Pentesting Tool Keeps it Cheap

The content below is taken from the original ( DIY Pi Zero Pentesting Tool Keeps it Cheap), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s a story as old as time: hacker sees cool tool, hacker recoils in horror at the price of said tool, hacker builds their own version for a fraction of the price. It’s the kind of story that we love here at Hackaday, and has been the impetus for countless projects we’ve covered. One could probably argue that, if hackers had more disposable income, we’d have a much harder time finding content to deliver to our beloved readers.

[Alex Jensen] writes in to tell us of his own tale of sticker-shock-induced hacking, where he builds his own version of the Hak5 Bash Bunny. His version might be lacking a bit in the visual flair department, but despite coming in at a fraction of the cost, it does manage to pack in an impressive array of features.

This pentesting multitool can act as a USB keyboard, a mass storage device, and even an RNDIS Ethernet adapter. All in an effort to fool the computer you plug it into to let you do something you shouldn’t. Like its commercial inspiration, it features an easy to use scripting system to allow new attacks to be crafted on the fly with nothing more than a text editor. A rudimentary user interface is provided by four DIP switches and light up tactile buttons. These allow you to select which attacks run without needing to hook the device up to a computer first, and the LED lights can give you status information on what the device is doing.

[Alex] utilized some code from existing projects, namely PiBunny and rspiducky, but much of the functionality is of his own design. Detailed instructions are provided on how you can build your own version of this handy hacker gadget without breaking the bank.

Given how small and cheap it is, the Raspberry Pi is gaining traction in the world of covert DIY penetration testing tools. While it might not be terribly powerful, there’s something to be said for a device that’s cheap enough that you don’t mind leaving it at the scene if you’ve got to pull on your balaclava and make a break for it.

Openreach consults on how to shift 16m phone lines to VoIP by 2025

The content below is taken from the original ( Openreach consults on how to shift 16m phone lines to VoIP by 2025), to continue reading please visit the site. Remember to respect the Author & Copyright.

Eat your fibre, it’s good for you!

BT’s Openreach has opened a consultation with communications providers (CPs) over preparations for the monumental task of shifting 16 million phone lines to voice over IP by 2025.…

How Planner Synchronizes its Tasks to Outlook’s Calendar

The content below is taken from the original ( How Planner Synchronizes its Tasks to Outlook’s Calendar), to continue reading please visit the site. Remember to respect the Author & Copyright.

Planner Office 365

Making Tasks Appear in Your Calendar

Despite rolling out some recent upgrades (and yes, guest access is finally rolling out), the Planner team have left one of the biggest complaints about their product unanswered: no option exists to print off a plan, lists of tasks for a plan or a bucket within a plan, or details of the tasks assigned to an individual. It’s a strange oversight for an application designed to help people to organize work.

Of course, you could make the argument that people don’t need old-fashioned printouts to help them manage tasks because they can do this through the Planner browser and mobile clients (for iOS and Android).

But that’s ignoring the fact that some people find it easier to print stuff off and review items on paper. In any case, printing task lists is hardly an act of extraordinary software engineering. Plenty of examples exist within Microsoft for how to format and print task information, including the range of options available in OWA to print calendar and task data.

Planner Synchronization to Outlook

The solution now offered is to synchronize tasks with Outlook calendars. Outlook calendar synchronization is automatically enabled for all Office 365 tenants that have Planner as part of their subscription.

If you don’t want to allow users to synchronize Planner tasks to Outlook, you can disable the feature by following the instructions in this article. I hope Microsoft simplifies this aspect soon as the ability to enable and disable features should be controlled in the Office 365 Admin Center, just like the other applications do.

User Tasks from Planner

When a user decides to connect Planner to Outlook, they go to the My Tasks view in Planner and click the ellipsis menu to reveal the choice to Add “My Tasks” to Outlook calendar (Figure 1).


Figure 1: The option to add tasks to Outlook (image credit: Tony Redmond)

Click the button and then select Publish. Planner generates an iCalendar link (Figure 2).


Figure 2: Generating an iCalendar link (image credit: Tony Redmond)

The link looks something like this:

http://bit.ly/2wMnrGQ

Now click Add to Outlook. Planner launches OWA at the Calendar subscription window, copies the iCalendar link, and names the new calendar “Planner-My tasks” (Figure 3). Click Save to continue.

Figure 3: Adding the iCalendar link to Outlook (image credit: Tony Redmond)

OWA creates a new calendar to store the items synchronized from Planner in a folder in the user’s mailbox. Rather bizarrely, no check is done to figure out whether such a calendar folder already exists, but if you try to add the same link twice, Exchange ignores the request and doesn’t create a duplicate folder.

Synchronization Kicks In

With the link in place, Outlook populates the new calendar with details of the user’s “Not Started” and “In Progress” tasks fetched from Planner (Figure 4). You can’t filter the tasks, as the connector is configured to fetch all open tasks assigned to the user.

Planner Tasks OWA

Figure 4: Planner tasks in an OWA calendar (image credit: Tony Redmond)

A One-Way Affair

Synchronization is one-way from Planner to Outlook. New items do not synchronize at once because Outlook refreshes Planner data via the connector every three to four hours. You can’t force synchronization to happen. The information synchronized to the calendar for a task includes:

  • Date: Planner items are scheduled in all-day calendar slots, as Planner bases task assignment on days rather than hours. If a task has a due date, the calendar item is scheduled for that day. If it has both start and due dates, the item is scheduled for that period.
  • Location: None added, as Planner does not capture this data.
  • Progress: The status of the task in Planner.
  • Checklist: A note about how many checklist items exist and how many are complete.

Calendar entries created through the connector do not tell you which plan, or bucket within a plan, a task belongs to, but each entry has a link to Planner (Figure 5) to bring the user back to the original task, where they can make whatever changes are necessary. The changes are then synchronized back to Outlook.

Planner Link Task

Figure 5: Details of a calendar item synchronized from a Planner task (image credit: Tony Redmond)

Although the user cannot edit the calendar entry in Outlook, they can add a reminder.

The Flow Option

If you prefer not to use the iCalendar connector, you can also link Outlook to Planner using several standard Flow templates published by Microsoft. Flow is good at inserting items into a calendar, but it is less successful at tracking changes made to tasks, such as completing a task or changing its status. This is probably because Planner only publishes three “triggers” for connectors to fire on, none of which covers task edits.
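
For completeness, the same task data is also available directly through the Microsoft Graph API, which sidesteps both the refresh delay and the missing triggers if you are willing to write a little code. Here is a minimal sketch in Python, assuming the requests package and an OAuth access token that carries a suitable delegated Planner read permission (token acquisition is not shown); the field meanings follow the Graph v1.0 plannerTask resource:

# Minimal sketch: list a user's open Planner tasks via Microsoft Graph.
# Assumes an OAuth access token with a delegated Planner read permission
# has already been obtained; acquiring the token is not shown here.
import requests

ACCESS_TOKEN = "..."  # placeholder; supply a real token

response = requests.get(
    "https://graph.microsoft.com/v1.0/me/planner/tasks",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
response.raise_for_status()

for task in response.json().get("value", []):
    # percentComplete is 0 (not started), 50 (in progress), or 100 (complete).
    if task.get("percentComplete", 0) < 100:
        print(task.get("title"), "- due:", task.get("dueDateTime"))

This only reads tasks rather than replacing the calendar view, but it is a handy way to check what the connector should eventually show.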

Imperfect but Acceptable

I think synchronizing Planner tasks to Outlook is an imperfect but acceptable solution. It’s imperfect because of the delay in synchronization and its one-way nature, but the Planner developers are constrained by the way iCalendar links bring information into Outlook calendars. At least this solves the problem of printing a task list, because OWA and Outlook desktop both have the necessary functionality, and it’s good to see tasks alongside the other commitments people have in their calendars.

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros”, the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle.

The post How Planner Synchronizes its Tasks to Outlook’s Calendar appeared first on Petri.

Microsoft Goes Back to the Drawing Board with the Surface Hub 2

The content below is taken from the original ( Microsoft Goes Back to the Drawing Board with the Surface Hub 2), to continue reading please visit the site. Remember to respect the Author & Copyright.

A few weeks back, Microsoft announced that Surface Hub stock was running low and that the device would soon be nearly impossible to buy. Stock is running short because the company is working on a second-generation device.

This week, Microsoft has begun talking more openly about the new hardware, and we finally have our first look at the device. It should be noted that we don’t have a full list of specs at this time; the company is still finalizing the hardware, which means it isn’t ready to share the weight or internal specs of the display. Even so, this early look at the hardware will certainly raise a few eyebrows.

Microsoft has rethought how displays function in the collaboration space, and we are seeing quite a few changes in this new device. For starters, the hardware comes in only one size, 50.5 inches, and the aspect ratio now matches the rest of the Surface lineup at 3:2.

And Microsoft loves its hinges; this device is no different, as it can now rotate 90 degrees.

The resolution is higher than 4K, which means there shouldn’t be any issue with clarity when standing up close to the hardware. Pen input is still available, and I’m told the input accuracy should match that of other Surface products.

Microsoft is also introducing biometric login support with a fingerprint reader, designed as a way to enable two-factor authentication.

With this new hardware, we are also seeing Microsoft push the ‘Microsoft 365’ branding. The company says that using Microsoft 365 will provide the best experience and get the most out of the new hardware, but you don’t need to be paying for every Microsoft service to find value in this product.

While the device only comes in one size, multiple Hubs can be paired together to create a larger canvas for collaboration; Microsoft calls this ‘tiling’. The company believes that this approach will serve its customers better than offering two fixed iterations.

But the major change here is the ability to turn the device 90 degrees, which makes holding video calls more natural and, when the device is on a cart, creates an easel-like experience. The company hopes that by increasing the flexibility of the hardware, it will find its way into more offices around the globe.

As for the software, Microsoft tells me that it is running an iteration of Windows 10 and that UWP apps will run as well, but we will need to wait for the full spec release to better understand how this machine operates behind the glass.

Surface Hub 2 will start heading out to customers later this year, but only in a small pilot group. If you want to buy one of these new devices, look for availability in 2019; with pricing not yet finalized, getting your purchase order approved could be a bit more difficult.

On paper, this device looks excellent. Microsoft has clearly listened to customer feedback and is working on making the device more affordable as well. I look forward to getting some hands-on time with this hardware in the near future; the promise on paper is encouraging, and I can only hope that the device in practice lives up to the high bar the Surface brand has established.

The post Microsoft Goes Back to the Drawing Board with the Surface Hub 2 appeared first on Petri.

HP’s new Envy PC is the first all-in-one with Alexa built-in

The content below is taken from the original ( HP’s new Envy PC is the first all-in-one with Alexa built-in), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you treat your all-in-one PC as the central hub of your home, shouldn't it double as a voice control hub, too? HP thinks so. It's launching a new version of its 34-inch curved Envy all-in-one with Amazon's Alexa built-in — the first AIO with Alex…

FPGA-driven Raspberry Pi add-on enables overlays on encrypted video

The content below is taken from the original ( FPGA-driven Raspberry Pi add-on enables overlays on encrypted video), to continue reading please visit the site. Remember to respect the Author & Copyright.

Alphamax is crowdfunding an open source “NeTV2” video development add-on board for the Raspberry Pi with an Artix-7 FPGA, 4x PCIe lanes, 2x HDMI inputs, 2x HDMI outputs, and Python programming for overlaying content on encrypted video signals. Back in 2016, hardware hacker Bunnie Huang joined with the Electronic Frontier Foundation (EFF) to sue the […]