Microsoft finally starts doing something with LinkedIn by integrating it into Office 365

The content below is taken from the original (Microsoft finally starts doing something with LinkedIn by integrating it into Office 365), to continue reading please visit the site. Remember to respect the Author & Copyright.

Last year, Microsoft bought LinkedIn for $26.2 billion, but even though the acquisition has long closed, Microsoft hasn’t yet done much with all of the data it gets from the social network. At its Ignite conference in Orlando, Florida, the company announced some first steps in integrating LinkedIn’s social graph with its Office products.

Now don’t get too excited yet. What we’re talking about here is the integration of LinkedIn data with Office 365 profile cards. So assuming you don’t know much about your professional contacts and colleagues yet, you can now see more information about them right in Office 365 without having to go to their LinkedIn profiles (and potentially showing up as that one person who looked at their LinkedIn profile that week, which will surely trigger yet another LinkedIn email for them).

As Microsoft spokesperson Frank X. Shaw noted during a press briefing ahead of the event, the idea behind integrating the Microsoft Graph and the LinkedIn Graph is about creating a more modern workplace. “This will result in experiences like having LinkedIn content integrated with the Office 365 profile card,” he said. “So for example, before you go into an interview, information about that person from LinkedIn will show up in their contact card inside your Outlook Calendar in Office 365.”

All of this sounds a bit like Microsoft spent $26.2 billion to save you a click. But there is more. Soon, Dynamics 365 for Sales, Microsoft’s CRM solution, will get the same profile integration and its users will be able to send LinkedIn InMails and messages directly from within Dynamics. Do I hear you saying that this is evidence that sometimes dreams really do come true? Well, yes, indeed. They do.

Built-in security and operations management for Azure and hybrid environments

The content below is taken from the original (Built-in security and operations management for Azure and hybrid environments), to continue reading please visit the site. Remember to respect the Author & Copyright.

The growth of cloud infrastructure usage has been tremendous in the last couple of years. In my conversations with customers, many are looking for technologies to help with cloud security and cloud management. More customers are asking for management that is rooted in the cloud and really designed for the new cloud paradigm. At Microsoft we are your trusted partner for enterprise today and in the future, and we are in a unique position: we build a cloud platform and we have a long history of delivering management and security services.

With Azure we are blurring the lines between the traditional categories of platform and management as we deliver an open cloud platform that has built-in security and operations management – and can still meet the needs of our largest enterprise customers. Our customers benefit from this approach with a simpler experience across the full security and operations management lifecycle. We also recognize the importance of building tools that manage and secure not just Azure but also your traditional workloads, and that’s why we are focused on delivering hybrid capabilities.

Today I’m excited to announce several new services and features across these areas:

  • Azure Cost Management by Cloudyn available for free. Azure Cost Management helps organizations manage and optimize cloud spend across Azure, AWS and Google Cloud Platform. Cost management has been one of the most popular requests from our customers and I’m excited to announce that it is now available for free to Azure customers and partners to manage Azure spend. Learn more about Azure Cost Management by Cloudyn.
  • Azure Security Center protection for hybrid workloads. Azure Security Center helps you protect workloads running in Azure from cyber threats and can now also be used to secure workloads running on-premises and in other clouds. Today we are releasing new capabilities to better detect and defend against advanced threats, automate and orchestrate security workflows, and streamline investigation of threats. Learn more about Azure Security Center updates.
  • Integration of management into the virtual machine experience in the Azure portal. This new experience simplifies the process of adding backup, site recovery, monitoring, update management and more to your existing virtual machines.
  • Update management, configuration management & change tracking included at no cost for Azure customers to help you manage missing updates and track configuration changes efficiently across Windows and Linux virtual machines in Azure, and across your hybrid environments. Python support has been added to the Automation service in addition to the existing PowerShell & Graphical authoring capabilities to make it easier to automate both Windows and Linux environments. Learn more about Azure Automation and configuration updates.
  • End to end monitoring from the application to the infrastructure. The new Azure monitor user experience centralizes the monitoring services together so that you can get visibility across infrastructure and applications. In addition, we have significantly optimized your experience for Azure Log Analytics, as well as with metrics exploration, application performance monitoring, and failure diagnostics in Application Insights. We have also integrated Azure alerts with IT Service Management tools and released new solutions for Container Monitoring. Learn more about Azure Monitoring updates.
  • Azure Policy to help you deliver governance and compliance. The new Azure Policy service, now in limited preview, helps you establish standards, guardrails, and continually monitor compliance to deliver enterprise-wide governance. Azure policies can be applied over your Azure resources, from a single subscription to a management group with control across your entire organization. Sign up for the Azure Policy limited preview.
  • PowerShell support in Azure Cloud Shell complements Bash as another authenticated, browser-based shell tool to streamline your Azure management experience. Learn more about PowerShell in Azure Cloud Shell.

The importance of securing and managing your cloud workloads

In this world where customers expect to do business with you 24×7 and threats are only getting more sophisticated, we recommend that, at a minimum, you turn on security, backup and monitoring for your virtual machines. The Azure platform is designed to reduce your security and operations management burden by building, maintaining, and securing the underlying datacenters, but as a customer you should partner with us to ensure that your Azure resources are secure and well-managed, with the right security and compliance controls in place.

I hope you will join me at Microsoft Ignite, either in person or virtually, to see these new features and updates in action. I’m excited to hear from you on how you are securing and managing your resources in the cloud and encourage you to continue sending us feedback. You can create a free account to get started exploring Azure security and operations management today.

Microsoft and Facebook’s massive undersea data cable is complete

The content below is taken from the original (Microsoft and Facebook’s massive undersea data cable is complete), to continue reading please visit the site. Remember to respect the Author & Copyright.

Last year, we reported that Microsoft and Facebook were teaming up to build a massive undersea cable that would cross the Atlantic, connecting Virginia Beach to the northern city of Bilbao in Spain. Last week, Microsoft announced that the cable, called Marea, is complete.

Marea, which means "tide" in Spanish, lies over 17,000 feet below the Atlantic Ocean’s surface and is around 4,000 miles long. It weighs 10.25 million pounds. The data rates (which, let’s face it, are what we’re all really interested in) are equally staggering: Marea can transmit at a rate of 160 terabits per second. And it was finished in less than two years.

What’s really interesting about Marea, though, is that it has an open design. This means that Microsoft and Facebook are trying to make the cable as future-proof as possible. It can evolve as technology changes and demand for more data and higher speeds increases. Its flexibility means that upgrading the cable and its equipment to be compatible with newer technology will be easier.

If you’re interested in learning more about Marea, you can watch the recorded livestream of a celebration of the cable that happened last Friday. It’s nice to see tech companies working together, and on big projects that will help them meet future demands for Internet usage.

Source: Microsoft

Microsoft’s new Data Box lets you mail up to 100 TB to its Azure cloud

The content below is taken from the original (Microsoft’s new Data Box lets you mail up to 100 TB to its Azure cloud), to continue reading please visit the site. Remember to respect the Author & Copyright.

Moving lots of data to the cloud can take a long time and cost quite a bit, even over fast connections. Like its competitors — and especially AWS — Microsoft has long allowed its Azure users to import data to its cloud by shipping hard drives to its data centers. It’s now going a step further with the preview launch of the 100 TB Azure Data Box, its answer to AWS’s 50 and 80 TB Snowball boxes.

The 45-pound Azure Data Box is meant to allow enterprises to quickly and securely move their data into the cloud. Microsoft describes it as a “ruggedized, tamper-resistant and human-manageable appliance that will help organizations overcome the data transfer barriers that can block productivity and slow innovation.”

Users order the box from the Azure portal, load it up with their data and then ship it to Microsoft for ingestion into the Azure cloud. Like the AWS Snowball, the Data Box also features an e-paper display that functions as the shipping label.

The box plugs right into the data center network and supports standard protocols like SMB and CIFS. Data on the box itself is secured with 256-bit AES encryption.

Given that we’re talking Microsoft here, it doesn’t come as a surprise that the company is also working with partners like Commvault, Veritas, NetApp, Avid, Rubrik and CloudLanes, who will all integrate their services with the Data Box.

Microsoft also tested the box with the help of Oceaneering International, a company that, among other things, offers ROV-based subsea inspection services for the oil industry. Those robots can easily create multiple terabytes of data per day but aren’t always connected to the internet. In Oceaneering International’s use case, the company stores its data on the Data Box, uploads the data to Azure and then prepares it for its own customers.


Microsoft’s all-in-one 365 subscription is available for schools

The content below is taken from the original (Microsoft’s all-in-one 365 subscription is available for schools), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft just launched its latest bid to bring its services into every aspect of schools and the workforce. To begin with, it’s offering its all-encompassing Microsoft 365 subscription to education. Schools can pay a single per-person rate to get Windows 10, Office 365, the Enterprise Mobility and Security Suite and even Minecraft: Education Edition. Office 365 for Education is already free, but Microsoft is betting that all the other perks will be worth it for faculty that wants a one-stop shop for the software they need. It’ll be available on October 1st — too late for the return to school, so don’t be surprised if you don’t see this used in earnest until the winter semester or next fall.

The company is also bending over backwards to court "firstline workers," or the people you actually meet when you interact with a company (such as sales reps and clerks). There’s a version of Microsoft 365 for them (no Minecraft, sadly), but they’re also getting a slew of Windows 10 S laptops that take advantage of the platform’s Windows Store-only app access to boost security. More than anything, the focus here is on cost: the Acer Aspire 1 and Swift 1, HP Stream 14 Pro and Lenovo V330 range in price from $275 to $349, and they’re more notable for their no-frills "ultraslim" designs than any clever tricks. You aren’t going to crave one the way you might a Surface Laptop, but that’s not the point — this is about proving that Windows 10 S isn’t just for schools nervous about malware.

There’s also an important update about software that’s going away. Microsoft has announced that it’s focusing on Teams as its main communications tool for business, and it’ll gradually phase out Skype for Business as a consequence. This isn’t a huge shock, especially since Teams and Skype share enough architecture that they can coexist, but it reflects the effect apps like Slack have had on the working world. If Microsoft didn’t offer a direct alternative, there was a risk that it could lose an important part of the market to a startup. Teams won’t necessarily upset Slack’s early lead, but it’s now getting the kind of attention it might need to thrive at your workplace.

Source: Microsoft Education Blog, Windows Business Blog, TechCrunch

Announcing IPv6 global load balancing GA

The content below is taken from the original (Announcing IPv6 global load balancing GA), to continue reading please visit the site. Remember to respect the Author & Copyright.

By Prajakta Joshi, Product Manager, Cloud Networking

Google Cloud users deploy Cloud Load Balancing to instantiate applications across the globe, architect for the highest levels of availability, and deliver applications with low latency. Today, we’re excited to announce that IPv6 global load balancing is now generally available (GA).

Until today, global load balancing was available only for IPv4 clients. With this launch, your IPv6 clients can connect to an IPv6 load balancing VIP (Virtual IP) and get load balanced to IPv4 application instances using HTTP(S) Load Balancing, SSL proxy, and TCP proxy. You now get the same management simplicity of using a single anycast IPv6 VIP for application instances in multiple regions.

Home Depot serves 75% of homedepot.com out of Google Cloud Platform (GCP) and uses global load balancing to achieve a global footprint and resiliency for its service with low management overhead.

"On the front-end, we use the Layer 7 load balancer with a single global IP that intelligently routes customer requests to the closest location. Global load balancing will allow us to easily add another region in the future without any DNS record changes, or for that matter, doing anything besides adding VMs in the right location."  

Ravi Yeddula, Senior Director Platform Architecture and Application Development, The Home Depot

IPv6 support unlocks new capabilities 

With IPv6 global load balancing, you can build more scalable and resilient applications on GCP, with the following benefits:

  • Single Anycast IPv6 VIP for multi-region deployment: Now, you only need one Load Balancer IPv6 VIP for application instances running across multiple regions. This means that your DNS server has a single AAAA record and that you don’t need to load-balance among multiple IPv6 VIPs. Caching of AAAA records by clients is not an issue since there’s only one IPv6 VIP to cache. User requests to IPv6 VIP are automatically load balanced to the closest healthy instance with available capacity.
  • Support for a variety of traffic types: You can load balance HTTP, HTTPS, HTTP/2, TCP and TLS (non-HTTP) IPv6 client traffic. 
  • Cross-region overflow with a single IPv6 Load Balancer VIP: If instances in one region are out of resources, the IPv6 global load balancer automatically directs requests from users closest to this region to another region with available resources. Once the closest region has available resources, global load balancing reverts back to serving user requests via instances in this region. 
  • Cross-region failover with single IPv6 Load Balancer VIP: If the region with instances closest to the user experiences a failure, IPv6 global load balancing automatically directs traffic to another region with healthy instances. 
  • Dual-stack applications: To serve both IPv6 and IPv4 clients, create two load balancer IPs: one with an IPv6 VIP and the other with an IPv4 VIP, and associate both VIPs with the same IPv4 application instances. IPv4 clients connect to the IPv4 Load Balancer VIP while IPv6 clients connect to the IPv6 Load Balancer VIP. These clients are then automatically load balanced to the closest healthy instance with available capacity. We provide IPv6 VIPs (forwarding rules) without charge, so you pay for only the IPv4 ones.

A global, scalable, resilient foundation 

Global load balancing for both IPv6 and IPv4 clients benefits from its scalable, software-defined architecture that reduces latency for end users and ensures a great user experience.

  • Software-defined, globally distributed load balancing: Global load balancing is delivered via software-defined, globally distributed systems. This means that you won’t hit performance bottlenecks with the load balancer and it can handle 1,000,000+ queries per second seamlessly. 
  • Reduced latency through edge-based architecture: Global load balancing is delivered at the edge of Google’s global network from 80+ points of presence (POPs) across the globe. User connections terminate at the POP closest to them and travel over Google’s global network to the load-balanced instance in Google Cloud. 
  • Seamless autoscaling: Global load balancing scales application instances up or down automatically based on traffic, with no pre-warming of instances required.

Take IPv6 global load balancing for a spin 

Earlier this year, we gave a sneak preview of IPv6 global load balancing at Google Cloud Next ‘17. You can test drive this feature using the same setup.

In this setup:

  • v6.gcpnetworking.com is served by IPv4 application instances in multiple Google Cloud regions across the globe. 
  • A single anycast IPv6 Load Balancer IP, 2600:1901:0:ab8::, fronts the IPv4 application instances across regions.
  • When you connect using an IPv6 address to this website, IPv6 global load balancing directs you to a healthy Google Cloud instance that’s closest to you and has available capacity. 
  • The website is programmed to display your IPv6 address, the Load Balancer IPv6 VIP and information about the instance serving your request. 
  • v6.gcpnetworking.com will only work with IPv6 clients. You can test drive gcpnetworking.com instead if you want to test with both IPv4 and IPv6 clients.

For example, when I connect to v6.gcpnetworking.com from California, my request connects to an IPv6 global load balancer with IP address 2600:1901:0:ab8:: and is served out of an instance in us-west1-c, the closest region to California in the set-up.

Give it a try, and you’ll observe that while your request connects to the same IPv6 VIP address 2600:1901:0:ab8::, it’s served by an instance closest to you that has available capacity.
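
As a rough illustration of what that test drive looks like from code, here is a minimal Python sketch using only the standard library: it resolves the AAAA record for v6.gcpnetworking.com and fetches the page over IPv6. It assumes the demo setup described above is still live and that your machine has IPv6 connectivity.

```python
import socket
import urllib.request

HOST = "v6.gcpnetworking.com"

# Resolve AAAA records only; every client should see the same single anycast VIP.
addrs = {info[4][0] for info in socket.getaddrinfo(HOST, 443, socket.AF_INET6)}
print("IPv6 VIP(s):", addrs)  # expected: {'2600:1901:0:ab8::'}

# Fetch the page; the load balancer routes the request to the closest
# healthy backend instance with available capacity.
with urllib.request.urlopen(f"https://{HOST}/") as resp:
    print("HTTP status:", resp.status)
```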

You can learn more by reading about IPv6 global load balancing, and taking it for a spin. We look forward to your feedback!

Microsoft Azure 1st Hyperscale Cloud Computing Platform to Enable UK Law Enforcement Community

The content below is taken from the original (Microsoft Azure 1st Hyperscale Cloud Computing Platform to Enable UK Law Enforcement Community), to continue reading please visit the site. Remember to respect the Author & Copyright.

We’re excited and proud to announce that Microsoft Azure is the first hyper-scale cloud computing platform to be able to service UK law enforcement IT customers. This announcement comes in the wake of the United Kingdom’s National Police Information Risk Management Team (NPIRMT) completing a comprehensive physical security review of a Microsoft UK Data Centre. This review is a necessary step to provide assurance to UK law enforcement agencies that their information management systems would be hosted in Police Approved Secure Facilities (PASF).  

As stated by the College of Policing’s Authorized Professional Practice (APP), “Policing is an information-led activity, and information assurance is fundamental to how the police service manages many of the challenges faced in policing today.” Azure is proud to be recognized in this way as we contribute to the information assurance tapestry needed to enable the UK law enforcement community.

The actual NPIRMT PASF assessment is available to policing customers from the Home Office for individual Police Services to review as part of their own approach to risk assessment in utilizing cloud services.

* It is important to note that the NPIRMT does not offer any warranty of the physical security of the Microsoft data center.


Customizable PCB Business Card

The content below is taken from the original (Customizable PCB Business Card), to continue reading please visit the site. Remember to respect the Author & Copyright.


[Corey Harding] designed his business card as a USB-connectable demonstration of his skill. If a potential manager plugs the card into a USB port, opens a text editor, then touches the copper pad on the PCB, [Corey]’s contact info pops up in the text box.

In addition to working as a business card, the PCB also works as an ATtiny85 development board, with a prototyping area for adding sensors and other components, and with additional capabilities broken out: you can add an LED, there’s room for a 1K resistor and a reset button, or you can break out the USB’s 5V for other uses. There’s an AVR ISP breakout for reflashing the chip.

Coolly, [Corey] intended for the card to be an Open Source resource for other people to make their own cards, and he’s providing the Fritzing files for the PCB. Fritzing is a great program for beginning and experienced hardware hackers to lay out quick and dirty circuits, make wiring diagrams, and even export PCB designs for fabrication. You can download [Corey]’s files from his GitHub repository.

For another business card project check out this full color business card we published last month.


Twitter is testing a Twitter Lite Android app, first in the Philippines

The content below is taken from the original (Twitter is testing a Twitter Lite Android app, first in the Philippines), to continue reading please visit the site. Remember to respect the Author & Copyright.

Twitter today has nearly four times as many monthly active users outside the U.S. as it does in its home market — 260 million versus 68 million — and this week it quietly launched a new app in an effort to boost those numbers further. The social network is testing an Android app for Twitter Lite, a native app version of the mobile website Twitter launched earlier this year that uses less mobile data. The lighter data load means the app is especially useful in emerging markets, where data networks are often slower and more expensive for consumers to use.

We were alerted to the new Twitter Lite app by analytics firm SensorTower, and we’ve seen a few mentions of it out in the wild:

Twitter has confirmed to us that the app is being run currently in test mode in the Philippines (which is where the Twitter user above is located). There, it appears as a separate app in the Google Play Store for devices running Android 5.0 and up; has language support both for English and Filipino; and is usable on 2G and 3G networks.

“The test of the Twitter Lite app in the Google Play Store in the Philippines is another opportunity to increase the availability of Twitter in this market,” said a Twitter spokesperson. “The Philippines market has slow mobile networks and expensive data plans, while mobile devices with limited storage are still very popular there. Twitter Lite helps to overcome these barriers to usage for Twitter in the Philippines.”

He further described the app as “an experiment” and that Twitter was still evaluating whether to launch it in further markets.

The app itself appears to have many of the same basic functions of the main Twitter apps — “breaking news, sports scores, and entertainment updates. Interact with brands and your government, easily market your business, quickly provide or receive customer service” and options to view your Timeline, Notifications, the Explore tab, Messages and to customise your profile.



But alongside these are a few tweaks that will make it less of a data hog for users: for example, you can switch to a media-free mode to be able to select specific images and videos for downloading.

Indeed, it’s details like this that point to why Twitter expanded the Lite version to apps in the first place: not only do people like to use apps, but the platform gives Twitter a wider set of tools to tinker with the user experience further.

Giving users the option of which media they would like to actually see is a pretty crucial feature for emerging markets.

Twitter has over recent years reoriented itself as a media company, for example cutting deals to livestream events in hopes of capturing more audience and advertising alongside that.

But that full version of the service would be potentially unusable (and probably frustrating) as a result for many people in emerging markets, so Twitter has taken the decision to show these users less in hopes of getting them to use the service more — and to better monetize them on more localized terms.

Ironically, it may not only be emerging market users who flock to the app, as evidenced by who uses the Twitter Lite web app:

Twitter today tells me that the Lite web app offers a significant weight reduction on its standard apps: it uses up to 70 percent less data, is smaller than 1MB in size, and launches up to 30 percent faster. The Android Lite app, meanwhile, when installed uses “under 3MB.”

Twitter has not released any numbers on how much traffic comes from its Twitter Lite web site, built as a Progressive Web App. However, it is not too surprising to see Twitter expanding Lite after CEO Jack Dorsey highlighted its importance for the company in its emerging market and international strategies.

“We’ve been working over the past few months on some early foundational work, and Twitter Lite represents one of these,” he said in the company’s quarterly earnings call in July. “One of our goals is to make sure Twitter is accessible to anyone in the world. And Twitter Lite exactly hits on this particular goal. Especially in places like India, we found that our app was just way too slow to access. So we have areas in the world where network infrastructure is more costly, and we could be a lot better in terms of serving those markets and those countries. So it’s way too soon to access the — to assess the usage trends, but our initial results look really positive.”

Considering how many people use Twitter outside the U.S., and considering Twitter’s numerous and early efforts and wins in emerging markets, the company was somewhat late to the game when it came to launching an official Lite web app, which it did only earlier this year.

But for social networks that are hitting a wall in their domestic and mature market growth, having an app that’s designed for users on lower-end phones and networks is essential.

You can see the pattern of social sites that have followed this route: LinkedIn launched LinkedIn Lite as an Android app earlier this year, and Facebook Lite was at one point the company’s fastest growing app.

Twitter of course, has had a growth problem for years now, so it’s possibly even more urgent that the company rolls the dice sooner rather than later on this one.

 

Featured Image: DragojaGagiTubic / iStock Editorial / Getty Images Plus

OpenStack Developer Mailing List Digest September 16-22

The content below is taken from the original (OpenStack Developer Mailing List Digest September 16-22), to continue reading please visit the site. Remember to respect the Author & Copyright.

Summaries

PTG

Survey/polls

  • Should we have Upstream Institutes at the PTG? yay or nay

Summaries

Gerrit Upgrade Update From Infra

  • Gerrit emails are slow because it sends them one at a time.
  • Web UI File Editor
    • Behaving oddly, possibly because of API time-outs. Gertty also has reported problems.
  • Message

Install Guide VS Tutorial

  • Since the doc-migration, people have been having questions regarding the usage of “Install Tutorial” and “Install Guide” in the OpenStack manuals repository and project specific repos.
    • The documentation team agrees this should be consistent.
    • Tutorial’s literal definition is “a paper, book, film, or computer program that provides practical information about a specific subject.”
  • From PTG discussions, a distinction made was that an install guide provides one of many possible ways to install the components.
  • Consistency is more important than bike-shedding over the name.
    • Industry-wise, what’s the trend?
  • Thread

Garbage Patches for simple typo fixes

  • Previous thread from 2016 on this
  • Various contributors are submitting many patches that are just typo or style changes.
    • It has been expressed that this can cause CI resource starvation.
  • The TC created the Top 5 help wanted list to help contributors know where the community most needs help.
  • This is a social issue, not a technical issue. Arguing about what is useful and what isn’t is probably not worth the effort here.
  • Communication and education are probably the best solution here. For repeat offenders, an off-list email could be fine to make sure the communication is clear. Communicating this in the new contributor portal and Upstream Institute would be helpful.
  • Thread

Kaby Lake rugged box-PCs include Linux-ready beast with 9x GbE ports

The content below is taken from the original (Kaby Lake rugged box-PCs include Linux-ready beast with 9x GbE ports), to continue reading please visit the site. Remember to respect the Author & Copyright.

Aaeon unveiled two rugged embedded PCs that run Intel’s 6th or 7th Gen CPUs. The Linux-friendly “BOXER-6640M” stands out with 9x GbE and 8x USB 3.0 ports.

Aaeon’s BOXER-6640M and Boxer-6640 build on the same fanless design, ruggedization features, and support for 6th Gen Skylake and 7th Gen Kaby Lake processors as its recent Boxer-6639. Here, we’ll focus primarily on the Linux-supported, networking-oriented BOXER-6640M, which features nine Gigabit Ethernet ports. Further below, we briefly cover the similar, but dual-GbE, Boxer-6640, which is listed only with Windows support. Both computers measure 264.2 x 186.2 x 96.4mm.



Boxer-6640M (left) and Boxer-6640

The Boxer-6640M can run at -20 to 55°C temperatures, and offers 2 Grms/5-500Hz vibration resistance with an SSD. It also features a wide-range, 9-36V power input with a 3-pin terminal block.

CPU support is very close to that of the Boxer-6839. The high-end Kaby Lake option is the Intel Core i7-7700T, which integrates 4x cores and 8x threads clocked to 2.9GHz or to 3.8GHz turbo frequency. The 35W TDP can drop to 25W at 1.9GHz. Also available are the quad-core, quad-threaded Core i5-7500T clocked at 2.8/3.3GHz, and the dual-core, quad-threaded i3-7101TE clocked at 3.4GHz, both with 35W TDPs. You can also equip the system with a Kaby Lake Pentium G4560T chip.

Skylake options include Core i7/i5/i3 and Pentium “TE” models. The system supports Ubuntu 16.04, Fedora 25, and CentOS 7.3 on both Kaby Lake and Skylake chips. On the Microsoft side, the Kaby Lake parts are limited to Windows 10 IoT, while the Skylake models also support older Windows versions.



Boxer-6640M detail views

Like the Boxer-6839, the Boxer-6640M can load up to 32GB of DDR4 RAM, and supports triple displays via a VGA port and dual HDMI ports. It similarly provides audio I/O and a SATA bay, but instead of cFast, you get mSATA on one of the two full-size mini-PCIe slots. The other slot is accompanied by a SIM slot and an antenna mount. Optional wireless modules with antennas include WiFi/Bluetooth, and 4G options for Verizon, AT&T, and European networks.

Other I/O diverges more significantly. The standout features are the 9x GbE ports and 8x USB 3.0 ports. You also get 2x USB 2.0, an isolated RS-232/422/485 port, remote power pins, LEDs, a power button, and optional power adapters.

 
Boxer-6640

The Windows-focused Boxer-6640 has the same size, CPU choices, and up to 32GB of DDR4 as the Boxer-6640M, as well as very similar ruggedization features. The main difference is that it’s limited to 2x GbE and 4x USB 3.0 ports, plus 3x USB 2.0 ports.



Boxer-6640 detail views

On the plus side, the display support has been expanded with 2x DisplayPorts, and you get an 8-bit DIO port and three RS232 ports in addition to the RS-232/422/485 port. The dual mini-PCIe slots are identical, but there’s no mention of optional wireless modules.

 
Further information

The Linux-ready, 9x GbE Boxer-6640M and Windows-focused, 2x GbE Boxer-6640 appear to be available now with pricing undisclosed. More information may be found on Aaeon’s Boxer-6640M and Boxer-6640 product pages.
 

RISC OS software to download from !Store

The content below is taken from the original (RISC OS software to download from !Store), to continue reading please visit the site. Remember to respect the Author & Copyright.

In previous articles, we looked at package managers and some of the software available on !PackMan. In this article we are going to highlight some of the software available in !Store and ask for your suggestions.

When you run !Store, it offers you a long list of files, including both free and commercial software (which you can buy via !Store). As with !PackMan, it gives you a front end that makes it easy to search, provides more details, and lets you select categories.

If you are looking to run old software on a new machine, our old friend Aemulor is available as a free download. There is also an interesting Atari ST emulator called Hatari if you want a real ‘GEM’ environment on your machine.

!Store is also the home for the latest version of !Impression (although I would criticise the broken link, which goes nowhere and looks bad; pages like PMS Music Scribe also include broken links).

Some very high quality software originally written by David Pilling is now freely available and you can find this on !Store.

!Store offers more than software, and you will also find fonts and copies of DraG’N’Drop (which we reviewed here).

What are you downloading from !Store?


UK lifeboat crew tests drones as search and rescue helpers

The content below is taken from the original (UK lifeboat crew tests drones as search and rescue helpers), to continue reading please visit the site. Remember to respect the Author & Copyright.

Drones are becoming an important part of the emergency services. Police are using them to search for missing people, while fire departments test them as a tool to survey dangerous sites. Until now, however, we haven’t seen or heard about them being used by the coastguard. That all changes today, as a lifeboat service in Norfolk, England, has started using them in open water. As the BBC reports, they’re equipped with cameras that can live-stream footage to monitors inside the boat. They could prove useful in choppy conditions when the crew can’t see above the waves.

"Normally you’re at sea level trying to look out from the lifeboat," Peter King, a drone expert helping the lifeboat team told the BBC. "The swell is above the boat so you have to wait until you’re on the crest of a wave, and they might be in a trough. They might be 20 meters away and you still can’t find them." In treacherous weather, some lifeboat crews can use helicopters for search and rescue. Drones, of course, are a cheaper alternative, though their size makes them susceptible to strong winds.

Lifeboat service at Caister trial drones

The drones are currently in a testing phase. According to the BBC, the lifeboat crew is in discussion with the Civil Aviation Authority (CAA) about using them full-time in the ocean. At the moment, UK laws require that every pilot keep direct, unaided visual contact with their drone. That roughly equates to 400 feet (122 metres) vertically and 500 metres horizontally. In the water, however, there are times when the team might want to fly the drones further out. "In the past," King told the BBC, "there have been instances where we have been unsuccessful when searching for someone in need of help. Perhaps if we had been equipped with the drone technology, these searches would have had a positive outcome."

Via: BBC News, Huffington Post UK

Introducing faster GPUs for Google Compute Engine

The content below is taken from the original (Introducing faster GPUs for Google Compute Engine), to continue reading please visit the site. Remember to respect the Author & Copyright.

By Chris Kleban and Ari Liberman, Product Managers for Google Compute Engine

Today, we’re happy to make some massively parallel announcements for Cloud GPUs. First, Google Cloud Platform (GCP) gets another performance boost with the public launch of NVIDIA P100 GPUs in beta. Second, NVIDIA K80 GPUs are now generally available on Google Compute Engine. Third, we’re happy to announce the introduction of sustained use discounts on both the K80 and P100 GPUs.

Cloud GPUs can accelerate your workloads including machine learning training and inference, geophysical data processing, simulation, seismic analysis, molecular modeling, genomics and many more high performance compute use cases.

The NVIDIA Tesla P100 is the state of the art of GPU technology. Based on the Pascal GPU architecture, the P100 lets you increase throughput with fewer instances while saving money. P100 GPUs can accelerate your workloads by up to 10x compared to K80.¹

Compared to traditional solutions, Cloud GPUs provide an unparalleled combination of flexibility, performance and cost-savings:

  • Flexibility: Google’s custom VM shapes and incremental Cloud GPUs provide the ultimate amount of flexibility. Customize the CPU, memory, disk and GPU configuration to best match your needs.  
  • Fast performance: Cloud GPUs are offered in passthrough mode to provide bare-metal performance. Attach up to 4 P100 or 8 K80 GPUs per VM (we offer up to 4 K80 boards, which come with 2 GPUs per board). For those looking for higher disk performance, optionally attach up to 3TB of Local SSD to any GPU VM. 
  • Low cost: With Cloud GPUs you get the same per-minute billing and Sustained Use Discounts that you do for the rest of GCP’s resources. Pay only for what you need! 
  • Cloud integration: Cloud GPUs are available at all levels of the stack. For infrastructure, Compute Engine and Google Container Engine allow you to run your GPU workloads with VMs or containers. For machine learning, Cloud Machine Learning can be optionally configured to utilize GPUs in order to reduce the time it takes to train your models at scale with TensorFlow (see the short check below). 
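
As a rough illustration (not from the announcement), here is a minimal TensorFlow check you might run on a GPU-attached VM to confirm the accelerator is visible before starting a training job. It assumes a modern TensorFlow 2.x build with GPU support and an installed NVIDIA driver.

```python
# Minimal sketch: confirm attached GPUs are visible to TensorFlow on a GPU VM.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(f"{len(gpus)} GPU(s) visible to TensorFlow:")
for gpu in gpus:
    print(" ", gpu.name)

# A trivial op placed on the first GPU, just to verify end-to-end execution.
if gpus:
    with tf.device("/GPU:0"):
        x = tf.random.normal([1024, 1024])
        print("matmul result shape:", tf.matmul(x, x).shape)
```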

With today’s announcement, you can now deploy both the NVIDIA Tesla P100 and K80 GPUs in four regions worldwide. All of our GPUs can now take advantage of sustained use discounts, which automatically lower the price (up to 30%), of your virtual machines when you use them to run sustained workloads. No lock-in or upfront minimum fee commitments are needed to take advantage of these discounts.

Cloud GPUs Regions Availability – Number of Zones


Speed up machine learning workloads 

Since launching GPUs, we’ve seen customers benefit from the extra computation they provide to accelerate workloads ranging from genomics and computational finance to training and inference on machine learning models. One of our customers, Shazam, was an early adopter of GPUs on GCP to power their music recognition service.

“For certain tasks, [NVIDIA] GPUs are a cost-effective and high-performance alternative to traditional CPUs. They work great with Shazam’s core music recognition workload, in which we match snippets of user-recorded audio fingerprints against our catalog of over 40 million songs. We do that by taking the audio signatures of each and every song, compiling them into a custom database format and loading them into GPU memory. Whenever a user Shazams a song, our algorithm uses GPUs to search that database until it finds a match. This happens successfully over 20 million times per day.”   

 Ben Belchak, Head of Site Reliability Engineering, Shazam

With today’s Cloud GPU announcements, GCP takes another step toward being the optimal place for any hardware-accelerated workload. With the addition of NVIDIA P100 GPUs, our primary focus is to help you bring new use cases to life. To learn more about how your organization can benefit from Cloud GPUs and Compute Engine, visit the GPU site and get started today!



¹ The 10x performance boost compares 1 P100 GPU versus 1 K80 GPU (½ of a K80 board) for machine learning inference workloads that benefit from the P100 FP16 precision. Performance will vary by workload. Download this datasheet for more information.

Microsoft Pix uses AI to make whiteboard photos useable images

The content below is taken from the original (Microsoft Pix uses AI to make whiteboard photos useable images), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft’s Pix sets itself apart from other camera apps by using the power of artificial intelligence to correct your photos, learning new tricks over time. It can do things like add artistic flair to your images, turn photos shot in a row into “Live Images,” or just make sure the people in your photos look great. This week, the app got a new update that adds yet another AI trick to the pile: the ability to capture whiteboards and turn them into useful images.

So, for example, if you’re at an important meeting, you can use Pix to take a photo of a diagram on the whiteboard to remember it later. The Pix app will then sharpen the focus, ramp up the color and tone, crop out the background and realign the image appropriately so that the diagram is shown straight-on.

According to Microsoft, this will work not just on whiteboards, but also documents and business cards as well. It’s a trick that’s very similar to what Microsoft’s own Office Lens app can already do, but while Office Lens is focused on productivity, Pix is more about using AI to recognize whiteboards and documents automatically. Basically, you don’t need to tell Pix that you want the photo of the document to be cropped and realigned — it’ll automatically recognize what it is and will do so without you having to intervene.

Microsoft’s Pix Camera update is available right now on the App Store.

New MusicBrainz virtual machine released

The content below is taken from the original (New MusicBrainz virtual machine released), to continue reading please visit the site. Remember to respect the Author & Copyright.

I have recently released a new MusicBrainz virtual machine. This virtual machine includes all the important bits of MusicBrainz so you can run your own copy! I’d been hoping for feedback on whether people have encountered any problems with this VM, but I’ve not received any. Here’s hoping that no news is good news!

For information on how to download, install and access this new virtual machine, take a look at our MusicBrainz Server setup page. The new VM can be downloaded from here via direct download or a torrent download.

Most of the outstanding bugs should be fixed in this release — if not, please open a new ticket.


What’s New on Cloud Academy: Big Data, Security, and Containers

The content below is taken from the original (What’s New on Cloud Academy: Big Data, Security, and Containers), to continue reading please visit the site. Remember to respect the Author & Copyright.


Explore the newest Learning Paths, Courses, and Hands-on Labs on Cloud Academy in September.

Learning Paths and Courses

Certified Big Data Specialty on AWS

Solving problems and identifying opportunities starts with data. The ability to collect, store, retrieve, and analyze data meaningfully is essential for every business. This learning path is a great way to prepare for this specialty certification or to level up your team’s big data skills on AWS. The six domains of the exam—collection, storage, processing, analysis, visualization, and data security—are covered through a combination of Video courses, Hands-on Labs, and skill assessments.
AWS Access & Key Management Security

Resource access and data encryption are two of the most important topics in cloud security. AWS Identity and Access Management (IAM) manages identities and their permissions for accessing resources and it is one of the first security services that you will encounter in your cloud environment. AWS Key Management Service (KMS) allows you to easily manage your encryption keys, and is useful both for IT teams who need to keep data secure and auditors and compliance specialists who need to monitor governance. You’ll also learn about AWS security controls for Authentication, Authorization, and Accounting; best practices for security for containers; encrypting S3 and EBS data; and about Amazon’s original encryption key solution, AWS CloudHSM.
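
As a rough illustration of the kind of KMS operation this learning path covers, here is a minimal boto3 sketch that encrypts and decrypts a small secret. The key alias and the secret are placeholders, and it assumes AWS credentials with kms:Encrypt and kms:Decrypt permissions are already configured.

```python
# Minimal sketch (hypothetical key alias "alias/demo-key"): direct encrypt and
# decrypt of a small value with AWS KMS via boto3.
import boto3

kms = boto3.client("kms")

ciphertext = kms.encrypt(
    KeyId="alias/demo-key",
    Plaintext=b"database password",
)["CiphertextBlob"]

# KMS embeds the key reference in the ciphertext, so decrypt needs no KeyId here.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
print(plaintext.decode())  # -> database password
```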
Google Cloud Platform for System Administrators

System Administrators maintaining a Google Cloud Platform (GCP) infrastructure will be able to get up to speed on platform essentials in this learning path. GCP excels in providing elastic, fully managed services and offers lots of options for building out highly available and scalable web applications and mobile back-ends. In addition to the fundamental concepts, you will also learn about GCP compute, networking, authorization, storage, and other key services, and we’ll show you how to perform essential maintenance tasks.
Introduction to Azure Container Service

Developers, operations engineers, and DevOps teams will want to know more about how to manage containers at scale in enterprise and production scenarios. Azure Container Service (ACS) provides infrastructure services for Docker Swarm, Mesosphere’s DC/OS, and Kubernetes. You’ll learn how to create and manage VM hosts as clusters for container orchestration, how to deploy, scale, and orchestrate containers through ACS using the main orchestration systems, and more.
Introduction to Google Cloud Machine Learning Engine

If you’ve ever searched for an image on the web or used Google Translate to understand a phrase in another language, you’ve used Google’s machine learning. Google’s Cloud Machine Learning Engine gives users the power to train their own neural networks. With this course, we’ll give you the skills you need to train and deploy machine learning models with the Google ML engine (and you won’t need prior experience in machine learning or knowledge of TensorFlow to get started).

Hands-on Labs

Getting Started with Amazon Simple Notification Service

Amazon’s Simple Notification Service is often used to push messages directly to other AWS services such as AWS Lambda or Simple Queue Service. SNS uses a publish-subscribe model to deliver messages via HTTP/S, SMS, and email to multiple recipients all at once. Integrated with AWS CloudTrail, SNS actions are captured, logged, and delivered to an S3 bucket. This lab is a hands-on introduction to SNS. Working in a live environment, we’ll walk you through the process of creating an SNS topic, subscribing to it, publishing a message and everything in between. Then, we’ll use Amazon Athena to query data from AWS CloudTrail logs.
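
For a sense of what that publish-subscribe flow looks like in code, here is a minimal boto3 sketch (not taken from the lab; the topic name and email address are placeholders).

```python
# Minimal sketch: create an SNS topic, subscribe an email endpoint, publish once.
import boto3

sns = boto3.client("sns")

# Create a topic and subscribe an email endpoint (the recipient must confirm).
topic_arn = sns.create_topic(Name="demo-notifications")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")

# Publish once; SNS fans the message out to every confirmed subscriber.
sns.publish(
    TopicArn=topic_arn,
    Subject="Deployment finished",
    Message="The nightly build deployed successfully.",
)
```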
Getting started with Docker on Linux for AWS

Containers offer many of the benefits of virtual machines but in a much more efficient, less resource-intensive system. Containers allow you to package up an application in an isolated environment that can be executed across machines in a reproducible manner. In this hands-on lab, you will get up and running with Docker on Linux using an AWS virtual machine. You will work with images from the public Docker registry, run a handful of containers, and create your own image from which to create containers.
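
The lab itself uses the Docker CLI; as a rough equivalent, here is a minimal sketch using the Docker SDK for Python (docker-py) that pulls a public image, runs a one-off container and lists what is running. It assumes the Docker daemon is available and the docker package is installed.

```python
# Minimal sketch: pull an image from the public registry, run a container,
# and list containers on this host.
import docker

client = docker.from_env()

client.images.pull("alpine:latest")
output = client.containers.run("alpine:latest",
                               ["echo", "hello from a container"],
                               remove=True)
print(output.decode().strip())

for container in client.containers.list():
    print(container.short_id, container.image.tags)
```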
Query encrypted Amazon S3 Data with Amazon Athena

In this lab, you will use Amazon’s Athena, Structured Query Language (SQL), and Simple Storage Service (S3) to create an end-to-end data security model where data, communications, and query results are encrypted. Amazon Athena is an interactive query service that allows you to issue standard Structured Query Language (SQL) commands to analyze data on S3. Working directly in the AWS platform, you will learn how to encrypt query results and data on S3, how to perform basic queries in Athena, and more.
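
As a rough sketch of the kind of query the lab walks through, here is a minimal boto3 example that runs a SQL statement in Athena and asks for the results written to S3 to be encrypted. The bucket, database and table names are placeholders.

```python
# Minimal sketch: run an Athena query with SSE-S3 encrypted results, then
# print the returned rows.
import time
import boto3

athena = boto3.client("athena")

query_id = athena.start_query_execution(
    QueryString="SELECT * FROM demo_table LIMIT 10",
    QueryExecutionContext={"Database": "demo_db"},
    ResultConfiguration={
        "OutputLocation": "s3://demo-athena-results/",
        "EncryptionConfiguration": {"EncryptionOption": "SSE_S3"},
    },
)["QueryExecutionId"]

# Poll until the query leaves the queued/running states.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    if status["QueryExecution"]["Status"]["State"] not in ("QUEUED", "RUNNING"):
        break
    time.sleep(1)

# Fetch and print the result rows (raises if the query failed).
for row in athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]:
    print([col.get("VarCharValue") for col in row["Data"]])
```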
Using PowerShell DSC on Windows

Microsoft created PowerShell to help administrators automate admin tasks and configure systems. Built into PowerShell, Desired State Configuration (DSC) simplifies the configuration of servers for administrators. In this lab, you’ll work in a live Microsoft Azure environment to learn how to use PowerShell DSC. You will configure two Windows servers: a pull server hosting configuration files, and a node that pulls its configuration from the pull server.
Explore all of our newest content and see what’s coming up next on Cloud Academy in our content roadmap!

Google Cloud’s API knows the sort of thing you like to look at

The content below is taken from the original (Google Cloud’s API knows the sort of thing you like to look at), to continue reading please visit the site. Remember to respect the Author & Copyright.

Google Cloud’s Natural Language API has become a bit more, er, insightful: it can now sort content into 700 different categories, such as Health, Hobbies & Leisure and Law & Government.

This clustering allows businesses to tailor the info they deliver to customers to match those customers’ preferences. Google also added the ability to determine whether a specific entity in text is spoken about in a positive or negative light (previously the API could only find the sentiment in sentences or blocks of text).
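
As a rough illustration (method shapes follow the current google-cloud-language Python client, which may differ from the 2017-era API), here is a minimal sketch that classifies a passage of text into content categories and scores sentiment per entity. It assumes Google Cloud credentials are configured.

```python
# Minimal sketch: content classification plus entity-level sentiment with the
# Cloud Natural Language API (google-cloud-language client library).
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content=("The new electric SUV gets rave reviews for its range and handling, "
             "but owners say the infotainment system is slow and frustrating to "
             "use on long road trips."),
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

# Content categories, e.g. /Autos & Vehicles, with a confidence score.
for category in client.classify_text(request={"document": document}).categories:
    print(category.name, round(category.confidence, 2))

# Per-entity sentiment: positive or negative mentions of each entity in the text.
for entity in client.analyze_entity_sentiment(request={"document": document}).entities:
    print(entity.name, round(entity.sentiment.score, 2))
```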

While Google says text-clustering would help publishers such as Hearst “understand what their audience is reading and how their content is being used”, it might not be all sunny skies. Researchers warn that it raises some privacy concerns.

“If I know the tweets and news and other texts you consume, and I cluster them, then I can very quickly determine your set of interests / sentiments / whatever clustering regime is applied,” Eduard Hovy, a natural language processing expert at Carnegie Mellon University in Pittsburgh, Pennsylvania, told The Register.

Tailoring content to preferences could help customers quickly find relevant info, he said, but “on the down side, this creates confirmation bubbles that lead you to believe that the whole world thinks like you do, and that any opposite views you might encounter are just random crazies out there, not the majority.”

He added on a more general level that there are a lot of things that can be deduced by clustering user preferences that could be used in unsavoury ways by businesses and governments or ransomed by hackers.

And although businesses might argue they’re targeting “classes” of people, not individuals, if enough clusters intersect and more data is extracted, “you can sometimes pinpoint an exact home,” he said.

“And further on the down side, [collating such data] means a hacker or a government that gets hold of it might know more about you than you’d like, potentially for blackmail, insurance denial, etc.

“The question is: do you trust Google? Or Amazon? Or whoever it is who knows what text/images/video you consume?” he said.

Google has not responded to a request for comment. ®


Obsolete Technology

The content below is taken from the original (Obsolete Technology), to continue reading please visit the site. Remember to respect the Author & Copyright.

And I can't believe some places still use fax machines. The electrical signals waste so much time going AROUND the Earth when neutrino beams can go straight through!

Do “Bad Work” When You’re in a Productivity Slump

The content below is taken from the original (Do “Bad Work” When You’re in a Productivity Slump), to continue reading please visit the site. Remember to respect the Author & Copyright.

Dealing with anxiety or depression is challenging enough as it is, but trying to be productive at the same time can feel downright impossible, like swimming against a current. There are some small productivity hacks that can help.

Over at Peaceful Dumpling, writer Cara Danielle Brown suggests giving yourself permission to do “bad work.” In other words, allow yourself to just start working, even if your work isn’t perfect; you can focus on perfecting it later.

“This may sound counter intuitive to anxiety-ridden perfectionists, but more times than not, it’s not actually the work that is stressful – it’s the anticipation of having to do the work and having to do it well. Occasionally, we need to override the system and give ourselves permission to suck if it means making it easier to put pen to paper. Once the bulk of the task is done, it is always easier to go back through and fix it at a later date. What’s more, sometimes we unlock our creativity when we stop censoring ourselves and focusing on doing a good job. I speak from personal experience when I say that some of my best work has been done completely on accident when I gave myself permission to do a terrible job.”

Of course, it depends on the type of work you do. Not every task will allow for fine-tuning, but this is good advice for getting started in general (it’s similar to author Anne Lamott’s concept of Shitty First Drafts), and it’s extra helpful when you’re dealing with depression and getting started is your biggest hurdle. Even the simplest task can seem overwhelming when you’re depressed. Personally, the Pomodoro technique has also helped me tremendously. Because you work in 25-minute increments, this method allows you to break up the task into smaller chunks, which helps you focus.

Beyond anxiety and depression, Brown’s concept works well if you just find yourself in a productivity slump or a rut. Again, getting started can feel like such a huge obstacle when you’re not motivated, and allowing yourself to do “bad work” that you’ll edit later at least helps you jump into it. She offers some additional tips at the link below.

How To Stay Productive When Anxiety & Depression Feel Utterly Paralyzing | Peaceful Dumpling

London Calling: The Hackaday UK Unconference Roundup

The content below is taken from the original (London Calling: The Hackaday UK Unconference Roundup), to continue reading please visit the site. Remember to respect the Author & Copyright.

A trip to London, for provincial Brits, is something of an undertaking from which you invariably emerge tired and slightly grimy following your encounter with the cramped mobile sauna of the Central Line, its meandering international sightseers, and stampede of besuited commuters heading for the City. Often your fatigue after such an expedition will be that following the completion of a Herculean labour, but just sometimes it will instead be the contented tiredness of a fulfilling and busy time well spent.

Such will be the state of the happy band of the Hackaday community who made it to London this weekend for our UK unconference held in association with our sponsor, DesignSpark. A Friday night bring-a-hack social in a comfortable Bloomsbury pub, followed by Saturday in an auditorium next to one of the former Surrey Commercial Docks for a day of back-to-back seven-minute talks laying out the varied and interesting work our readers are involved in.

“Varied and interesting” does not even begin to cover the breadth of projects, and expertise covered by you, our readers. It is a constant surprise and delight as a Hackaday editor to see new and interesting fields covered through Hackaday.io projects, and that diversity carried through into this event, with a continuous flow of speakers covering everything from digital privacy through laser-enhanced Nintendo zappers and robotic telepresence devices to stem cell research. We love our community!

The trouble is, with so many attendees and so many high-quality talks, where do we start in trying to describe them? It is probably best then to present an overview from a personal viewpoint, those talks that most stuck in my mind, and to present my apologies to those others who I simply don’t have the space in which to mention.

A super-sized Boldport PCB

Perhaps the best place to start as a hardware enthusiast is with the three PCB-related speakers, to whom I would gladly have listened for far longer than the allotted seven minutes. [Saar Drimer] is a name some of you may recognise from his professional life running the Boldport electronic design agency, and he had brought along a variety of his more artistic work, including a super-sized PCB for a museum display and a PCB trophy he’d created in the past for our sponsor. His talk, though, covered the ingenious design of his Monarch soldering kit, a PCB flashing-light butterfly.

[Roger Thornton] is someone whose work you may well be familiar with even if you do not immediately recognise his name. As the Raspberry Pi Foundation’s principal hardware engineer it is his hand you see on a Raspberry Pi board, and his talk gave us a unique insight into the design of the Raspberry Pi Zero. Fitting a wireless chipset onto an already tiny board while keeping components on only one side and costs to a minimum turns out to be a task fraught with difficulty.

[Mike]’s super-tiny electric stuff

Then to [Mike Harrison], who you may know as [Mike’s Electric Stuff] from his YouTube channel. His attention had been captured by a new line of tiny surface-mount white LEDs in a supplier catalogue, which had sent him down a flight of fancy into the world of tiny densely-packed PCB matrices of grain-of-dust lighting. This might sound like a straightforward piece of design work, but the density involved also necessitated a close-spaced grid of PCB vias, around which on the other side of the board he had to lay his drive circuitry. The results were both beautiful and bright little screens, as well as an increasingly intricate range of little boards.

Security and privacy talks also featured on the agenda, with [Joe Fitz] showing us why not even hardware-based two-factor authentication should be viewed as entirely trustworthy, with a beautifully-executed little wireless backdoor PCB for those RSA SecurID tokens. Then [Dana Polatin-Reuben], whose day job is with Privacy International, laid out some of the more chilling aspects of the ubiquitous data collection by manufacturers of IoT devices.

[Alex Eames] and his bicycle turn signal

A couple of memorable talks centred around LED projects. [Alex Eames] of [Raspi.tv] fame showed us his LED bike lights, which of course are much more than mere lighting. [Alex]’s bike has indicators and a brake light, and because he abhors wiring and wants the convenience of removable bike lights, the front and rear units form the two halves of a wireless network. And then there was [Rachel Wong] talking about her wearable tech, though of course that was only half of what she covered. Her day job is as a stem cell scientist, of which cutting-edge work she gave us a brief flavour.

We had some robotics talks: [Libby Miller] demonstrated her [LibbyBot], an engaging telepresence bot created using an IKEA lamp, and [Neil Lambeth] gave us the low-down on robot football, with of course a supporting cast of robots.

As a Hackaday editor it was particularly good to meet some of the rest of the team, as we are spread across the globe. Our editor-in-chief [Mike Szczys] told us about his work on digital logic from first principles with discrete components, while managing editor [Elliot Williams] showed us his flip-dot display talk timer, unusually programmed in FORTH. Meanwhile our contributor-at-large [Alasdair Allan] delivered a cautionary tale on the dangers of trusting IoT data, and contributor [Adil Malik] showed us his rather beautiful three-phase power monitor. Then when it was my turn, for some light relief I eschewed hardware projects, and entertained the masses with a tale about cider.

Our venue was in quite a striking building

Finally, a notable talk came from [James Larsson], who you may recognise as the originator of the Flashing Light Prize. He announced the 2018 contest, which must involve neon lamps and a 1 Hz flash rate with a 50% duty cycle. Hackers, start your oscillators!

So the crowd that spilled out of the auditorium into the September night and made their way across to a nearby pub came away having had an edifying and entertaining day. There were people from all sides of our community present, people whose work we’d featured, and readers who had made the trek to London simply for the spectacle. We settled down for an evening of socialising over a pint or two of rather good craft ale, we Hackaday staffers having something of a need to relax after a day on our feet.

On behalf of Hackaday I’d like to extend our thanks to our sponsor DesignSpark, who made it possible to run a conference without charging for tickets, Canada Water Culture Space, whose staff provided the support that ensured everything ran smoothly, and finally to you, our readers and attendees. You make us who we are, and events like this one allow us to better remain in touch with you.

Our next global event will be the upcoming Hackaday Superconference in California in November. Our British readers can rest assured that this will not be the last time you will see us.

DesignSpark is the exclusive sponsor of the Hackaday UK Unconference.


Office 365 Audit Logging Generates Lots of Data – and Some Odd Entries

The content below is taken from the original (Office 365 Audit Logging Generates Lots of Data – and Some Odd Entries), to continue reading please visit the site. Remember to respect the Author & Copyright.

A Single Audit Mart for Everything in Office 365

About two years ago, Microsoft set out to create the unified Office 365 audit data mart. The idea was simple. Instead of every application having its own way to generate and report audit events for user and administrator actions, the audit data mart would be a common repository for Office 365. Applications continue to generate events and the pipelines ingest those events into the data mart. At the same time, transformation occurs by applying a common schema to the application-specific data so that the audit events share common fields.

Things started slowly. Exchange and SharePoint were the first applications to generate events. Azure Active Directory joined the party, followed by other applications like Teams and even Sway. Microsoft sped up ingestion processing so that events now appear within an hour (and often sooner) rather than lagging by 24 hours as happened for some sources. In short, a great deal of work over two years constructed a robust audit recording system.
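
To make the idea of a common schema concrete, here is a minimal, purely illustrative sketch of what a single normalised audit record might look like once ingestion and transformation finish. The field names follow the documented common schema (CreationTime, Operation, Workload, and so on), but the values below are invented and the exact payload varies by application.

# Illustrative audit record after normalisation to the common schema (values are made up).
$record = [ordered]@{
    CreationTime = '2017-09-20T10:15:32'    # UTC timestamp of the audited action
    Id           = '00000000-0000-0000-0000-000000000000'
    Operation    = 'FileModified'           # application-specific operation name
    RecordType   = 6                        # numeric code identifying the event category
    Workload     = 'SharePoint'             # originating application
    UserId       = 'user@contoso.com'       # the actor, which can also be a system account
    ObjectId     = 'https://contoso.sharepoint.com/sites/Sales/Report.docx'
    ClientIP     = '192.0.2.10'
}

Whatever the source application, these shared fields are what make it possible to search and filter the whole data mart consistently.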

As good as the progress has been, some inconsistencies and challenges still lurk. Here are some of my observations.

Some Applications Are Chatty, Some Are Not

In the world of Office 365 audit records, some applications are chatty and generate many audit records and some are terse. SharePoint is the most granular of all applications and records multiple audit records for what you might think are relatively simple operations. Take the example of applying a classification label, updating the title, and editing some of the properties for a document in a SharePoint library. You might see the first six audit records listed in Table 1 and the seventh if OneDrive for Business synchronizes the document library.

Operation | Item | Reason
Accessed file | Dispform.aspx | Display document properties
Accessed file | Editform.aspx | Edit document properties
Accessed file | Upload.aspx | Upload changes
ComplianceSettingChanged | Document URL | Apply label to document
Modified file | Document name | Update document with new title
FileModifiedExtended | Document URL | Update file properties
Downloaded files to computer | Document name | OneDrive for Business synchronization

Table 1: SharePoint audit records for changes to a document

It is reasonable that SharePoint is careful about recording what users do to documents. After all, many tenants use SharePoint as a document management system and want to know who did what and when. The results for other applications are less granular. Teams, for instance, records when users connect to the service, update settings, or add or change channels, but not much else. Planner does not support auditing and I have seen no trace of Yammer audit events in the log. Exchange does generate both mailbox and administrative audit events, but you must configure mailbox auditing to capture this information.
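
Since mailbox auditing is not enabled out of the box, the following is a minimal sketch of how you might switch it on across user mailboxes with the standard Exchange Online cmdlets. The 180-day age limit and the sample address are illustrative placeholders, not recommendations from the article.

# Enable mailbox auditing for all user mailboxes so owner, delegate and admin actions are recorded.
Get-Mailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox |
    Set-Mailbox -AuditEnabled $true -AuditLogAgeLimit 180

# Spot-check a single mailbox to confirm the setting took effect.
Get-Mailbox -Identity 'user@contoso.com' | Select-Object Name, AuditEnabled, AuditLogAgeLimit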

On the upside, good auditing exists for eDiscovery operations generated within the Security and Compliance Center. All this proves that Office 365 auditing is still a work in progress.

Limited Availability of Audit Data

Operating at the scale of Office 365 means the capture of truly massive amounts of audit records daily. To limit the resources needed for the audit data mart, Microsoft only keeps audit data for 90 days. This is not a hard limit and data might exist for 92 days or 93 days, but sometime soon afterwards the data are gone.


Obviously, some tenants want to be able to access audit data for longer. Microsoft’s solution is Office 365 Advanced Security Management (ASM), part of the E5 plan or available as a monthly add-on. ASM ingests audit data into its own store and keeps it for 180 days. You must license every user whose data you want ASM to manage, so at $3/user per month the cost can be considerable. On the upside, ASM does a lot more than audit management and you should evaluate ASM on that basis.

Cogmotive Discover and Audit is much cheaper than ASM and keeps audit data for much longer. Although it does not have the same kind of intelligence built into ASM to detect and highlight security anomalies, the Cogmotive product delivers very flexible search and reporting capabilities for Office 365 audit events.

Extracting to Another System

Because Office 365 keeps audit records for a limited time, some prefer to move Office 365 audit data to another system. The process to extract and format information can be challenging. A busy Office 365 tenant can generate many audit entries, especially if people make heavy use of SharePoint and OneDrive for Business. A good rule of thumb is to expect 200 audit records per day per active user. The Search-UnifiedAuditLog cmdlet (part of the Exchange Online cmdlet set) can find the records, but then you must extract the necessary information and pass it to the target system (here is a good example of the process used in one situation).
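
As one possible sketch of that extraction step, the snippet below pages through the unified audit log with Search-UnifiedAuditLog, expands a few fields from the JSON AuditData property, and writes the result to a CSV for whatever system sits downstream. The session name and output path are placeholders, and a production job would add error handling and tighter date windows.

# Extract the last 24 hours of audit records and flatten them to CSV.
$start = (Get-Date).AddDays(-1)
$end   = Get-Date
$all   = @()
do {
    $page = Search-UnifiedAuditLog -StartDate $start -EndDate $end `
        -SessionId 'AuditExport' -SessionCommand ReturnLargeSet -ResultSize 5000
    if ($page) { $all += $page }
} while ($page.Count -eq 5000)   # keep paging while full pages come back

# AuditData carries the application-specific detail as JSON; surface a few common fields.
$all | ForEach-Object {
    $detail = $_.AuditData | ConvertFrom-Json
    [PSCustomObject]@{
        CreationDate = $_.CreationDate
        RecordType   = $_.RecordType
        UserIds      = $_.UserIds
        Operation    = $detail.Operation
        Workload     = $detail.Workload
        ObjectId     = $detail.ObjectId
    }
} | Export-Csv -Path 'C:\Temp\AuditExport.csv' -NoTypeInformation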

Some Odd Audit Entries

As you might expect, auditing captures both user-initiated and system-initiated events. My difficulty here is that Microsoft explains the system-initiated events poorly. For example, what does the “app@sharepoint” job do precisely? It is certainly a busy beast and accesses documents in different sites (Figure 1). It seems like this might be the user name assigned to Search Foundation activities because every time someone updates a file, an audit record for app@sharepoint appears soon afterwards, perhaps when the crawler reprocesses the file.


Figure 1: app@sharepoint accesses a file (image credit: Tony Redmond)

app@sharepoint also turns up elsewhere, notably after the membership of an Office 365 group changes and the change replicates to the group for the SharePoint team site.
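
If you want an informal picture of how busy that account is in your own tenant, one quick check is to pull its recent entries and group them by operation, as in this sketch (the seven-day window is arbitrary):

# Summarise what the app@sharepoint system account has been doing over the last week.
Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date) `
    -UserIds 'app@sharepoint' -ResultSize 5000 |
    Group-Object Operations |
    Sort-Object Count -Descending |
    Select-Object Count, Name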

Figure 2 shows another strange audit entry. In this case, a system mailbox (presumably in Exchange Online) generates a new synchronization request audit record. Why?


Figure 2: A system mailbox makes a sync request (image credit: Tony Redmond)

Last, Figure 3 shows an audit record that I can explain. This is an example of an Exchange administrative audit event created by an Exchange Online background process to update the transport settings for the tenant.


Figure 3: Exchange Online updates the tenant (image credit: Tony Redmond)

All these audit entries are valid and I am sure that all the actions they capture are essential. However, that is not the problem. The issue is that tenant administrators do not know that these are system events. They can guess, and experienced administrators are likely to work out what actions cause these events to be captured, but it would be much simpler if Microsoft labelled them as “system events.”

Connector Events

Another oddity is when you see a batch of audit events recording when someone sends a message using the SendAs permission. Usually this happens when a delegate sends a message from another user’s mailbox. But if you configure an Office 365 connector to direct a Twitter feed into a group, you will see a blizzard of audit events. It is as if someone hyperactively used their delegate permission to spam the world. In fact, Exchange logs an audit event for each tweet downloaded through the connector (Figure 4).


Figure 4: A SendAs audit event for a connector

These audit events are worse than useless when it comes to tracking potential misuse of delegate permissions and are probably caused by a bug in the connector.
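
Until that behaviour changes (assuming it is indeed a bug), one way to keep connector noise out of a delegate-permission review is to pull the SendAs events and discard those attributed to the group mailboxes that host connectors. The addresses below are placeholders, and it is worth inspecting a few raw records first to see exactly which identity the connector events carry in your tenant.

# List recent SendAs events, excluding those attributed to connector-enabled group mailboxes.
$connectorGroups = @('newsfeed@contoso.com', 'twitterdigest@contoso.com')   # placeholder addresses
Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date) `
    -Operations SendAs -ResultSize 5000 |
    Where-Object { $connectorGroups -notcontains $_.UserIds } |
    Select-Object CreationDate, UserIds, Operations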


More Auditing to Come

Office 365 expands all the time and more applications generate audit events. That means that understanding the events captured in the audit log becomes harder because of the quantity and variety of data. Microsoft has done a good job of corralling some cats to create a common audit structure for Office 365. The trick now is to make everything even smoother.

If you want more information about Office 365 audit events and you are at the Ignite conference next week, come along to the “Decoding audit events in Microsoft Office 365” session (1:15pm in Expo Theatre #5, Monday, Sept 25). Fellow MVP Alan Byrne and I will try to throw some light onto this topic.

Follow Tony on Twitter @12Knocksinna.


Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros”, the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle.

The post Office 365 Audit Logging Generates Lots of Data – and Some Odd Entries appeared first on Petri.

How to Prepare to Work From Home

The content below is taken from the original (How to Prepare to Work From Home), to continue reading please visit the site. Remember to respect the Author & Copyright.

The day before you work from home, remember to transfer any important files, as Fast Company points out in their guide to working from home. If you’re using a different computer, sync everything over with Dropbox, email, or a USB drive. Even if you’re using the same computer, or if you mostly rely on cloud services,…

Read more…

3D Prints That Fold Themselves

The content below is taken from the original (3D Prints That Fold Themselves), to continue reading please visit the site. Remember to respect the Author & Copyright.


3D printing technologies have come a long way, not only in terms of machine construction and affordability but also in the availability of a diverse range of printing materials at our disposal. The common consumer might already be familiar with the usual PLA and ABS, but there are other, more exotic offerings such as PVA-based dissolvable filaments and even carbon fiber and wood infused materials. Researchers at MIT allude to yet another possibility in a paper titled “3D-Printed Self-Folding Electronics”, also dubbed the “Peel and Go” material.

The crux of the publication is the ability to print, in a more convenient planar arrangement, structures that are ultimately intended to be intricately folded. As the material is taken off the build platform, it immediately starts to morph into the intended shape. The key to this behavior is the use of a special polymer as a filler for joint-like structures, which are made out of more traditional but flexible filament. This special polymer, rather atypically, expands after printing, serving almost like a muscle to contort the printed joint.

Existing filaments that can achieve similar results, albeit after some manual post-processing such as immersion in water or exposure to heat, are not ideal for electronic circuits. The researchers focus on the new material’s potential use in manufacturing electronic circuits and sensors for ever-miniaturizing consumer electronics.

If you want to experiment with printing extremely intricate structures, check out how [_primoz_]’s brilliant technique revolutionized the way the 3D printing community prints thin fibers, bristles, and lion sculptures.

(A video demonstration accompanies the original post.)


UK police are using detection dogs that can sniff out USB drives

The content below is taken from the original (UK police are using detection dogs that can sniff out USB drives), to continue reading please visit the site. Remember to respect the Author & Copyright.

Devon & Cornwall and Dorset Police have begun utilising special FBI-trained sniffer dogs that have been specifically trained to detect hidden storage devices. Police dogs Tweed, a 19-month-old springer spaniel, and Rob, a 20-month-old black Labrador, are the first dogs outside of the US that will help track "terrorists, paedophiles and fraudsters" by tracing the unique chemicals found in hard drives, USB sticks and SD cards.

The dogs have been busy. While executing one warrant, Tweed sniffed out what looked like a Coke can. On closer inspection, officers found that the can was instead a can-shaped money box, which had been used to hide a number of SD cards. In a separate search, Rob was able to track down a device hidden carefully in a drawer that would likely have been missed during a visual search. The duo have already been used in over 50 warrants across Britain, including Hampshire, Essex, South Wales and North Yorkshire.

So-called Digital Storage Detection Police Dogs are widely used in the US, with both the Connecticut State police and the FBI employing their own device-seeking canines. A collaboration was formed between Devon & Cornwall and Dorset Police and the US authorities in December 2016, allowing Police Constable Graham Attwood and his team to begin working on training Tweed and Rob, which were specifically brought in for the programme.

"Myself and members of the alliance dog school, initially handled and trained Tweed and Rob, mainly in our own time, as we were committed to our usual daily duties of training the forces other operational police dogs," PC Attwood said. "The majority of the dogs we have in the force either come from our puppy breeding scheme or are gift or rescue dogs, but this was a unique challenge for us as so we identified and purchased Tweed and Rob last December when they were around 15 months old, and embarked on this journey with them."

Devon & Cornwall and Dorset Police aren’t shy when it comes to tech. They were one of the first units to deploy drones to help with crime scene photography and missing people searches. That trial has since been extended, with Devon police recently setting up the UK’s first 24/7 drone squad. Rob and Tweed are currently part of a pilot programme, but the force says it will assess their performance before the end of the year with a view to rolling it out wider.

Via: The Guardian

Source: Devon & Cornwall Police