11 signs your kid is hacking — and what to do about it

I’ve shared a lot of security knowledge in my tenure as InfoWorld’s Security Advisor. But what I’ve never shared before is that much of my initial computer security defense knowledge, which I turned into my first book, came from trying to stop my teenage stepson from being a malicious hacker.

I was newly dating his mother and he was a precocious 15-year-old who liked messing around with electronics and computers. He and his closest friends also flirted with malicious hacking, including harassing “ignorant” users, DoS-ing popular computer networks, making malware, and all sorts of unquestionably illegal and unethical hacking behavior.

His neighborhood computer hacking club eventually suffered a big takedown by the authorities. Luckily for him, and us, he had dropped out of illegal hacking activity a year before — but not before he fought against me and his mom’s rules and disguised his continuing hacking activities for many months. It was a daily (and nightly) battle of my latest defense against his new workaround. His mom and I even found previously unknown network cabling run through the attic and several hidden servers, proxy servers, and VPN switches. I learned a lot about hacking by trying to defeat his methods, and he learned that new potential stepdads trying to impress his mother were just as persistent — and at times smarter.

His mom and I recently celebrated 16 years of marriage, and we’re a happy family. In the years since fighting my stepson, I have detected many teenage hackers and have been asked by readers to counsel their hacking kids. No doubt a fairly substantial percentage of teenagers are maliciously hacking on a daily basis under the radar of their parents, who usually think their children are simply exploring what their computers can do and innocently conversing with their computer friends.

Hacking can provide a new world of acceptance and empowerment, especially for smart teenagers who are not doing all that well in school, are bored, or are getting harassed by other teens or by their parents because they “aren’t working to their full potential.” In the hacking world, they can gain the admiration of their peers and be mini-cyber rock stars. It’s like a drug for them, and a good percentage can turn permanently to the dark side if not appropriately guided.

The following signs can help you ascertain whether a young person in your life is involved in unethical, illegal hacking. Some of the signs may be typical teenage behavior, given teens’ intense interest in privacy, but enough of these signs together can point toward something more problematic. If you do find suspicious malicious activity, rest assured that you can turn a young hacker toward using their hacking skills for ethical, positive purposes, as I outline below.

1. They flat out tell you (or brag about how easy it is to hack)

It may be hard to believe, but many parents hear their children make direct claims about their hacking activity, often multiple times, and blow it off. They either don’t know what “hacking” means, or they assume good little Johnny isn’t doing anything stupid. Well, they might be.

Most hacking is easy: You read a hack how-to and then do it. Often it’s as easy as downloading a tool and pushing the GO button. On TV, hackers are always portrayed as masterminds. In reality, they’re usually more ordinary than genius. They read and learn. Persistence is their most outstanding trait.

Kids who get into malicious hacking often feel guilty about crossing the ethical line early on. Telling close friends and even their parents about their newly gained skills can be a way of reaching out and communicating that sense of guilt. Though most don’t realize it, they often want their parents to offer guidance at this critical juncture. Sadly, most parents and friends who hear these claims and confessions don’t know what to make of them, leaving their child or friend to sort out the conflict on their own. The results aren’t always for the best.

2. They seem to know a little too much about you

Kids who hack often start with those closest to them: their parents. If your child seems to know something they could only have learned by reading your email or monitoring your other online activities, your radar should be up.

It’s not uncommon for hacking kids to monitor their parents’ online activities, usually in hopes of capturing admin passwords or to learn how to turn off any anti-hacking devices, such as firewalls and parental controls, that you may have set up. (And you thought the monitoring was the other way around.) But then curiosity gets the best of them and they end up reading their parents’ emails or social media chats.

I’ve had more than one parent tell me they couldn’t figure out how their kids were getting around parental blocks, until they looked into the logs and saw that their parental blocks were being disabled and re-enabled frequently. Or their child made a snide remark or alluded to something they could have known only by reading a parent’s confidential communications. If your hacking kids seem to know more about you than you’ve shared, it’s a sign. Pay attention.

3. Their (technical) secrecy is off the charts

Every teenager wants 100 percent confidentiality on their online activities, regardless of whether they are hacking. But sophisticated protection, including encryption of all communications, files, folders, chats, and applications, may be a sign there’s something else going on besides garden-variety teen secrecy.

The tip-off? If you get on your child’s computer and can’t see any of their activity. If they always clear their log files and browser history, every time, and use special programs to encrypt files and folders, that’s a possible sign. Or if encryption settings on their applications are set to a level stronger than the program’s defaults. Any indication that they feel the built-in disk encryption and separate user profile protections aren’t enough should have you asking, for what kind of activity?

4. They have multiple accounts you can’t access

Many kids have multiple email and social media accounts. That’s normal. But if your child has a main email and social media account they don’t mind you reading and you come across signs that they have other accounts and log-ons they will not share, make a note of it. It may not be malicious hacking; it could be porn or some other activity you would not approve of (talking to strange adults, buying alcohol, purchasing weapons, etc.). But any sort of absolute privacy should be investigated.

My stepson and his hacking friends had a half-dozen account names. I could see them when I read through the firewall and packet filtering logs. I knew he had them, even when he was denying it. He was surprised to learn that PGP (Pretty Good Privacy) encryption didn’t encrypt the whole email. I explained how all email encryption had to allow the email headers to remain in the clear so they could be appropriately routed and handled. After that conversation, all the “secret” accounts disappeared from my future log captures. He didn’t stop using them; he just downloaded a new email encryption program, which did perform complete, end-to-end encryption. (Refer to the previous sign about encryption, above.)

5. You find hacking tools on their computer

If you suspect your kid is hacking, take inventory of all the programs and tools you can find on their system. If your kid doesn’t think you’ll do it or doesn’t know you’ve done it, you might get lucky and they might not be encrypted — yet. In fact, if you find lots of encrypted files and programs, that’s a red flag, too.

Port scanners, vulnerability scanners, credential theft programs, denial-of-service tools, folders of stored malware — these are strong signs your kid is hacking. If you’re not computer-savvy enough to recognize these tools, note the file names and search the internet. If more than one of the unknown programs points back to a hacker (or a computer security defender) website, you probably have a problem.

Why are tools to help defend against hackers a red flag? Isn’t that a sign your child wants to become a high-paid computer security consultant when they grow up? Sadly, not usually. I’ve yet to meet the kid who decided to become a computer security expert before college, unless they’d been defending themselves against other aggressive hackers as a teen.

Young hackers usually end up getting hacked by others, either from their own hacking groups or other hacking groups. Once they’ve been actively targeted and broken into once or twice, they will often concentrate on their own defenses. You’ll see firewalls they’ve downloaded and configured (the built-in ones aren’t enough in their eyes) and proxies (to hide their IP address or ports), and they will be scanning all the computers in the house for vulnerabilities, which they will admonish you to fix.

My stepson even let us know he had called the cable company and gotten us a new IP address. When I asked why, he told me that hackers were attacking us. I wondered why that might be, but then again the firewall was always showing hundreds to thousands of unauthorized probes and packets every day anyway. What I didn’t know was that he was engaged in an all-out cyberwar with a competing hacking group.

Supermicro’s macro Microblade: That chassis is… huge

Review Supermicro has a neat new product it calls “Microblades”. Supermicro has made blade servers for some time, and Microblades are blade servers, but smaller. Supermicro sent a chassis and a pair of blades over for review.

Each vendor has its own approach to server management, be that blade management or baseboard management controllers (BMCs), and Supermicro has chosen an interesting balance between features and cost. Supermicro has always focused on affordability by cutting back on what it sees as niche features while still maintaining the most in-demand feature sets.

For baseboard management controllers, this has worked to Supermicro’s advantage. If you buy a Supermicro server (or blade) with a baseboard management controller, you’ll get an IPMI- and Redfish-compatible server management plane that doesn’t require you to pay extra for an IPKVM licence.

While the IPKVM licence that allows remote control of a server through the baseboard management controller might not be a big expense on a 4-CPU server with 3TB of RAM, it gets noticeable at the small end. Think about having to pay $500 to enable IPKVM functionality in iLO on your $500 HP Microserver. (I’m not bitter, really.)

The Supermicro approach to blades

Supermicro has taken a similar approach to the feature capabilities of its blades. The result is remarkable affordability. The flip side is that one of the major use cases for blades – the automation and centralisation of management – isn’t possible with software from Supermicro itself.

A Supermicro blade enclosure is really not that different from anyone else’s. It’s a box into which you place blade servers. On the other end of the unit are power supplies, and the ability to connect one or more management and switch modules, depending on the size of the chassis you purchased.

In the case of the Microblade chassis you can have three basic configurations. The first is a 3U chassis that can support up to 14 blades, with one management module and two switch modules. The second configuration is a 6U chassis that can support one management module and up to 4 switch modules. The last is a 6U chassis that can support up to two management modules and up to two switch modules.

[Image: Supermicro MicroBlade CMM with switch module; the MicroBlade CMM UI looks a lot like the standard SMCI UI]

The management module offers control over the whole of the chassis through a web interface. From here one can control basic elements of the various connected devices. This includes powering on or off, updating firmware, remote controlling and monitoring.

One of the little things that really tickled me was the awareness of power that the management module had. It is aware of how much power devices can consume and, for example, refused to let me power up both blades until I had plugged in at least two of the power supplies. Supermicro’s blades also have the ability to set power policy if too many power supplies fail. You can power off certain blades or throttle most of them. You can even set a power cap on a per-blade basis.

To me, this is pretty neat stuff, but it’s all pretty basic as far as blades go. (Which just goes to show how often I work with blade chassis.) Supermicro’s offering here is where Dell, HP and Cisco were several years ago, and for most use cases that’s perfectly fine.

What Supermicro doesn’t offer is a multi-chassis management software solution. Each chassis is its own universe in Supermicro’s world. The lack of that management solution also means there is no Supermicro software for provisioning blades. If you want to load up operating systems or configure the switches, you have to do that manually or through third-party software.

This lack of management software won’t win a lot of friends in the hard-boiled legacy infrastructure community, but it’s not really much of an impediment to the hyperscale types. They’re all about provisioning through PXE booting and are used to working with “dumb nodes”.

For hyperscalers, IPKVM capability combined with the chassis-based power management and monitoring is already a cut above what they’re used to. Normally, hyperscalers won’t want to pay for the extra gubbins, so they won’t be buying Supermicro’s traditional blade solutions.

But they just might buy these Microblades.

A dense mass

For reasons I will never fully understand, Supermicro sent me a 6U chassis to test two blades. This gave us lots of opportunity to appreciate just how damned huge this Microblade chassis is. And it’s huge. It’s bigger than huge.

At 265mm x 449mm x 875mm (10.43″ x 17.67″ x 34.45″), it may be the single largest thing you can put onto a rack, and it probably isn’t going to fit onto most racks. The chassis alone, with just power supplies and no blades, is over 90kg (200lbs).

Moving a Supermicro server is a two-man job unless you employ Hodor, or if your organisation takes the term “forklift upgrade” literally.

And then you add blades. The blades themselves are compact at 1.2″ x 4.94″ x 23.2″ (30.48mm x 125.48mm x 589.28mm). Up to 28 blades can be put into a 6U chassis.

The blades come in flavours. One variant crams 4x Intel Avoton Atom nodes into a single blade. Each Avoton node can have 32GB of RAM and one 2.5″ drive. That’s 112 Avoton nodes (896 cores) crammed into 6U.

[Image: Supermicro MicroBlade CMM with iKVM open; screenshot by Supermicro showing how dual-node blades are represented]

The two blades Supermicro sent me were 2x Xeon-based blades, one with Xeon v4 chips and one with Xeon v3 chips. These blades will support Xeon E5-2695 CPUs, which have 18 cores (36 threads) per chip.

28 blades, each with 2 Xeon E5-2695 CPUs, 256GB of RAM and 2x 4TB 2.5″ SSDs, would provide 2,016 logical cores, 7,168GB of RAM and 224TB of storage in 6U. I don’t want to know the cost because I’d bet it rivals my house.
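If you want to sanity-check those figures, the arithmetic is simple enough to script. A quick sketch, using only the per-blade configuration above as its assumption:

    blades = 28
    cpus_per_blade = 2
    cores_per_cpu = 18          # 18-core Xeon E5-2695-class part
    threads_per_core = 2        # Hyper-Threading
    ram_per_blade_gb = 256
    ssds_per_blade = 2
    ssd_size_tb = 4

    print(blades * cpus_per_blade * cores_per_cpu * threads_per_core, "logical cores")  # 2016
    print(blades * ram_per_blade_gb, "GB of RAM")                                       # 7168
    print(blades * ssds_per_blade * ssd_size_tb, "TB of storage")                       # 224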

I remember when that kind of compute would have taken half a continent. Now it fits under my testlab table.

Practical considerations

Other than the size of the Microblade chassis (did I mention it’s enormous and heavy?) I was able to glean a few useful impressions from my time reviewing the device. Perhaps the most important thing to remember is that the switch modules don’t automatically power on when you plug in the device.

I realise it is probably revealing of my personal idiocy, but I banged away at that thing for a day before I realised the reason I couldn’t see any of the nodes was that the dang switch wasn’t turned on. The management module doesn’t use the switch; it has its own NIC. So you can cheerfully log into the management UI and even remote-control blades, all without the switch being turned on.

Without the switch turned on, the unit isn’t all that loud. No more than my 10GbE switch, for example. When we turned on the switch module, however, the unit was so loud that the lab’s cat jumped three feet straight into the air, kicked in the warp drive, and disappeared in a howl of protest.

Do not engage the Microblade chassis switch without hearing protection. Just don’t.

Swapping blades around is pretty easy, and the unit seems to remember settings for them. If I configured power thresholds for a blade they seemed to bind to the blade, not the slot in the chassis, so when I started moving the blades about the settings followed.

The chassis seems to be perfectly okay with you playing musical power supplies while the thing is lit up. I didn’t have redundant management modules or switches, so I can’t tell you anything about how it deals with changing those out.

Parting thoughts

Overall, my impression of the Microblade solution is good. To be honest, I’d like to see a multi-chassis management platform with some way to provision blades in a reasonably automated fashion. Given how Supermicro works, that will likely come from a third party, but perhaps Supermicro should get busy on partnering and listing those partners on its blade chassis webpages.

Other than that niggle, however, it does what it says on the (very large) tin. If you want to cram a whole lot of oomph into a very small space, this will do. It doesn’t quite have the density of HP’s Moonshot chassis, but it has more flexibility as regards the type of blades.

In all, it’s very Supermicro. It is just that little bit more than good enough, and it’s up to you to fill in the rest. And with that, I must be off: I really should go find the cat. ®

AutoArduino Lets You Control Your Arduino Projects From Tasker

Android: As if Tasker’s plugins weren’t powerful enough, the developer behind popular plugins like AutoVoice and AutoInput has released a new plugin that lets you control an Arduino from Tasker.

AutoArduino, which finally left beta this week, can control your Arduino board via USB OTG, Bluetooth, or Ethernet. After only a few weeks of beta testing, the plugin has already been used to control a sprinkler system and possess a Furby. Being able to connect an Arduino to your phone gives your projects access to a ton of new sensors and information, so if you’re into electronics hacking, this should be a fun new frontier for you.

AutoArduino | Joaoapps via Android Police

Free Online Training to Help You Learn AWS Security Fundamentals

by Jeff Barr, in Training and Certification

My colleague Janna Pellegrino shared the guest post below to introduce you to the newest version of our AWS Security Fundamentals Course.

Jeff;


Information security is deeply important to our customers. AWS Security Fundamentals is a free, online course that introduces you to fundamental cloud computing and AWS security concepts, including AWS access control and management, governance, logging, and encryption methods. It also addresses security-related compliance protocols, risk management strategies, and procedures for auditing AWS security infrastructure.

We have significantly updated this course to help our customers. Updates include:

  • New content on AWS security services related to encryption, network security, access control and management, and reporting of user access to AWS services.
  • Updated information about the AWS Shared Responsibility Model.
  • Demos to teach you how to create encrypted root volumes, configure AWS Web Application Firewall (WAF), and create and run AWS Config Rules to evaluate your AWS environment for compliance.
  • More robust content around AWS compliance and assurance programs, and AWS services that help enforce governance, compliance, and risk management.
  • A short quiz to assess your knowledge on AWS security concepts and services.

This four-hour, self-paced course is aimed at IT business or security professionals interested in cloud security practices and AWS, as well as IT auditors, analysts, and regulators. It is also a recommended prerequisite for our 3-day Security Operations on AWS course.

You can learn about this course and other training resources at AWS Training.

Janna Pellegrino, AWS Training and Certification

 

Data Center SDN: Comparing VMware NSX, Cisco ACI, and Open SDN Options

The data center network layer is the engine that manages some of the most important business data points you have. Applications, users, specific services, and even entire business segments are all tied to network capabilities and delivery architectures. And with all the growth around cloud, virtualization, and the digital workspace, the network layer has become even more important.

Most of all, we’re seeing more intelligence and integration taking place at the network layer. The biggest evolution in networking includes integration with other services, the integration of cloud, and network virtualization. Let’s pause there and take a brief look at that last concept.

Software-defined networking, or the abstraction of the control and data plane, gives administrators a completely new way to manage critical networking resources. For a more in-depth explanation of SDN, see one of my recent Data Center Knowledge articles.

There are big business initiatives supporting the technology. Very recently, IDC said that the worldwide SDN market, comprising physical network infrastructure, virtualization/control software, SDN applications (including network and security services), and professional services, will have a compound annual growth rate of 53.9% from 2014 to 2020 and will be worth nearly $12.5 billion in 2020.

As IDC points out, although SDN initially found favor in hyperscale data centers or large-scale cloud service providers, it is winning adoption in a growing number of enterprise data centers across a broad range of vertical markets, especially for public and private cloud rollouts.

“Large enterprises are now realizing the value of SDN in the data center, but ultimately, they will also recognize its applicability across the WAN to branch offices and to the campus network,” said Rohit Mehra, VP, Network Infrastructure, at IDC.

“While networking hardware will continue to hold a prominent place in network infrastructure, SDN is indicative of a long-term value migration from hardware to software in the networking industry. For vendors, this will portend a shift to software- and service-based business models, and for enterprise customers, it will mean a move toward a more collaborative approach to IT and a more business-oriented understanding of how the network enables application delivery,” said Brad Casemore, Director of Research for Data Center Networking at IDC.

There are several vendors offering a variety of flavors of SDN and network virtualization, so how are they different? Are some more open than others? Here’s a look at some of the key players in this space.

VMware NSX. VMware already virtualizes your servers, so why not virtualize the network too? NSX integrates security, management, functionality, VM control, and a host of other network functions directly into your hypervisor. From there, you can create an entire networking architecture from your hypervisor. This includes L2, L3, and even L4-7 networking services. You can even create full distributed logical architectures spanning L2-L7 services. These services can then be provisioned programmatically as VMs are deployed and as services are required within those VMs. The goal of NSX is to decouple the network from the underlying hardware and point completely optimized networking services to the VM. From there, micro-segmentation becomes a reality, along with increased application continuity and even integration with additional security services.

  • Use cases and limitations. The only way you can truly leverage NSX is if you’re running the VMware hypervisor. From there, you can control East-West routing, the automation of virtual networks, routing/bridging services for VMs, and other core networking functions. If you’re a VMware shop hosting a large number of VMs and are caught up in the complexities of virtual network management, you absolutely need to look at NSX. However, there are some limitations. First of all, your levels of automation are limited to virtual networks and virtual machines. There’s no automation for physical switches. Furthermore, some of the L4-L7 advanced network services are delivered through a closed API, and might require additional licensing. Ultimately, if you’re focused on virtualization and your infrastructure of choice revolves around VMware, NSX may be a great option. With that in mind, here are two more points to be aware of: If you have a super simple VMware deployment with little complexity, you’ll probably have little need for NSX. However, if you have a sizeable VM architecture with a lot of VMware networking management points, NSX can make your life a lot easier.

Big Switch Networks. Welcome to the realm of open SDN. These types of architectures provide for more options and even support white (brite) box solutions. Big Switch has a product called Big Cloud Fabric, which it built using open networking (white box or brite box) switches and SDN controller technology. Big Cloud Fabric is designed to meet the requirements of physical, virtual, cloud and/or containerized workloads. That last part is important. Big Switch is one of the first SDN vendors out there to specifically design networking services for containerized microservices. Here’s the other cool part: BCF supports multiple hypervisor environments, including VMware vSphere, Microsoft Hyper-V, KVM, and Citrix XenServer. Within a fabric, both virtualized servers and physical servers can be attached for complete workload flexibility. For cloud environments, BCF continues OpenStack support for Red Hat and Mirantis distributions. The other cool part is your ability to integrate it all with Dell Open Networking switches.

  • Use cases and limitations. Even though it will support other hypervisors, the biggest benefits come from the integration with VMware’s NSX. BCF interoperates with the NSX controller, providing enhanced physical network visibility to VMware network administrators. Furthermore, you can leverage the full power of your white (brite) box switches and extend those services throughout your virtualization ecosystem and the cloud via OpenStack. That being said, it’s important to understand where this technology can and should be deployed. If you’re a service provider, cloud host, or a massively distributed organization with complex networks, working with a new kind of open SDN technology could make sense. First of all, you can invest in commodity switches with confidence, since the software controlling them is powerful. Secondly, you’re not locked down by any vendor, and your entire networking control layer is extremely agile. However, it won’t be a perfect fit for everybody. Arguably, you can create a “one throat to choke” architecture here, but it won’t be quite as clean as buying from a single networking vendor. You are potentially trading off open vs. proprietary technologies, but you need to ask yourself: “What’s best for my business and for my network?” If you’re an organization focused on growth, your business, and your users, and you simply don’t have the time or the desire to work with open SDN technologies, this may not be the platform for you. There will be a bit of a learning curve as you step away from traditional networking solutions.

Cumulus Linux. This has been an amazing technology to follow and watch gain traction. (Please note that there are many SDN vendors creating next-generation networking capabilities built around open and proprietary technologies. Cumulus Linux is included here as an example and to show just how far SDN systems have come.) The architecture is built around native Linux networking, giving you the full range of networking and software capabilities available in Debian, but supercharged … of course! Switches running Cumulus Linux provide standard networking functions such as bridging, routing, VLANs, MLAGs, IPv4/IPv6, OSPF/BGP, access control, VRF, and VxLAN overlays. But here’s the cool part: Cumulus can run on “bare-metal” network hardware from vendors like Quanta, Accton, and Agema. Customers can purchase hardware at cost far lower than incumbents. Furthermore, hardware running Cumulus Linux can run right alongside existing systems, because it uses industry standard switching and routing protocols. Hardware vendors like Quanta are now making a direct impact around the commodity hardware conversation. Why? They can provide vanity-free servers with networking options capable of supporting a much more commoditized data center architecture.

  • Use cases and limitations. Today, the technology supports Dell, Mellanox, Penguin, Supermicro, EdgeCore, and even some Hewlett Packard Enterprise switches. Acting as an integration point or overlay, Cumulus gives organizations the ability to work with a powerful Linux-driven SDN architecture. There are a lot of places where this technology can make sense: integration into heavily virtualized systems (VMware), expansion into cloud environments (direct integration with OpenStack), controlling big data (zero-touch network provisioning for Hadoop environments), and a lot more. However, you absolutely need to be ready to take on this type of architecture. Get your support in order, make sure you have partners and professionals who can help you out, and ensure your business is ready to go this route. Although there are some deployments of Cumulus in the market, enterprises aren’t ripping out their current networking infrastructure to go completely open-source and commodity. However, there is traction with more Linux workloads being deployed, more cloud services being utilized, and more open-source technologies being implemented.

Cisco Application Centric Infrastructure (ACI). At a very high level, ACI creates tight integration between physical and virtual elements. It uses a common policy-based operating model across ACI-ready network and security elements. Centralized management is done by the Cisco Application Policy Infrastructure Controller, or APIC. It exposes a northbound API through XML and JSON and provides a command-line interface and GUI that use this API to manage the fabric (a minimal example of calling this API appears after the use-case notes below). From there, network policies and logical topologies, which traditionally have dictated application design, are instead applied based on the application’s needs.

  • Use cases and limitations. This is a truly powerful model capable of abstracting the networking layer and integrating core services with your important applications and resources. With this kind of architecture, you can create full automation of all virtual and physical network parameters through a single API. Furthermore, you can integrate with legacy workloads and networks to control that traffic as well. And yes, you can even connect non-Cisco physical switches to get information on the actual device and what it’s working with. In addition, partnerships with other vendors allow for complete integrations. That said, there are some limitations. Obviously, the only way to get the full benefits from Cisco’s SDN solution is by working with the (sometimes not entirely inexpensive) Nexus line of switches. Furthermore, more functionality is enabled if you’re running the entire Cisco fabric in your data center. For some organizations, this can get expensive. However, if you’re leveraging Cisco technologies already and haven’t looked into ACI and the APIC architecture, you should.
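To make the northbound API mentioned above concrete, here is a minimal sketch of authenticating to an APIC and listing tenants over its REST interface. The controller address and credentials are placeholders, certificate verification is disabled purely for lab convenience, and error handling is omitted; treat it as an illustration rather than a production client.

    import requests

    APIC = "https://apic.example.com"   # placeholder controller address
    LOGIN = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

    session = requests.Session()

    # Log in; on success the APIC hands back a session cookie used on later calls.
    session.post(APIC + "/api/aaaLogin.json", json=LOGIN, verify=False).raise_for_status()

    # Query the class API for tenant objects and print their names.
    tenants = session.get(APIC + "/api/class/fvTenant.json", verify=False).json()
    for obj in tenants.get("imdata", []):
        print(obj["fvTenant"]["attributes"]["name"])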

See also: Why Cisco is Warming to Non-ACI Data Center SDN

As I mentioned earlier, there are a lot of other SDN vendors that I didn’t get the chance to discuss. Specifically:

  • Plexxi
  • Pica8
  • PLUMgrid
  • Embrane
  • Pluribus Networks
  • Anuta
  • And several others…

It’s clear that SDN is growing in importance as organizations continue to work with expanding networks and increasing complexity. The bottom line is this: There are evolving market trends and technologies that can deliver SDN and fit with your specific use case. It might simply make sense for you to work with more proprietary technologies when designing your solution. In other cases, deploying open SDN systems helps further your business and your use cases. Whichever way you go, always design around supporting your business and the user experience. Remember, all of these technologies are here to simplify your network, not make it more complex.

Azure HDInsight application platform: Install solutions built for the Apache Hadoop ecosystem

Azure HDInsight is an Apache Hadoop distribution powered by the cloud. This means that it handles any amount of data, scaling from terabytes to petabytes on demand. Spin up any number of nodes at any time – we charge only for the compute and storage that you use.

We are pleased to announce an easy way to distribute, discover and install solutions or applications that you have built for the Apache Hadoop ecosystem. These solutions can span a variety of scenarios, from data ingestion, data wrangling, monitoring, and visualization to performance optimization, security, analysis, reporting, and more.

The following are some key highlights of this experience:

Ease of authoring, deploying and management

  • You can install these apps on an existing HDInsight cluster as well as while creating new clusters. 

Native access to cluster

  • The apps are installed on the Edge node and have access to the entire cluster.

Ease of configuring the app on the cluster

  • End users of these applications do not have to install or manage packages on each and every node and configure the application.

Install solutions on existing HDInsight clusters

  • Solution providers can make their solutions available to users who already have an HDInsight cluster running. This allows users to use these solutions easily and increase their productivity.

As a user, you can select the Applications blade to discover and install apps on your cluster.

In the following screenshot, I have an existing cluster and can discover the Datameer application. I can then click on the Datameer application for fast, easy installation on my existing cluster.

Once I’ve installed Datameer on my cluster, I can now see it on my list of Installed Apps.

With Datameer installed, I can now access the Datameer Studio from the Installed Apps blade to let me start a new project with Datameer’s end-to-end, self-service big data analytics platform running on HDInsight. The following screen shot shows the Datameer Studio:

You can also delete an installed application. In this screen shot, I had installed the Hue application, which I can easily delete by right-clicking on the application. Learn more about installing Hue here.

The following screenshot shows how to delete an application:

This is a super easy way of discovering and installing solutions that help an end-user to be more productive with Hadoop. Using the same approach, you can create custom applications and share them with your team.

Datameer has been developing this experience to make their solution easily available. According to Datameer’s Sr Product Manager of Cloud Solutions, Alejandro Malbet:

"Azure HDInsight Application Platform is the most robust and stable framework we’ve seen to quickly configure and test Datameer deployments in the cloud. We had all the flexibility to iteratively test different deployment options for our solution as well as Marketing collateral within the same Portal. By far the easiest and fastest way to take your cloud-based solution to Market."

If you would like to learn more about Datameer, please visit the Datameer listing in the Azure Marketplace.

If you have solutions that you have built for the Apache Hadoop ecosystem and would like to make them available to HDInsight, then please do read the following documentation on how to make them available.

Documentation & How-To’s

Install custom HDInsight applications

MSDN documentation on apps

Customize Linux-based HDInsight clusters using Script Action

Publish HDInsight applications into the Azure Marketplace

Summary

We hope that you find this an easy way to distribute solutions that increase the productivity of customers using Big Data. We invite independent software vendors (ISVs) to leverage this capability to make it easier for customers to discover and use your solution. Please reach out to [email protected] if you would like to participate.

5 steps for securing the IoT using Aruba ClearPass

Historically the Internet of Things (IoT) has been much more hype than substance. Sure, there have been a few verticals such as oil and gas and mining that have embraced the trend, but those verticals have been active in IoT since it was known as machine to machine (M2M).

Now, however, we sit on the precipice of IoT exploding. I’ve seen projections that by 2025, anywhere from 50 billion to 200 billion new devices will be added to the network. Which is right? Doesn’t really matter. The main point is that we’re going to see a lot of devices connected over the next 10 years, and businesses need to be ready.

+ Also on Network World: Experts to IoT makers: Bake in security +

IoT does present some unique security concerns for organizations. In fact, the most recent ZK Research Network Survey asked what the biggest impediment was to broader IoT adoption, and security ranked #1 by an overwhelming margin. Why is IoT security so difficult? It’s a fair question, as we’ve been connecting devices to our company networks for years.

Challenges of securing IoT devices

IoT devices are different, though. First, scale is an issue. Consider a hospital where the number of connected medical devices could outnumber traditional computers and printers by a factor of 4 or 5.

Also, IoT endpoints are often the domain of the operational technology (OT) group, not IT, so there may not be any awareness from the security team that new devices are being connected.

Lastly, IoT devices can be hard to secure. Some are old, some have proprietary operating systems, some have no security capabilities and the list goes on. The main point is that these devices have either never been connected to a network before or, at most, connected to a parallel closed network where security wasn’t a concern.

Given the magnitude of IoT and the concerns regarding security, it’s safe to say that businesses need to rethink their security strategy when it comes to IoT.

Securing IoT endpoints: 5 steps

To help understand what steps need to be taken when securing IoT devices, I turned to Vinay Anand, vice president and general manager of ClearPass for Aruba, a Hewlett Packard Enterprise company. I asked him what steps organizations should take to secure IoT endpoints. Here is his advice and how Aruba ClearPass could help:

  1. Onboard the devices. There’s no single way of onboarding a device. Aruba’s ClearPass supports a wide range of methods, including 802.1X authentication with RADIUS, MAC authentication, agents, MAC plus 802.1X or captive portal.
  2. Fingerprint the devices. This step requires gathering data and understanding the behavior of the endpoint. This is a critical step in looking for breaches, as any deviation from the normal behavior could indicate malicious activity.
  3. Put the devices into a profiler. ClearPass includes a built-in profiling service that can classify the devices. A variety of contextual data can be used to profile, including MAC OUIs, DHCP fingerprinting and other identity-centric device data. Unmanaged devices can be identified as either known or unknown when they connect to the network. The identity of these devices is based on the presence of MAC addresses in a database within ClearPass. 
  4. Create a policy. A policy is only as good as the data used to build it and the tool used to enforce it. Aruba takes an ecosystem approach to policies by partnering with a broad set of technology partners, including MobileIron and Palo Alto Networks. This lets policies be applied and enforced at every level of IoT, including the device, network edge, applications and internet. This gives customers tight control over how devices operate and communicate, resulting in better containments of threats when they emerge.
  5. Monitor and analyze traffic. ClearPass pulls data out of a number of systems, including control, authentication, communication, security and management systems. Data is gathered and then analyzed for odd behavior, and the device is either removed from the network or quarantined. That would happen, for example, if a medical device attempts to communicate with an accounting server. If that occurs, it could indicate a breach. When that kind of traffic is discovered, ClearPass can disconnect the device from the network, minimizing the damage.

Adequately securing IoT devices depends on organizations being able to quickly recognize a device when it joins the network. Aruba has thousands of profiles already created, and it has an exchange for partners to create their own, adding to the list of supported devices.

Securing IoT may seem daunting, Anand said, but it doesn’t have to be if you take the right steps and use the right tools.

Windows 10 Ignoring the Hosts File for Specific Name Resolution

Ever since the beginning of Windows and the TCP/IP protocol, resolution of computer names has been done through several methods (there are NetBIOS names and DNS names; we’ll focus on DNS names in this article). Using DNS for name resolution is the common practice nowadays, but another method of manually translating names to IP addresses has been the HOSTS file (the HOSTS file is also used in Linux/Unix and Mac systems).

Modifying the HOSTS file causes your computer to look directly at the IP address specified in it. This is useful, for example, when you want to test a website before going live with a public DNS name, or when you want to prevent your computer from resolving a DNS name, thus preventing it from reaching that host.

The HOSTS file, located in %WINDIR%\System32\drivers\etc, is a simple text file (although it does NOT have a .TXT suffix) where, for what seems like ages, you could add text lines such as this one:

<IP address><space><DNS name>

And once this was saved and the name resolution cache had been cleared (run ipconfig /flushdns in a Command Prompt window), your computer would resolve the DNS name to the given IP address.

To prevent the computer from communicating to any external DNS name, add the relevant name to the HOSTS file and point it to the 127.0.0.1 IP address (which is the local host itself), or to 0.0.0.0.

For example, here is what happens after adding such a line for a domain name and then running PING against it:

[Screenshots: the HOSTS file entry and the resulting blocked PING]
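In text form, such an entry looks like this (example.com is just a placeholder domain):

    127.0.0.1    example.com

After saving the file and running ipconfig /flushdns, a ping of example.com resolves to 127.0.0.1 and never reaches the real site.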

While this was true for all DNS domain names, in the past several years something has changed in the way Windows translates some names. This is another one of those annoyances that can drive you crazy as you try to figure out what exactly you did wrong, when in fact it isn’t a malfunction or a misconfiguration at all: it’s a built-in “feature” that Microsoft added a while ago (in Windows XP SP2, actually), and it’s still here in Windows 10.

These are the hardcoded DNS domain names that will resolve to their proper IP addresses regardless of what you put into the HOSTS file:

www.msdn.com
msdn.com
www.msn.com
msn.com
go.microsoft.com
msdn.microsoft.com
office.microsoft.com
microsoftupdate.microsoft.com
wustats.microsoft.com
support.microsoft.com
www.microsoft.com
microsoft.com
update.microsoft.com
download.microsoft.com
microsoftupdate.com
windowsupdate.com
windowsupdate.microsoft.com

These FQDNs are hardcoded in the following DLL:

%WINDIR%\system32\dnsapi.dll

The reason Microsoft added this is to prevent malicious software (or people) from using the computer’s HOSTS file to override name resolution for these domains. This means that even if you edited the HOSTS file and added records that changed the name resolution for Microsoft’s update servers, proper name resolution would still work and the operating system would still be able to reach the Microsoft update servers, allowing the OS to update itself, regardless of the changes to the HOSTS file.

Is this good or bad?

Depends on your point of view. Some may argue that if Microsoft did this, so can other companies that have their software installed on your computer: Adobe, Google, and others can be candidates (and some actually do bypass name resolution: for example, some browser makers). So today it’s Microsoft; tomorrow it can be anyone else, and soon thereafter our computer will “call home” without us being able to do anything about it. (In a way, this is exactly what’s happening now with Windows 10’s telemetry data being sent to Microsoft without us being able to really control it.)

Others may hold a different view. They claim that all Microsoft is doing is taking steps towards ensuring that users can get updates and patches to the operating system when needed, without worrying about any malicious software modifying the computers’ HOSTS file (and some do).

BTW, Windows 10 actually warns you when you (or a piece of software) try to modify the contents of the HOSTS file:

[Screenshot: the warning Windows 10 shows when the HOSTS file is modified]

So how do you prevent Windows from “talking” to these FQDNs? One approach would be to create Windows Firewall rules for these specific (or other) domain names (another approach would be to use third-party firewall products).
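Windows Firewall matches rules on IP addresses rather than on names, so one rough way to script that first approach is to resolve each hardcoded FQDN and generate outbound blocking rules for the resulting addresses. The sketch below is only an illustration: it prints netsh commands for review instead of running them, the rule names are arbitrary, and the addresses behind these names can change over time.

    import socket

    # A few of the hardcoded names from the list above; extend as needed.
    FQDNS = ["www.msdn.com", "msdn.com", "update.microsoft.com"]

    for name in FQDNS:
        try:
            # Gather every address currently returned for the name.
            addresses = {info[4][0] for info in socket.getaddrinfo(name, None)}
        except socket.gaierror:
            continue  # name didn't resolve; skip it
        for addr in sorted(addresses):
            # Print the rule instead of executing it, so it can be reviewed first.
            print('netsh advfirewall firewall add rule '
                  'name="Block {0}" dir=out action=block remoteip={1}'.format(name, addr))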

You can also use a third-party DNS proxy software that will replace Microsoft’s internal DNS client mechanism.

If you’re using a router/firewall to connect to the Internet, you can create blocking rules on that device, preventing your computer and any other operating system that’s connected to that network from connecting to these domain names.

The post Windows 10 Ignoring the Hosts File for Specific Name Resolution appeared first on Petri.

Transfer Your Evernote Notes Into Microsoft OneNote With the Importer Tool

If today’s news regarding Evernote’s new pricing plans and limitations has you considering Microsoft OneNote, the OneNote Importer tool makes the transition relatively painless for Windows users.

Evernote’s prices might be going up, but OneNote is a very comparable app, as well as completely free to use. And if making the switch is something you might be interested in, Microsoft has a handy tool that will migrate all of your Evernote content into OneNote for free. It’s not a bad deal considering Evernote’s Premium tier now costs the same per year as a subscription to Office 365—and that includes OneNote, 1 TB of cloud storage, plus the Microsoft Office suite. You can download the OneNote Importer tool for PCs with Windows 7 or later at the link below (a Mac version is in the works).

OneNote Importer Tool | OneNote via Microsoft Office Blogs

Artificial intelligence could be used to stop car smugglers

Chances are, you don’t spend a lot of time thinking about the logistics of international shipping — but you shouldn’t be surprised that transportation hubs are ripe for export fraud. Part of the reason for this is that there’s simply too much international cargo moved each month to be manually checked with human eyes. The solution? Teach a computer to inspect that cargo for you.

Okay, automatic, artificial intelligence cargo inspection isn’t actually a thing that’s happening right now, but research at University College London has proven that it’s a viable solution to a very real problem. A team at the school’s Department of Computer Science successfully trained a convolutional neural network to spot automobiles in X-ray images of shipping containers.

The neural network was startlingly accurate — correctly identifying cars 100 percent of the time with very few false alarms. The system even spotted cars in images that were challenging for human observers, finding the vehicles that were intentionally obscured by other objects. It wasn’t a revolutionary study, to be sure, but the project is a great example of how deep learning image recognition will be used to make our lives easier in the future. Check out the source link below for a detailed write-up of the project.
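For readers curious what such a model looks like in code, here is a minimal convolutional classifier sketch in PyTorch for single-channel, X-ray-style images. This is not the UCL team’s architecture; the input size, layer widths, and two-class output are arbitrary choices made purely to illustrate the technique.

    import torch
    import torch.nn as nn

    class CarDetector(nn.Module):
        """Toy CNN: a 1-channel 128x128 image in, car / no-car logits out."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(64 * 16 * 16, 2)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = CarDetector()
    dummy_scans = torch.randn(4, 1, 128, 128)   # stand-in for a batch of container X-rays
    print(model(dummy_scans).shape)             # torch.Size([4, 2])

A real system would, of course, be trained on labelled X-ray images and evaluated for false-alarm rates, as the UCL study did.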

Via: Dave Gershgorn (Twitter)

Source: arXiv

Twitter launches Dashboard app for small business accounts

To help business owners connect with their fans and soothe angry patrons, Twitter is launching yet another stand-alone app with a specific audience in mind. Twitter Dashboard is the social network’s attempt to streamline engagement for business accounts, whose users probably have better things to do than babysitting their mentions or constantly searching their own name.

Dashboard lives both in a web and iOS version, and includes features that have popped up in other Twitter products in the past. In both versions, Twitter guides you through a quick process to create a custom "About You" feed tailored to show tweets about your company or business (or personal brand, as the case may be). The feed takes into account what type of business you’re running (say a restaurant or an art gallery), then combines @-mentions and keyword searches to find people talking about your brand, even if they don’t tag your handle directly. In addition to a tweet scheduling feature and a reconfigured analytics page, Dashboard also offers tips so business owners who might be new to Twitter can get the most engagement out of their tweets.

At this point, Twitter’s app ecosystem is starting to look a little fractured with the standard Twitter app, Tweetdeck for the power users and Engage for the celebrities. But, more than anything, the app lineup speaks to the range of different ways in which people actually use the social network.

Azure Enterprise State Roaming for Windows 10 Now Generally Available

Microsoft has announced the release of Enterprise State Roaming for Windows 10 business customers in one of its regular Azure feature & pricing update emails. This feature brings user and app state roaming to the enterprise, similar to what consumers have had through OneDrive since Windows 8.0.

The text of the announcement from Microsoft [Image Credit: Microsoft]

What is Enterprise State Roaming?

This new service brings together Windows 10 and Azure Active Directory (Azure AD) to allow end users to synchronize their user settings and application settings/data across multiple devices using the power of the cloud. This is the sort of thing that users have experienced since Windows 8 if they associated their login with a Microsoft account; you change your wallpaper on a PC and, miraculously, it appears on all of your other associated devices. Microsoft wanted to bring this same sort of unified experience to enterprise users, but by using the power of the work account (an account that is synchronized with Azure AD).

Enterprises need a bit more than consumers, so Microsoft added some additional functionality:

  • A line between corporate and consumer data: This is something similar to what we have seen with app control in Microsoft Intune. Organizations need control of their data, so corporate data is not in a consumer cloud and consumer data is not in an enterprise cloud account.
  • Additional security: Data does not leave Windows 10 without being automatically encrypted using Azure Rights Management Services (which will become Azure Information Protection later this calendar year). Data remains encrypted while at rest in the cloud, protecting your business from unwanted inspection or theft.
  • Management: Security is one thing, but who is doing what and where does your data reside? You have control and visibility over who is syncing data and onto what devices.

What is Synchronized?

Microsoft has published a full list of what settings can synchronize or be backed up for Windows 10 PCs – note that Windows 10 Mobile is also supported for a subset of features.

Supported devices and endpoints [Image Credit: Aidan Finn]

Quite a few settings can be synchronized. You can learn more using the above listing and by reading the FAQ for Enterprise State Roaming.

Availability

Enterprise State Roaming is available now to all customers with Azure AD Premium, the per-user, paid-for step up from the free Azure AD subscription you get with Microsoft’s enterprise cloud services, such as Office 365. You can purchase Azure AD Premium through the CSP (Cloud Solutions Provider) or volume licensing channels, either by itself or as part of the Enterprise Mobility Suite (EMS) bundle.

Note that you do not get Enterprise State Roaming in your Azure subscription, even though Azure powers the solution; you must step up your free Azure AD subscription to Azure AD Premium.

Note that Enterprise State Roaming is limited to a subset of Azure regions at this time, but it will probably be rolled out further over the coming months.

The availability of Enterprise State Roaming [Image Credit: Aidan Finn]

Please note that Enterprise State Roaming is not supported on Windows Server SKUs, so this might impact your design choices if you are using Windows Server licensing for cost-effective (or hosted) VDI. In that case, Microsoft would recommend the use of UE-V (not roaming profiles).

The post Azure Enterprise State Roaming for Windows 10 Now Generally Available appeared first on Petri.

Test Raspberry Pi Code on Your Primary Computer with VirtualBox and a Bit of Tweaking

The Raspberry Pi is great because it’s a low cost way to test all sorts of crazy electronics ideas, but sometimes you might want to test before you test. Virtual machines are a great way to do so, and Grant Winney has a guide for setting one up using VirtualBox.

The idea here is that you don’t always want to sit around an actual Raspberry Pi testing code. A virtual machine means you can work off your primary computer, then transfer that code to the Pi later on, knowing it’ll work. In this guide, Winney uses a Debian build to get around limitations with Raspbian, installs Python, then sets up the Raspberry Pi GPIO for testing. It’s a clever way to code from your primary computer when you don’t want to set up the Pi. Head over to Winney’s site for the full guide.

How to Create a Raspberry Pi Virtual Machine in VirtualBox | Grant Winney
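
If you only need to exercise your GPIO logic rather than emulate a whole Pi, you can also stub out the GPIO library in a unit test on your primary computer. The sketch below is a minimal illustration of that idea and is not from Winney’s guide; the blink module, its setup() function, and the pin number are hypothetical.

    # test_blink.py -- a minimal sketch of testing Pi GPIO code on a non-Pi machine.
    # Assumes a hypothetical blink.py that imports RPi.GPIO and exposes setup().
    import sys
    from unittest import mock

    # Register a fake RPi.GPIO before the code under test imports it.
    fake_gpio = mock.MagicMock()
    sys.modules["RPi"] = mock.MagicMock(GPIO=fake_gpio)
    sys.modules["RPi.GPIO"] = fake_gpio

    import blink  # hypothetical module under test

    def test_setup_configures_pin_18_as_output():
        blink.setup()
        fake_gpio.setmode.assert_called_once()
        fake_gpio.setup.assert_called_once_with(18, fake_gpio.OUT)

Run with any test runner (e.g. pytest); the real code then runs unchanged on the Pi, where the genuine RPi.GPIO module is present.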

Rolls-Royce expects remote-controlled cargo ships by 2020

The content below is taken from the original (Rolls-Royce expects remote-controlled cargo ships by 2020), to continue reading please visit the site. Remember to respect the Author & Copyright.

Rolls-Royce isn’t limiting its robotic transportation plans to luxury cars. The British transportation firm has outlined a strategy for deploying remote-controlled and autonomous cargo vessels. It’s working on virtual decks where land-based crews could control every aspect of a ship, complete with VR camera views and monitoring drones to spot issues that no human ever could. Accordingly, Rolls is designing boats where humans wouldn’t have to come aboard. In theory, one human would steer several boats — crew shortages would disappear overnight.

The move to crew-free ships promises more than a few advantages, Rolls says. You wouldn’t need a bridge or living quarters, so you’d have much more room for the goods you’re hauling. They’d be safer and more efficient, too, since you’d cut out many human errors (not to mention the direct risks from rough weather and pirates) and streamline operations. Robotic ships might cut the number of available jobs, but they would let distant crews handle more complex tasks without being overwhelmed.

Some of Rolls’ concepts are more Star Trek than real life at the moment (its imagery includes interactive holograms), but this isn’t just a theoretical exercise. One ship, the Stril Luna, already has a smart Unified Bridge system in place for coordinating all its equipment. The aim is to launch the first remote-controlled cargo ships by 2020, and to have autonomous boats on the water within two decades. All told, civilians might only have to head out to sea for pleasure cruises.

Via: Daily Mail

Source: Rolls-Royce

Hackaday Prize Entry: A Local Positioning System

The content below is taken from the original (Hackaday Prize Entry: A Local Positioning System), to continue reading please visit the site. Remember to respect the Author & Copyright.

Use of the global positioning system is all around us. From the satnav in your car to quadcopters hovering above a point, there are hundreds of ways we use the Global Positioning System every day. There are a few drawbacks to GPS: it takes a while to acquire a signal, GPS doesn’t work well indoors, and because nodes on the Internet of Things will be cheap, they probably won’t have a GPS receiver.

These facts open the door for a new kind of positioning system: a local positioning system that uses hardware devices already have, but is still able to determine a location to within a few feet. For his Hackaday Prize entry, [Blecky] is building the SubPos Ranger, a local positioning system based on 802.15.4 radios that still allows a device to determine its own location.

The SubPos Ranger is based on [Blecky]’s entry for the 2015 Hackaday Prize, SubPos that used WiFi, RSSI, and trilateration to determine a receiver’s position in reference to three or more base stations. It works remarkably well, even in places where GPS doesn’t, like parking garages and basements.

The SubPos Ranger is an extension of the WiFi-only SubPos, based on 802.15.4, and offers longer range and lower power than the WiFi-only system. It’s still capable of determining where a receiver is to within a few feet, making this an ideal solution for devices that need to know where they are without relying on GPS.
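
To make the approach concrete, here is a rough sketch of the RSSI-ranging-plus-trilateration idea the project describes. This is not [Blecky]’s code; the transmit power, path-loss exponent, anchor coordinates, and RSSI readings are made-up illustrative values.

    import numpy as np

    def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
        """Log-distance path-loss model: estimate range (metres) from RSSI."""
        return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

    def trilaterate(anchors, distances):
        """Least-squares 2D position from three or more (x, y) anchors and ranges."""
        (x1, y1), d1 = anchors[0], distances[0]
        A, b = [], []
        for (xi, yi), di in zip(anchors[1:], distances[1:]):
            A.append([2 * (xi - x1), 2 * (yi - y1)])
            b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
        position, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return position

    anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # base station positions (m)
    readings = [-62.0, -55.0, -70.0]                   # RSSI in dBm from each anchor
    print(trilaterate(anchors, [rssi_to_distance(r) for r in readings]))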


OpenStack Developer Mailing List Digest June 18-24

The content below is taken from the original (OpenStack Developer Mailing List Digest June 18-24), to continue reading please visit the site. Remember to respect the Author & Copyright.

Status of the OpenStack Port to Python 3

  • The only projects not ported to Python 3 yet:
    • Nova (76%)
    • Trove (42%)
    • Swift (0%)
  • Number of projects already ported:
    • 19 Oslo Libraries
    • 4 development tools
    • 22 OpenStack Clients
    • 6 OpenStack Libraries (os-brick, taskflow, etc)
    • 12 OpenStack services approved by the TC
    • 17 OpenStack services (not approved by the TC)
  • Raw total: 80 projects
  • Technical Committee member Doug Hellmann would like the community to set a goal for Ocata to have Python 3 functional tests running for all projects.
  • Dropping support for Python 2 would be nice, but is a big step and shouldn’t distract from the goals of getting the remaining things to support Python 3.
    • Keep in mind OpenStack on PyPy which is using Python 2.7.
  • Full thread

Proposal: Architecture Working Group

  • OpenStack is a big system, and we have debated what it actually is [1].
  • We want to be able to point to something and proudly tell people “this is what we designed and implemented.”
    • For individual projects this is possible. Neutron can talk about their agents and drivers. Nova can talk about conductors that handle communication with compute nodes.
    • When we talk about how they interact with each other, it’s a coincidental mash of de-facto standards and specs. They don’t help someone make decisions when refactoring or adding on to the system.
  • Oslo and cross-project initiatives have brought some peace and order to implementation, but not the design process.
    • New ideas start largely in the project where they are needed most, and often conflict with similar decisions and ideas in other projects.
    • When things do come to a head, they get done in a piecemeal fashion: half done here, a third over there, a quarter there, three-quarters over there.
    • Maybe nova-compute should be isolated from Nova with an API that Nova, Cinder and Neutron can talk to.
    • Maybe we should make the scheduler cross-project aware and capable of scheduling more than just Nova.
    • Maybe experimental groups should look at how some of this functionality could perhaps be delegated to non-OpenStack projects.
  • Clint Byrum would like to propose the creation of an Architecture Working Group.
    • A place for architects to share their designs and gain support across projects to move forward and ratify architectural decisions.
    • The group would consist largely of senior engineers at the companies involved and, if done correctly, can help prioritize this work by advocating for fellow engineers to actually make it ‘real’.
  • How to get involved:
    • Bi-weekly IRC meeting at a time convenient for the most interested individuals.
    • #openstack-architecture channel
    • Collaborate on the openstack-specs repo.
    • Clint is working on a first draft to submit for review next week.
  • Full thread

Release Countdown for Week R-15, Jun 20-24

  • Focus:
    • Teams should be working on new feature development and bug fixes.
  • General Notes:
    • Members of the release team will be traveling next week. This will result in delays in releases. Plan accordingly.
  • Release Actions:
    • Official independent projects should file information about historical releases using the openstack/releases repository so the team pages on release.openstack.org are up to date.
    • Review stable/liberty and stable/mitaka branches for needed releases.
  • Important Dates:
    • Newton 2 milestone, July 14
    • Newton release schedule [2]

  • Full thread

Placement API WSGI Code – Let’s Just Use Flask

  • Maybe it’s better to use one of the WSGI frameworks used by the other OpenStack projects, instead of going in a completely new direction.
    • It will be easier for other OpenStack contributors to become familiar with the new placement API endpoint code if it uses Flask (see the sketch after this list).
    • Flask has a very strong community and does stuff well that the OpenStack community could stop worrying about.
  • The amount of WSGI glue above Routes/Paste is pretty minimal in comparison to using a full web framework.
    • Template and session handling are things we don’t need. We’re a REST service, not a web application.
  • Which frameworks are in use in Mitaka:
    • Falcon: 4 projects
    • Custom + routes: 12 projects
    • Pecan: 12 projects
    • Flask: 2 projects
    • web.py: 1 project
  • Full thread
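
For readers unfamiliar with Flask, the appeal is how little glue a small REST endpoint needs. The sketch below is a generic illustration only; the resource-provider routes are hypothetical stand-ins and are not the actual placement API.

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    PROVIDERS = {}  # in-memory stand-in for a resource-provider inventory

    @app.route("/resource_providers", methods=["GET"])
    def list_providers():
        return jsonify(list(PROVIDERS.values()))

    @app.route("/resource_providers/<uuid>", methods=["PUT"])
    def upsert_provider(uuid):
        PROVIDERS[uuid] = {"uuid": uuid, "name": request.get_json().get("name")}
        return jsonify(PROVIDERS[uuid]), 201

    if __name__ == "__main__":
        app.run()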

[1] – http://bit.ly/2979tQl

[2] – http://bit.ly/1ZywA8U

Researchers build a 1,000-core processor

The content below is taken from the original (Researchers build a 1,000-core processor), to continue reading please visit the site. Remember to respect the Author & Copyright.

You may have heard of many-core processors before, but you probably haven’t seen anything like this. UC Davis has developed the KiloCore, a CPU that (as the name suggests) packs a whopping 1,000 cores — extremely handy for very parallel tasks like encryption, crunching scientific data and encoding videos. And importantly, it’s not just about performance. Thanks to its ability to shut down individual cores, the chip can handle 115 billion instructions per second while using 0.7W of power. That’s enough that you could run it off of a lone AA battery, folks.

You aren’t about to see mass production. The university had IBM manufacture the chip on a relatively ancient 32-nanometer process when the industry’s newest processors are usually made using a smaller, more efficient 14nm technique. However, it raises the possibility of many-core processors finding their way into many mobile devices. They’re not universally helpful (many tasks are better-served by a few very fast cores), but they could save a lot of time when your laptop or phone would otherwise churn slowly.

Via: ScienceDaily

Source: UC Davis

Great ports we have loved

The content below is taken from the original (Great ports we have loved), to continue reading please visit the site. Remember to respect the Author & Copyright.


Connecting Users to the Azure Cloud

The content below is taken from the original (Connecting Users to the Azure Cloud), to continue reading please visit the site. Remember to respect the Author & Copyright.

Cloud Hero Azure

In this article I’m going to discuss the most forgotten aspect of migrating or deploying services into the cloud: Exactly how will the users connect to the services that will be running in Azure?

Wormhole Area Networking

I invented a new form of networking last year to deal with a common problem I am encountering with people who are considering deploying new services or migrating old services to the cloud. Lots of people have tried to hop onto the cloud bandwagon as it has been zooming past them. But when caught in that flash, they fail to account for considerations that they’ve been dealing with for over a decade: If I put the user in Place A and the server in Place B, with a latent network connection between them, then how will the user connect to the service?

What will the client experience be like when accessing a database over an 80 ms connection when you’ve always had less than 1 ms between the thick client application and the server?
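
To put rough numbers on that question (these figures are illustrative, not from any benchmark), consider a chatty client that makes hundreds of sequential round trips to render a single screen:

    # Illustrative only: the round-trip count and latencies are made-up values.
    round_trips = 200          # sequential queries to draw one screen
    lan_rtt = 0.001            # ~1 ms on the old LAN
    cloud_rtt = 0.080          # ~80 ms to the cloud region

    print(f"LAN wait:   {round_trips * lan_rtt:.1f} s")    # 0.2 s
    print(f"Cloud wait: {round_trips * cloud_rtt:.1f} s")  # 16.0 s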

And then I realized that these people had been planning to use WHAN … wormhole area networking. Packets would simply dematerialize from the user’s PC and rematerialize on the Azure virtual machine’s virtual NIC. Obviously, this is the way forward, so I *cough* trademarked it. Hey! If Apple can patent rounded icons then I can claim WHAN!

Wormhole Area Networking (WHAN) [Image credit: Aidan Finn]

Let’s get serious. What can you do to connect users? Here are some options to consider, from the traditional to the more cloud-suitable options.

VPN

If you want to consider traditional network connectivity that is tried and trusted, then you can go with VPN. Adding a gateway to an Azure virtual network means that you can get private and secured network connections to Azure virtual machines. There are two options:

  • Point-to-site VPN: Users have the ability to VPN onto the Azure network. It’s a simple solution to set up, but users don’t do well with VPN clients, and the solution is limited to 128 connections.
  • Site-to-site VPN: Depending on the gateway you configure, you can have between 1 and 30 sites connecting into an Azure network. This means that you can have simple connectivity for users that are in one of your sites. Problems will occur with roaming users.

One of the downsides of VPN is that Microsoft cannot provide an SLA between the outside of your firewall and their data center. Another is that VPN is limited to connecting to Azure virtual networks (virtual machines).

ExpressRoute

Microsoft has partnered with telecommunications companies to provide SLA-protected private connections between a customer and their Azure subscriptions (not just virtual networks). You can either:

  • Add Azure to your MPLS WAN
  • Connect to a point-of-presence data center, which in turn connects you to Azure

These plans are limited to a small set of service providers in a small number of markets. ExpressRoute is also very expensive.

Remote Desktop Services (RDS)

With VPN and ExpressRoute, we have a latent connection between the end user and the service. We can throw bandwidth at the problem, but that just means more old applications can have the same bad experience. My favorite method for connecting users to remote services is to use RDS. This is an old technique that:

  • Solves the latency issue by moving the client experience to the same place as the services and data.
  • Provides easy-to-use connections to services for users – no VPN client to remember thanks to an “SSL” gateway.

There are two ways to deploy RDS in Azure:

  • Deploy RDS for yourself: You’ll have to deal with all the complexities of deploying an RDS farm. (Citrix seems so much easier when scaling out.) In terms of licensing, you will either need RDS CALs with Software Assurance via volume licensing or RDS SALs via SPLA-R licensing.
  • Use RemoteApp: I like RemoteApp. It’s the connectivity solution that I recommend the most. It takes RDS and makes it easy. All you need to do is supply RemoteApp with a template of a session host, and RemoteApp does the rest. The per-user cost accounts for all of your RDS virtual machine and licensing costs.

How can I forget Citrix? If you do some searching you will encounter a paper called Deploying XenApp 7.5 on Microsoft Azure Cloud. Interestingly, Citrix also found (as did Microsoft and others) that the Standard A3 virtual machine gave them the sweet spot between performance and price.

Cloud

Modern applications don’t use thick clients anymore. You either use a web browser or an app, and the connectivity between the user and the service is based on HTTP or HTTPS. Any service that is built on HTTP/S is:

  • Designed to minimize data transfer between the server and end user. This improves performance and allows for controls over data leakage.
  • Designed to scale the way the cloud intends. Sessionless web servers or Azure web application instances can be powered on and off quickly. This simplifies scale-out/in, optimizes consumption based on demand/profit, and allows for easy high availability (to qualify for the Azure SLA).

Summary

When you are considering the cloud as an option, you need to think of more than just virtual machines. You need to consider storage, networking, security, and those annoying people that want to use the services and data. Remember to investigate the workloads that Azure will be hosting, and plan for a suitable way for users to connect to these services.

The post Connecting Users to the Azure Cloud appeared first on Petri.

Two more things to keep your costs on track in DevTest Labs

The content below is taken from the original (Two more things to keep your costs on track in DevTest Labs), to continue reading please visit the site. Remember to respect the Author & Copyright.

A few months back, we blogged about our first cost management feature: Monthly cost trend. It allows you to see how much you have spent in the current calendar month and also shows the projection of the spending until the month’s end, based on your spending in the last 7 days.
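
As a quick illustration of the projection logic described above (this reflects only the stated idea, not the service’s actual algorithm), extrapolating month-to-date spend from the average daily spend of the last 7 days looks like this:

    # Illustrative sketch of "projection based on the last 7 days of spending".
    def projected_month_end_cost(spent_to_date, last_7_days_spend,
                                 days_elapsed, days_in_month=30):
        daily_rate = last_7_days_spend / 7.0
        return spent_to_date + daily_rate * (days_in_month - days_elapsed)

    # $420 spent in 18 days, $140 of it in the last 7 days -> ~$660 by month end.
    print(projected_month_end_cost(420.0, 140.0, days_elapsed=18))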

Now, we’re moving on to answer a second question that has likely crossed your mind: why is my lab spending money so fast? To help you find a clear answer without scratching your head, we have released a new feature that shows the month-to-date cost per resource in a table.

Figure 1: Cost by resource

We have also released another feature that allows you to set a target lab cost for the current calendar month. The target cost appears in the “Monthly estimated cost trend” chart, which helps you track month-to-date lab spending relative to the target for the current month. The projected lab cost, together with the target cost, makes it easy to spot whether you are going to blow through your budget for the month.

For more details on these features and what’s coming next, please check out the post on our team blog.

Please try the new features and let us know how we can make them better by sharing your ideas and suggestions at the DevTest Labs feedback forum.

If you run into any problems with the features or have any questions, we are always ready to help you at our MSDN forum.

Announcing pricing for Google Stackdriver

The content below is taken from the original (Announcing pricing for Google Stackdriver), to continue reading please visit the site. Remember to respect the Author & Copyright.

Posted by Dan Belcher, Product Manager

We recently announced beta availability of Google Stackdriver, an integrated monitoring, logging and diagnostics suite for applications running on Google Cloud Platform and Amazon Web Services.1 Our customers have responded to the service with enthusiasm. While the service will be in beta for a couple more months, today we’re sharing a preview of Google Stackdriver pricing.

By integrating monitoring, logging and diagnostics, Google Stackdriver makes ops easier for the hybrid cloud, equipping customers with insight into the health, performance and availability of their applications. We’re unifying these services into a single package, which makes Google Stackdriver affordable, easy-to-use, and flexible. Here’s a high level overview of how pricing will work:

  • We’ll offer Free and Premium Tiers of Google Stackdriver.
  • The Free Tier will provide access to key metrics, traces, error reports and logs (up to 10 GB/month) that are generated by Cloud Platform services.
  • The Premium Tier adds integration with Amazon Web Services, support for monitoring and logging agents, alert notifications (integration with Slack, HipChat, PagerDuty, SMS, etc.), custom metrics, custom logs, 30-day log retention and more.
  • The Premium Tier will be priced at a flat rate of $8.00 per monitored resource per month, prorated hourly. Each monitored resource adds 500 custom metric time series and 10GB of monthly log data storage to an account-wide quota. Each project also receives 250 custom metric descriptors. Billable resources map roughly to virtual machine instances and their equivalents, as described here.
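
To make the Premium Tier pricing above concrete, here is a back-of-the-envelope calculation. The resource counts and the 730-hour month are illustrative assumptions, not Google’s billing logic.

    # $8.00 per monitored resource per month, prorated hourly (illustrative math).
    HOURS_PER_MONTH = 730          # approximate average month length in hours
    RATE_PER_RESOURCE = 8.00       # USD per monitored resource per month

    def premium_tier_estimate(resources, hours_monitored=HOURS_PER_MONTH):
        return resources * RATE_PER_RESOURCE * (hours_monitored / HOURS_PER_MONTH)

    print(premium_tier_estimate(20))         # 20 VMs all month        -> $160.00
    print(premium_tier_estimate(20, 365))    # 20 VMs for half a month -> $80.00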

For more details on the Free and Premium Tiers, please refer to the Google Stackdriver pricing FAQ, and watch this blog for more exciting Stackdriver news in the coming months!



1 "Amazon Web Services" and "AWS" are trademarks of Amazon.com, Inc. or its affiliates in the United States and/or other countries.

IBM wants to sell Power servers based on OpenCompute designs

The content below is taken from the original (IBM wants to sell Power servers based on OpenCompute designs), to continue reading please visit the site. Remember to respect the Author & Copyright.

IBM is warming up to the idea of adding servers using its Power processors and the OpenCompute open design to its product portfolio.

"I’m going to bring OpenCompute servers into my portfolio at some point so that I’m offering directly to the marketplace if there’s a demand for it," said Doug Balog, general manager for Power Systems at IBM.

An OpenCompute-based Power server will be based on open designs, and provide an alternative to IBM’s integrated systems like PurePower. It’ll also provide customers more flexibility on the components used inside systems.

A Power-based OpenCompute server will also be an alternative to open server designs based on x86 chips. One target for such Power servers is hyperscale vendors, who may be looking for an alternative to Intel chips, which now dominate data centers.

Kellogg’s creates a new fund, 1894, to back food and related startups

The content below is taken from the original (Kellogg’s creates a new fund, 1894, to back food and related startups), to continue reading please visit the site. Remember to respect the Author & Copyright.

Instead of two scoops, here’s one big one: the Kellogg Company is launching a corporate venture arm called Eighteen94 Capital (1894) to invest in food and food-related tech startups.

The name is a nod to the year that Dr. John Harvey Kellogg and his brother W.K. Kellogg, the company’s founder, created their first decidedly low-tech cereal.

Venture investors historically ignored consumer packaged goods, but technologies from social media to molecular sensors have begun to figure more heavily in the development, manufacturing, marketing and sales of food products.

Kellogg’s effort is just the latest in a string of funds created to grab stakes in hot startups in the massive global market for food.

According to data from the U.S. Department of Agriculture, global food retail sales reach about $4 trillion annually. And packaged foods alone should generate revenue of $3.03 trillion annually by 2020, according to forecasts from Allied Market Research.

Newer venture funds specializing in food-related deals include Accel Foods, CAVU Ventures, S2G Ventures and CircleUp.

And consumer packaged goods giants who already invest in venture deals regularly include General Mills via its 301 INC fund, and the Campbell Soup Co., the sole limited partner in Acre Venture Partners.

Established tech firms are also signing deals with food and beverage makers with the likes of Canaan Partners, Andreessen Horowitz and Khosla Ventures investing in, respectively, NatureBox, Soylent and Hampton Creek Foods.

Kellogg’s worked with Touchdown Ventures in San Francisco to set up its new fund, according to 1894 Managing Director Simon Burton and Kellogg Company Vice Chairman Gary Pilnick.

Ultimately, Kellogg’s wanted to start a VC arm because, Burton said, “The rate of innovation across our industry has picked up dramatically, things are changing quickly, and investing is a great way to get a sense of what’s going to be important in the future.”

Initially, Burton said 1894 will invest in North American companies that have revenue in the $5 million to $10 million range, making everything from natural and organic foods or beverages, to new packaging materials, ingredients, or sales and marketing technologies.

In a typical deal, 1894 expects to invest $1 million to $3 million in Series A and Series B stage startups. The fund is prepared to invest up to $100 million in startups over the next four years.

Eighteen94 Capital is the venture investing arm of the Kellogg Co. in Battle Creek, Mich.

“We’ll have a big focus on food without a doubt,” said Pilnick, “but we remain open to technology that helps us reach the consumer or retail partners. We want to win where the shopper shops, which sounds like an obvious thing, but there are a lot of ways to achieve that.”

The money for 1894’s deals will come from Kellogg’s corporate balance sheet.

Touchdown VC’s Managing Director Rich Grant and President Scott Lenet will continue to work with 1894 and Kellogg’s to connect the Battle Creek, Michigan company with the broader VC community and relevant food-focused accelerators. Besides linking Kellogg’s with co-investors, they said, they will also help 1894 bring in and evaluate deals, and manage due diligence reviews of startups.

Ultimately, Kellogg’s will make its own investment decisions about who they back and how much they invest, Grant emphasized.

While Kellogg’s is known as a cereal manufacturer, they also own vegetarian brands MorningStar and Gardenburger, salty snacks brands including Pringles and Austin, and myriad others.

Burton said he expects Kellogg’s depth and breadth of in-house expertise, especially relationships with and knowledge of food retailers, will draw food entrepreneurs to the new fund.

Pilnick added, “We have that Midwestern mindset of working together and partnering to get things done.”

The fund has not yet announced any deals.

Featured Image: Kellogg Co. Press Office (IMAGE HAS BEEN MODIFIED)

Brain-like computers may now be realistic

The content below is taken from the original (Brain-like computers may now be realistic), to continue reading please visit the site. Remember to respect the Author & Copyright.

Power consumption is one of the biggest reasons why you haven’t seen a brain-like computer beyond the lab: the artificial synapses you’d need tend to draw much more power than the real thing. Thankfully, realistic energy use is no longer an unattainable dream. Researchers have built nanowire synapses that consume just 1.23 femtojoules of energy — for reference, a real neuron uses 10 femtojoules. They achieve that extremely low demand by using a wrap of two organic materials to release and trap ions, much like real nerve fibers.

There’s a lot of work to be done before this is practical. The scientists want to shrink their nanowires down from 200 nanometers thick to a few dozen, and they’d need new 3D printing techniques to create structures that more closely imitate real brains. Nonetheless, the concept of computers with brain-level complexity is that much more realistic — the team tells Scientific American that it could see applications in everything from smarter robots and self-driving cars through to advanced medical diagnosis.

Via: Scientific American

Source: Science Advances

DNS security appliances in Azure

The content below is taken from the original (DNS security appliances in Azure), to continue reading please visit the site. Remember to respect the Author & Copyright.

Overview

Malware and botnets (such as ZeroAccess, Conficker and Storm) need to be able to propagate and communicate. They use several communication techniques, including DNS, IRC and Peer-to-Peer networks. Normally the DNS protocol resolves human-friendly domain names into machine-friendly IP addresses. However, the fact that most organizations do not filter DNS queries means that it can be used as a covert communication channel. Data leaving a compromised system can be encoded in the DNS query and instructions can be sent back to the malware in the DNS responses without raising suspicions.

This article gives an overview of this threat and describes some ways of protecting your network from it.

How DNS works

DNS is a highly distributed system, i.e. no single server or organization has the answers to all DNS queries. The “.com” DNS servers know which Microsoft servers have DNS data for “microsoft.com” but they do not have the DNS records themselves. These authoritative DNS servers only store data for their own domains and traversal of a number of authoritative DNS servers may be required to find a particular DNS record.

When an application needs to lookup a DNS record, a query is sent to a local recursive resolver. This server navigates the hierarchy of authoritative DNS servers to find the required DNS record. This process is called recursive resolution and is usually handled by a fleet of resolvers within your infrastructure, or in this case, within the Azure infrastructure. The IP addresses of the recursive resolvers are either statically configured in the operating system (by the network admin) or dynamically configured through systems such as DHCP. Azure uses DHCP.
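
As a quick illustration of the stub-resolver side of this process, the sketch below sends a query to whatever recursive resolver the OS (or DHCP, in Azure’s case) has configured. It assumes the third-party dnspython package (version 2.x); the commented-out resolver address is a placeholder, not a recommendation.

    # pip install dnspython  (assumed third-party library, version 2.x)
    import dns.resolver

    resolver = dns.resolver.Resolver()       # picks up the OS/DHCP-configured resolvers
    # resolver.nameservers = ["10.0.0.4"]    # or point at a specific recursive resolver
    answer = resolver.resolve("www.microsoft.com", "A")
    for record in answer:
        print(record.address)                # IP addresses returned after recursion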

 

Most of the time, recursive resolvers do not filter the queries they are resolving. While your network administrator might not allow you to open a http connection to an outside resource, chances are the admin will allow arbitrary DNS requests to be resolved. Sure, what harm can a DNS query do?

DNS-based threats

The DNS system doesn’t just map domain names to IP addresses. There are a number of different DNS record types. For example, MX records are used to locate the mail servers for a domain and TXT records can store arbitrary text. Records such as TXT records make it easier for the malware to retrieve data, e.g. instructions or payload.

Bad actors can use DNS queries within their malware to contact command and control servers. The malware does a DNS query and then interprets the response as a set of instructions, such as “this target is interesting, install the key logger”. They can also use DNS queries to download malware updates and additional modules. To make them harder to block, malware often uses a domain generation algorithm (DGA) to generate a large number of new domains each day. To keep the communication channel open, attackers only need to register a small number of these domains, but, to block communications, law enforcement needs to block nearly all of them. This stacks the deck in favour of the bad guys.

Ok, so that’s inbound communication. What about exporting data from infected servers? The DNS protocol allows a domain name to be up to 253 characters long and queries for “<something>.mydomain.com” will land on the DNS servers for “mydomain.com”. Data can be exported by crafting DNS queries on the infected host (e.g. “the_password_for_joeblogs_gmail_com_is_letMeIn.mydomain.com”) and using custom DNS server software to interpret the message on the other end. This method has been generalized into the TCP-over-DNS protocol (not to be confused with DNS-over-TCP), which tunnels TCP traffic through the DNS infrastructure.
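
A DNS firewall or monitoring pipeline can catch this kind of exfiltration with fairly simple heuristics on query names. The sketch below is a toy example of that idea, not a product implementation; the length and entropy thresholds are illustrative and would need tuning against real traffic.

    import math
    from collections import Counter

    def label_entropy(label):
        """Shannon entropy (bits per character) of a single DNS label."""
        counts = Counter(label)
        return -sum((n / len(label)) * math.log2(n / len(label)) for n in counts.values())

    def looks_suspicious(qname, max_label_len=40, entropy_threshold=4.0):
        """Flag query names with very long or unusually random-looking labels."""
        labels = qname.rstrip(".").split(".")
        return any(len(l) > max_label_len or
                   (len(l) > 16 and label_entropy(l) > entropy_threshold)
                   for l in labels)

    print(looks_suspicious("www.microsoft.com"))                                            # False
    print(looks_suspicious("the_password_for_joeblogs_gmail_com_is_letMeIn.mydomain.com"))  # True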

It’s important to note that the malware needs to have infected the server before it can start using these communication channels. Therefore, filtering DNS is primarily a layered defense mechanism–a mitigation for when other techniques have failed to prevent the initial infection. In desktop environments, DNS filtering can also help prevent malicious links in emails or on websites from initiating the infection process.

Best practices

There is no substitute for good security and good security always uses a layered approach. The primary focus should be on preventing malware infections and propagation. An additional layer is to monitor and/or filter DNS traffic to detect and/or block communication of malware on infected machines. A balanced defense strategy should consider the following:

  • Keep servers patched and up to date.
  • Expose only the endpoints that are truly necessary.
  • Use Network Security Groups (network ACLs) to restrict communication to/from/within your network, e.g. block DNS traffic (port 53) to servers other than trusted recursive resolvers.
  • Use firewalls (DNS, application and IP) to detect and filter malicious traffic; see the 3rd-party appliances available in the Azure Marketplace.
  • Separate critical and risky workloads (e.g. don’t surf the web from your database server).
  • Run anti-virus/anti-malware on your servers (e.g. Antimalware for Azure).
  • Run a smart DNS resolver (a DNS firewall) that scans DNS traffic for malware activity. See the Azure Marketplace for available 3rd-party DNS firewalls.

While the Azure infrastructure provides the core set of security features, Azure is also building a large ecosystem of 3rd-party security products. They’re available through the Azure Marketplace (e.g. firewall, WAF, antivirus, DNS firewall) and can be deployed with just a few clicks. Many offer a free trial, after which they can be billed either directly through the supplier or hourly through your Azure subscription.

A growing trend is for enterprises to deploy DNS firewalls in their infrastructure and we’ve started adding 3rd-party DNS firewalls to the Azure Marketplace. These are special DNS servers that inspect DNS queries for signs of malware activity and alert and/or block the traffic. For example, a query to a command and control (C&C) server can be identified by either the domain being queried or the IP address of the DNS server. A DNS firewall is deployed as a DNS server within your virtual network and often uses a threat intelligence feed to keep up to date with the changing threat landscape. So your virtual network will look something like this: