How virtualizing BLE Beacons will change the indoor mobile experience

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

Thanks to cellular GPS, the days of pulling your car over to ask for directions are long gone. It has never been easier to find your way from point A to B and to track down nearby points of interest like restaurants or gas stations.

But what happens when you walk indoors? The “blue dot” navigation experience doesn’t exist. Inside a mall, conference center, or office complex, you are back to stopping and asking for directions when you need them.

There is enormous demand for an indoor location experience that is on par with outdoor cellular GPS. Bluetooth Low Energy (BLE) is an exciting technology that promises to satisfy this demand. The major mobile device manufacturers have put their weight behind BLE beaconing standards and a robust BLE ecosystem has emerged to develop indoor location solutions. But two things have held BLE indoor location services back to date:

  • The high cost of overlay networks.
  • Complicated deployment and operations.

Both issues stem from the fact that BLE location services have required battery-powered physical beacons, which are difficult to deploy and manage. Fortunately, the recent introduction of virtual beacon technology changes all that. With virtualization, BLE location services are finally ready for mass-market adoption. Here’s how.

Simplified deployments using virtual beacons

BLE physical beacons are small, battery-operated devices attached to a wall or ceiling, usually about 30-50 feet apart. They broadcast BLE signals, typically at -10 dBm to +4 dBm of power, at rates ranging from 0.1 to 10 beacons per second. Each physical beacon must be configured and mounted manually, with extensive site surveys required for proper placement and calibration.

They are powered by batteries that can last from months to years depending on usage (stronger signals and more frequent broadcasts mean shorter battery life). When a battery dies, the beacon must be found and replaced. In large venues this can be a challenging and expensive feat, especially if beacons have been lost or moved (intentionally or otherwise). For these reasons, many companies have shied away from physical battery-powered beacons, which has hampered the widespread deployment of BLE.

Converging BLE functionality into existing Wi-Fi networks and virtualizing the physical beacon functionality allows companies to bring indoor location services to their business and customers. In other words, BLE broadcast functions move into the standard IT infrastructure: BLE antennas are added to a Wi-Fi Access Point, or deployed as dedicated BLE-only “beacon points” mounted on the ceiling and powered over Ethernet, eliminating the need for wall-mounted beacons with batteries. These Access/Beacon Points use directional antennas driven by a single Bluetooth transmitter sending unique RF energy in multiple directions.

These beacon points create a flashlight-like beam, with more energy pushed out in front of the directional antenna than out the back or to the sides; the resulting power distribution looks much like an ellipse. A probability weight is then assigned to each point in the location map: the further the measured signal strength is from the expected signal strength, the lower the probability that the device is at that location. By combining and then analyzing the probability surfaces for every directional beam, the most likely location of a device is determined with exceptional accuracy.
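
To make this concrete, here is a minimal sketch (not any vendor’s actual algorithm) of probability-weighted location estimation in C#. Expected signal strength comes from a standard log-distance path loss model, each directional beam contributes a Gaussian likelihood, and the candidate grid point with the highest combined likelihood wins; the constants, names, and the omission of antenna directionality are all illustrative assumptions:

using System;
using System.Collections.Generic;
using System.Linq;

static class BleLocationSketch
{
    // Expected RSSI (dBm) at a given distance, from a log-distance path loss model.
    // txPowerAt1m and pathLossExponent are illustrative; a real system learns these continuously.
    static double ExpectedRssi(double distanceMeters, double txPowerAt1m = -60, double pathLossExponent = 2.0)
    {
        return txPowerAt1m - 10 * pathLossExponent * Math.Log10(Math.Max(distanceMeters, 0.1));
    }

    // Return the candidate grid point with the highest combined likelihood across all beams.
    public static (double X, double Y) MostLikelyLocation(
        IEnumerable<(double X, double Y)> candidateGrid,
        IReadOnlyList<(double X, double Y, double MeasuredRssi)> beams,
        double sigma = 6.0) // assumed RSSI noise in dB
    {
        return candidateGrid.OrderByDescending(p => beams.Sum(b =>
        {
            double d = Math.Sqrt(Math.Pow(p.X - b.X, 2) + Math.Pow(p.Y - b.Y, 2));
            double error = b.MeasuredRssi - ExpectedRssi(d);
            // Gaussian log-likelihood: the further measured is from expected, the lower the weight.
            return -(error * error) / (2 * sigma * sigma);
        })).First();
    }
}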

Unsupervised machine learning in the cloud eliminates site surveys and ensures a consistent user experience across mobile devices and spaces; the RF environment is continuously learned in real time. RF models (e.g. path loss formulas) are updated as the environment changes, removing the need for manual calibration while maximizing BLE performance.

With BLE broadcast functions moved into Access/Beacon Points and location services handled in the cloud, there is no longer a need for physical BLE beacons. Virtual beacons can be added and moved anywhere on a floor using a software UI or programmable workflows. Power and interval settings can be configured and adjusted remotely (see figure below). In addition, different organizations can manage and operate their own beacons in the same venue, with an unlimited number of beacons available for deployment.

 Power and interval settings for virtual beacons can be configured and adjusted remotely. 

In summary, virtual beacons offer many advantages over physical beacons, which include:

  • No batteries.
  • Beacons are easy to set up and move.
  • No risk of loss, theft, or movement from a beacon’s original position.
  • Building aesthetics are not affected by the deployment of physical devices.
  • Virtual beacons are stackable so different applications and tenants can get different messages.
  • No site surveys or ongoing calibration required.

Do virtual beacons entirely eliminate the need for physical BLE beacons? While this is possible in theory, physical beacons still make sense in areas that are hard to reach from traditional WLAN access points. For example, rooms with high ceilings (like an atrium) still benefit from physical beacons, as do outdoor facilities or very high-density environments that require accuracy within one to three meters.

BLE has the ability to deliver amazing new indoor location-based experiences that are on par with outdoor GPS. By converging it with Wi-Fi and using machine learning in the cloud to optimize location performance, BLE is easier than ever to deploy. In addition, beacons can now be virtualized for simple moves, adds and changes with no costly site surveys or manual calibration.

The world is ready for new indoor location experiences. With virtual BLE, mass market adoption is just a few clicks away.

User Group Newsletter March 2017

BOSTON SUMMIT UPDATE

Exciting news! The schedule for the Boston Summit in May has been released. You can check out all the details on the Summit schedule page.

Travelling to the Summit and need a visa? Follow the steps in this handy guide.

If you haven’t registered, there is still time! Secure your spot today! 

 

HAVE YOUR SAY IN THE SUPERUSER AWARDS!


The OpenStack Summit kicks off in less than six weeks and seven deserving organizations have been nominated to be recognized during the opening keynotes. For this cycle, the community (that means you!) will review the candidates before the Superuser editorial advisors select the finalists and ultimate winner. See the full list of candidates and have your say here. 

 

COMMUNITY LEADERSHIP CHARTS COURSE FOR OPENSTACK

About 40 people from the OpenStack Technical Committee, User Committee, Board of Directors and Foundation Staff convened in Boston to talk about the future of OpenStack. They discussed the challenges we face as a community, but also why our mission to deliver open infrastructure is more important than ever. Read the comprehensive meeting report here.

 

NEW PROJECT MASCOTS

Fantastic new project mascots were released just before the Project Teams Gathering. Read the story behind your favourite OpenStack project mascot in this Superuser post.

 

WELCOME TO OUR NEW USER GROUPS

We have some new user groups which have joined the OpenStack community.

Canary Islands, Spain

Mexico City, Mexico

We wish them all the best with their OpenStack journey and can’t wait to see what they will achieve! Looking for your local group? Are you thinking of starting a user group? Head to the groups portal for more information.

 

LOOK OUT FOR YOUR FELLOW STACKERS AT COMMUNITY EVENTS
OpenStack is participating in a series of upcoming Community events this April.

April 3: Open Networking Summit, Santa Clara, CA

  • OpenStack is sponsoring the Monday evening Open Source Community Reception at Levi’s Stadium
  • Ildiko Vancsa will be speaking in two sessions:
  • Monday, 9:00-10:30am on “The Interoperability Challenge in Telecom and NFV Environments”, with EANTC Director Carsten Rossenhovel and Chris Price, room 207
  • Thursday, 1:40-3:30pm, OpenStack Mini-Summit, topic “OpenStack: Networking Roadmap, Collaboration and Contribution” with Armando Migliaccio and Paul Carver from AT&T; Grand Ballroom A&B

 

April 17-19: DockerCon, Austin, TX

  • OpenStack will be in booth #S25

 

April 19-20: Global Cloud Computing Open Source Summit, Beijing, China

  • Mike Perez will be delivering an OpenStack keynote

 

OPENSTACK DAYS: DATES FOR YOUR CALENDAR

We have lots of OpenStack Days coming up:

June 1: Australia

June 5: Israel

June 7: Budapest

June 26: Germany Enterprise (DOST)

Read further information about OpenStack Days on this website. You’ll find a FAQ, highlights from previous events, and an extensive toolkit for hosting an OpenStack Day in your region.

 

CONTRIBUTING TO UG NEWSLETTER

If you’d like to contribute a news item for next edition, please submit to this etherpad.

Items submitted may be edited down for length, style and suitability.

This newsletter is published on a monthly basis.


Y’know CSS was to kill off HTML table layout? Well, second time’s a charm: Meet CSS Grid

Browser makers unite to make web design great again

With the release of Safari 10.1 this week, four major browsers in the space of a month have implemented support for CSS Grid, an emerging standard for two-dimensional grid layouts in web applications.

For front-end web designers, this is a big deal. In a tweet, Eric Meyer, web development author and co-founder of An Event Apart, said, “Four browsers from four vendors rolled out Grid support in the space of four weeks. That’s just stunning. Never been anything like it.”

CSS Grid debuted on March 7 in Firefox 52, on March 9 (desktop) and March 27 (Android) for Chrome 57, on March 21 for Opera 44, and March 27 for Safari 10.1.

And one day after Meyer’s tweet, a fifth browser, Vivaldi, added CSS Grid support.

When CSS Grid support landed in Firefox, web developer Rachel Andrew remarked, “A specification of this size has never landed like this before, shipping almost simultaneously in almost all of our browsers. It’s a shame that Edge decided not to join the party, that really would have been the icing on this particular interoperability cake.”

Why all the enthusiasm? Back in December, when CSS Grid surfaced in developer builds of Firefox, web designer Helen V. Holmes in a Mozilla blog post explained that CSS Grid has the potential to change the way layouts are done by making web app code less fragile, more streamlined, and easier to maintain.

“Grid allows users to decouple HTML from layout concerns, expressing those concerns exclusively in CSS,” she said. “It adapts to media queries and different contexts, making it a viable alternative to frameworks such as Twitter’s Bootstrap or Skeleton which rely on precise and tightly coupled class structure to define a content grid.”

CSS Flexible Boxes are often cited as an alternative approach, but flexbox layouts extend along a single axis – think of a linear sequence of content containers – whereas CSS Grid layouts allow content to be aligned both vertically and horizontally.

To demonstrate the efficiency of CSS Grid designs, web developer Dave Rupert recently refactored a 50 line flexbox grid into 5 lines of CSS code.

Mozilla designer advocate Jen Simmons has a website that demonstrates some layouts that CSS Grid make possible.

In a phone interview with The Register, Meyer attempted to explain the excitement. “I think honestly it’s the first time CSS has actually had a real layout system,” he said.

Meyer said he expects people will add CSS Grid support to layout frameworks like Bootstrap, “but it’s not really needed.”

Meyer observed that the arrival of CSS Grid in five browsers in the space of a month wasn’t accidental. “It’s because the browser makers have been working together,” he said. “There was such interest and such sharing that Jen Simmons has called it, ‘a new day in browser collaboration.’ It was a race where all the racers helped each other when they stumbled.”

Well, almost all. Microsoft, which proposed the initial specification several years ago, only supports an older version of the spec in its Edge and Internet Explorer browsers. It’s still limping along, trying to get to the finish line. So much for tech industry kumbaya. ®

Forget robot overlords, humankind will get finished off by IoT

Something for the Weekend, Sir? Car horns symphonise accompanied by a chorus of yelling cyclists as I shimmy on foot through oncoming traffic. Strictly, I come dancing on to the tarmac, cavorting between the lanes, prancing out of the way of motorbikes and generally tripping the traffic light fantastic.

Moments earlier, I had been cutting capers along the pavement, trying to dodge the shuffling dead of oncoming pedestrians whose universal attention was buried six foot deep into their smartphones as they zig-zagged directly into my path one after another every 1.5 seconds.

Brains? I don’t think so.

To be fair, some of them are evidently tourists being led in circles around London by Google Maps’ illogical walking directions or, in the case of Apple Maps, wondering why they are being led through a street plan of Inverness. The rest are local zomboids with their heads down – checking messages, sending emails, reading fake news and trying to work out why their free Spotify accounts insist on playing every song in existence apart from the very one they chose to listen to.

Such creatures no longer trouble me as I have learnt to predict the snaking trajectory of their communal stagger. What forced me off the kerb and into the road this time wasn’t human.

Recent events in London, Nice and elsewhere demonstrate that keeping to designated walkways is no protection from a determined motorist with a pot belly and a dark agenda. I’m still curious to learn why last week’s fat bastard was referred to as a “bodybuilder” when evidently his supplier of nutritional supplements was not so much Holland & Barrett as Ben & Jerry’s.

But I digress. Here I am, a pedestrian forced into the road against my will – not by my fellow negligent pavement-bashers or by me recoiling at the stench of their rotting intellect but by a humble robot.

As I have remarked before, although pavements were intended for use by people making their way about on foot, they are getting to be increasingly congested with wheeled vehicles, with the addition of scooter-riders, skateboarders, cyclists and motorised disability vehicles to the regular throng.

Now it looks as if the threat of delivery bots being added to this list is becoming very real indeed, because that’s what caused me to stumble into the road.

Foolish me – there was no need for me to take evasive action. All I needed to do was stand still and the bot would have driven around me.

I’ll remember that in future. I look forward to stopping in my tracks every time a trundling pedal bin on wheels and waggling antenna comes my way. It’ll only add a quarter of an hour to every journey I make on foot, no worries.

What I do worry about is the welfare of these poor little delivery bots. I confidently forecast vast numbers of them to get kicked, bricked or nicked before they ever reach their allocated destinations.

A colleague confirmed as much when he spotted this cute Starship bot making its way along a suitably open pedestrian zone.

According to the fellow who was trailing the device during its test run, they had considered but rejected the idea of building solar panels into the top. Why? Because they expect the bots to get physically abused from time to time, and it would be cheaper to replace broken plastic lids than smashed solar panels.

I imagine that’ll turn out to be the least of their troubles when such things eventually swarm our granite slabs. With UK internet retail sales currently worth more than £133bn annually, and let’s say a quarter of these are small goods that could be delivered by robot, you’d only have to break into 1 per cent of the automated courier bots to rake in £325m of stolen goods per year.

Perhaps they should call it a stand-and-delivery service as highway robbery returns to civilised streets after a hiatus that lasted hundreds of years. There’s market disruption for you.

And with £0.3bn on offer, that’s a lot of lupins for the picking by a determined digital Dennis Moore.

But surely, you cry, a delivery bot is a mobile safe on wheels, built to be difficult to crack. Besides, aren’t they designed to emit a piercing alarm when interfered with?

Great. Not only will I have to dance around avoiding the little buggers every few paces, 1 per cent of them will run around screaming like two-year-olds – which as every parent knows is approximately measurable at 172dB.

Even so, there are less crude approaches that digital highwaymen can take than the crowbar. David Jinks, head of consumer research at delivery broker ParcelHero, reckons criminals might try using EMI jammers to cut off a delivery bot’s signals as it passes and whisk it away before either the owners or the robot itself knows what’s going on.

“By diverting delivery drones into Faraday cage-style boxes,” he says, “the modern-day highwayman will be able to block tracking signals and webcam pictures indicating where the delivery has been taken.”

That’s possible, I suppose, but surely even more likely is that he’ll hire a couple of programmers to hijack the bot in the easiest way: simply break into its IoT-enabled software.

As we’ve seen time and time again, the Internet of Things is demonstrably as robust and secure as a kitten crossing a motorway. If you can effortlessly take over an industrial dishwasher or change the ambient temperature in someone else’s car, I hardly think a mobile beer cooler will present much more of a challenge.

Once the dark forces of criminal behaviour enter the scene, I can see a day when these robots will get hijacked by zombified smartphone-enhanced highwaymen on every corner and eventually drive the rest of us off the kerb and into oncoming traffic – both literally and metaphorically.

Oh that’s just dandy, highwayman. Thanks.

Robots won’t kill off humankind. IoT will do that.

Alistair Dabbs is a freelance technology tart, juggling tech journalism, training and digital publishing. He would rather share footspace with a trundling delivery bot than face the daily risk of being scalped by a failing delivery drone tumbling from the sky. Besides, what’s the point of robbery when nothing is worth taking? As a great man once said: Da diddly qua qua.

G Suite vs. Office 365 cloud collaboration battle heats up

CIOs and IT managers are increasingly adopting Microsoft’s Office 365 and Google’s G Suite for collaboration, productivity and messaging. These cloud-based productivity suites are expanding, gaining new feature sets and new apps for enterprise users. Earlier this month, both Google and Microsoft introduced chat-based collaboration apps to reposition for competition in this fast evolving and hotly contested space.

Microsoft’s Teams, which has been in beta since November, was released for general availability for Office 365 customers. And Google introduced a rebuilt Hangouts, which has been split into two apps — Hangouts Chat for chat-based communications and Hangouts Meet for audio and video conferencing.

Here’s a look at how these offerings compare, who has market momentum and the challenges that lie ahead.

Differing approaches to communications

While there’s redundancy in features between Teams and Hangouts, Microsoft and Google have very different strategies. “Teams blends multiple collaboration modalities into a single interface and that helps workers who want to reduce all the context switching into and out of several other applications in order to perform their work,” says Adam Preset, research director at Gartner. “Being able to handle more asynchronous conversations and tasks and synchronous interactions in one place makes it easier to find the work you need to do and then execute on it,” he says. Hangouts, on the other hand, will be two tightly integrated experiences that could help workers “silo their work, if that is their wish,” Preset says.

[ Related: Microsoft Teams readies for battle in highly contested collaboration space ]

“Both ways of handling collaboration and communication can work,” he says. “Depending on context, sometimes you want a Swiss Army knife. Other times, you want a scalpel. You wouldn’t want surgery performed with the first and you wouldn’t want to try to whittle with the second.”

Slack and other team communication tools have fostered a highly competitive market for enterprise communications, but the benefits of these apps are still very much up for debate. “The most compelling features of chat-based tools are that the user doesn’t have to leave where they’re working to move to another app and the theory is that it helps productivity,” says Patrick Moorhead, president and principal analyst at Moor Insights & Strategy. “The jury is still out whether it really improves productivity. I like the thesis, but the benefits aren’t clear. Instead of having 500 emails, one could have 500 notifications from chat and that accomplishes little.”

Teams and Hangouts are more flexible and turnkey systems that don’t require much specialized hardware, which is a big advantage relative to dedicated telepresence systems, according to Jan Dawson, chief analyst and founder of tech research firm Jackdaw. “There’s a narrative that these chat apps can somehow replace email, whereas in my experience they often just become yet another communication channel to manage and keep an eye on, so the merits are at least somewhat debatable,” he says.

“Though some employees embrace them and some company cultures have changed enough to make the adoption of chat-based communication a much bigger part of internal communication, there’s always a lot of resistance too, and getting buy-in from employees can be tough,” Dawson says. However, Teams and Hangouts will do well in organizations that already use other productivity apps provided by Microsoft and Google, primarily because the integration benefits are much stronger than in standalone apps such as Slack, although it does have some integrations with third-party services, he says.

[ Related: Google splits Hangouts in half ]

Market footprints give Google and Microsoft an advantage, but the enterprise communication space will not be a winner-takes-all scenario, according to Raul Castanon-Martinez, senior analyst at 451 Research. “Companies could provide Teams or [Hangouts] but workers could still choose to use Slack, for example,” he says. “It might seem counterintuitive but an open platform is in the best interest of Google and Microsoft, and they will benefit from developing an ecosystem that will open the doors to more applications that can add value to their own set of tools.”

Microsoft leads in momentum

[ Related: Google gobbles up more big-name cloud customers ]

While both companies are competing to expand their presence in the enterprise, Microsoft has been in the enterprise game for decades and Google is a relative newcomer, according to Castanon-Martinez. And Microsoft’s lead is reflected in customer numbers reported by the companies. Microsoft says 85 million people use Office 365 on a monthly basis. G Suite currently has 3 million paying business customers and three of its apps are on more than a billion smartphones, according to Google.

“Office 365 is leading enterprise collaboration and productivity by orders of magnitude,” Moorhead says. “Microsoft has the blessing of a massive installed base and are taking advantage of that. Google started with a consumer base and are adding new customers, but the starting points are quite different.”

“Microsoft has always captured the vast majority of spend in this space and that will likely continue to be the case even as Google makes some modest inroads,” Dawson says. Google has some momentum, but Office 365 has the legacy Office base to sell to and is growing much faster in absolute terms because of that foundation, he says.

Microsoft is “simply the default in many enterprises and that has huge power,” Dawson says. “Google’s strength is its flexibility and its web-first focus, which is a better fit for many smaller, nimbler businesses and younger workforces.”

Comprehensive vs. best of breed

The prevailing value proposition of Office 365 and G Suite is the comprehensive set of tools that each offer to their respective customers. “Both vendors provide a broad set of tools, but not each tool in each suite is necessarily best-of-breed,” Preset says. “Sometimes you just need a tool to be good enough. However, looking at each suite, buyers do prefer some critical mass. The more compelling each element of the bundle is individually, the less an IT buyer has to seek and spend money on third-party solutions to compensate for weaknesses in the bundle.”

The best-of-breed approach is more theoretical than actual because APIs for third-party integration are always going to be hampered to some extent, according to Moorhead. “It’s very important to have a holistic solution as users flow back and forth between experiences and there’s only one throat to choke,” he says.

Google and Microsoft have a wide range of applications, many of which could be considered best in class, but achieving that level across all apps is unlikely, according to Castanon-Martinez. “The challenge will be to ensure that the synergy between apps effectively delivers value to the end user,” he says. “The whole should be more than the sum of its parts.”

The problem with point solutions is that IT professionals have to stitch various apps together and that presents accountability issues when things go wrong, Dawson warns. Many IT departments would prefer to have “one throat to choke, which is harder to have with a piecemeal solution,” he says. “Even pre-built integrations can be a hassle to implement and manage, and if there’s any kind of custom element required they often fall apart.”

Challenges ahead for G Suite and Office 365

Microsoft and Google face distinct challenges as they expand and evolve their respective apps for work in 2017, according to Castanon-Martinez. “Google is in the process of proving itself in the enterprise space,” he says. “They have a solid product in G Suite and are making strides with large deployments, but still need to work on developing a partner network and building their image as an enterprise software vendor.”

Meanwhile, over the years Microsoft has accumulated many collaboration and productivity tools that are “confusing and redundant,” he says. “Teams could be an opportunity to consolidate and streamline these tools under a single unified interface.”

[ Related: Google for Work vs. Microsoft Office 365: A comparison of cloud tools]

Microsoft must also overcome perceptions that Office 365 is “either too bloated or not functional enough,” Dawson says. Google’s challenge will be confronting negative perceptions of its web-based approach and how well that might work offline, he says.

“The positives of G Suite are that it starts from a cloud-first standpoint and what you get is an experience that reflects that, with responsiveness when connected to a good connection,” Moorhead says. G Suite is simple, but it also lacks many of the features and options of Office 365, he adds.

“Office 365 is also certified on the most stringent security measures for every country,” Moorhead says. “The biggest issue Office 365 has is the perception that it’s legacy and old, which just isn’t the case.”

This story, "G Suite vs. Office 365 cloud collaboration battle heats up" was originally published by CIO.

How to leak data from an air-gapped PC – using, er, a humble scanner

Cybercriminals managed to infect a PC in the design department of Contoso Ltd through a cleverly crafted spear-phishing campaign. Now they need a way to communicate with the compromised machine in secret.

Unfortunately, they know Contoso’s impenetrable network defenses will detect commands sent to their malware.

To avoid detection, they have to send data through a channel not monitored by the company’s IT security system, the Hyper IronGuard WallShield 2300, with its “military-grade” two-ply data leakage protection technology.

They consider several potential covert transmission techniques – inaudible sound, modulated light, even thermal manipulation of hardware – but none of these appear to be practical given their budgetary limitations and modest intellects.

Then one member of the three-person group recalls hearing about a security paper, “Oops!…I think I scanned a malware” [PDF], published earlier in March by researchers from two Israeli universities, Ben-Gurion University of the Negev and the Weizmann Institute of Science.

The other hackers are skeptical at first, but as they learn about the proposed technique, they become more open to trying it, particularly because it can be done with a drone. All of them love drones.

Scanner used to communicate with malware

The researchers, Ben Nassi and Yuval Elovici from Ben-Gurion University and Adi Shamir from the Weizmann Institute, describe a method for creating a covert communication channel between a compromised computer inside an organization and a scanner on the same network that happens to be near an external window.

The technique involves shining an external light, such as a laser or an infrared beam, through the window (or hijacking a manipulable internal light source) so that the illumination alters the scanner output to produce a digital file containing the desired command sequence.

To do so, the light must be connected to a micro-controller that modulates the binary-encoded commands from the server into light flashes that register with the scanner’s sensors.

“Since the entire scanning process is influenced by the reflected light, interfering with the light that is illuminated on the pane will result in a different electrical charge which will therefore be parsed to a different binary representation of the scanned material,” the paper explains.

The researchers describe setting a drone to hover outside a third-floor office window at a time when installed malware in the target organization had been instructed to begin scanning. With a transmission rate of 50 milliseconds per bit, they infiltrated the command “d x.pdf” to delete a test PDF file. The command sequence took 3.2 seconds to transmit using a laser mounted on the drone.
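
The arithmetic is easy to check: at 50 milliseconds per bit, a 3.2-second transmission carries 64 bits. A minimal C# sketch of the kind of on-off keying described might look like the following, where setLight stands in for whatever drives the laser or infrared source via the micro-controller; framing, error handling, and synchronization with the scan cycle are deliberately omitted:

using System;
using System.Text;
using System.Threading;

static class LightModulatorSketch
{
    const int MillisecondsPerBit = 50; // the transmission rate reported in the paper

    // Modulate an ASCII command into timed light pulses: light on = 1, light off = 0.
    public static void Transmit(string command, Action<bool> setLight)
    {
        foreach (byte b in Encoding.ASCII.GetBytes(command))
        {
            for (int bit = 7; bit >= 0; bit--)
            {
                setLight(((b >> bit) & 1) == 1);
                Thread.Sleep(MillisecondsPerBit);
            }
        }
        setLight(false); // leave the light source off when finished
    }
}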

The cyber thieves spend several days preparing to carry out their plan. But during the final rehearsal, one of them realizes it won’t work because the attack requires the scanner to be at least partially open to register incoming light.

Although Contoso’s precious secrets remain beyond their reach, all three soon get recruited by a Silicon Valley drone startup focused on pet transportation. ®

Solution guide: Archive your cold data to Google Cloud Storage with Komprise

By Manvinder Singh, Strategic Technology Partnerships, Google Cloud

More than 56% of enterprises have more than half a petabyte of inactive data, but this “cold” data often lives on expensive primary storage platforms. Google Cloud Storage provides an opportunity to store this data cost-effectively and achieve significant savings, but storage and IT admins often face the challenge of how to identify cold data and move it non-disruptively.

Komprise, a Google Cloud technology partner, provides software that analyzes data across NFS and SMB/CIFS storage to identify inactive/cold data, and moves the data transparently to Cloud Storage, which can help to cut costs significantly. Working with Komprise, we’ve prepared a full tutorial guide that describes how customers can understand data usage and growth in their storage environment, get a customized ROI analysis and move this data to Cloud Storage based on specific policies.

Cloud Storage provides excellent options to customers looking to store infrequently accessed data at low cost using Nearline or Coldline storage tiers. If and when access to this data is needed, there are no access time penalties; the data is available almost immediately. In addition, built-in object-level lifecycle management in Cloud Storage reduces the burden for admins by enabling policy-based movement of data across storage classes. With Komprise, customers can bring lifecycle management to their on-premise primary storage platforms and seamlessly move this data to the Cloud. Komprise deploys in under 15 minutes, works across NFS, SMB/CIFS and object storage without any storage agents, adapts to file-system and network loads to run non-intrusively in the background and scales out on-demand.
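
For reference, policy-based movement across storage classes is expressed as a lifecycle configuration on the bucket. The rules below are purely illustrative (move objects to Nearline after 30 days and to Coldline after a year) and can be applied with the standard gsutil lifecycle set lifecycle.json gs://your-bucket command:

{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
      "condition": {"age": 30}
    },
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 365}
    }
  ]
}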

Teams can get started through this self-service tutorial or watch this on-demand webinar featuring Komprise COO Krishna Subramanian and Google Cloud Storage Product Manager Ben Chong. As always, don’t hesitate to reach out to us to explore which enterprise workloads make the most sense for your cloud initiatives.

Building a Location Beacon Using Microsoft Flow, Power BI, and Azure Functions

Okay, I admit that the title of this post is a bit 1984-ish. Trust me, that is not the intention, so let me explain this a bit before showing you how to build a production-ready solution using Office 365 and Microsoft Azure for locating your dearest and nearest colleagues when you’re out and about.

 

 

The Premise

I run my own company, Onsight Helsinki, and we’re a team of about 30 people and counting. One of the major philosophies that I try to cultivate within our company culture is flexibility and freedom, along with the corresponding responsibility. What this boils down to is the fact that often our people are scattered around Finland and surrounding countries with their projects, classroom training, and those endless Skype conference calls.

We still have physical desks. Everybody has a dedicated desk because I felt it would make people feel more comfortable on those occasional days when they do visit the office.

So, the view when I (rarely) visit the office is something like in the picture below.

Where’s everybody? I can always check Skype for Business, but it doesn’t help me much.

Armed with this dilemma, I started building a solution that would allow me to provide a way for people to post their location, either semi-automatically or as a fully automated solution. You might call it a tracking solution, but I’d be inclined to call this a voluntary “hey, I’m currently here” location beacon. It could just be that a team member is currently staying in the same hotel as I am, unbeknownst to me. This has happened for us more than once.

The Solution

First off, I wanted my solution to contain as little custom code as possible. Not because I dislike writing code, but because for a solution as small as this, a huge amount of custom code tends to distract us in the long run. It’s also easier for someone else to pick up from here and expand the solution. For this very specific need, we built a one-off solution rather than a platform.

I started drawing out the overall architecture on a napkin while having solitary lunch in early January. Unfortunately, the napkin did not survive.

I wanted people to have a mechanism for checking in — wherever they are. When they’ve checked in, we record the GPS latitude and longitude coordinates of the device and save them to a shared repository. That would be a SharePoint Online-based custom list as it’s so easy to use, and it’s very flexible should the needs of the solution grow in the future.

From a SharePoint-based list, we can then pick the location metadata and plot people’s location on a map, which I can easily create with Power BI. The same Power BI visualization is also available through the Power BI mobile app for a quick glance of everyone’s whereabouts.

For people to check in using the solution, we decided to use Microsoft Flow — it has a handy button-based triggering mechanism and it can retrieve the latitude and longitude coordinates automatically from the device. With a recent update, Flow can also capture mandatory metadata as part of the trigger event — such as the person’s name.

Power BI is more than happy with latitude and longitude values, but they are less useful to human eyes. In case you just want to produce a report of everyone’s locations — say, as street addresses — I’m employing Azure Functions. With this serverless approach, we have a smart and endlessly flexible way to gather more data based on a simple latitude and longitude pair.

Building the Solution

Microsoft Flow is highly useful for building simple or more complex workflows without much deliberation. The process I built is rather simple, and it looks like this:

The Flow is triggered manually through a button visible on the mobile device. This is step 1 in the Flow above. The button also captures the user’s first and last name, which we’ll use to differentiate between multiple check-ins.

During step 2, we’re calling our custom Azure Function, which is used to gather more metadata besides the crude location of the device. This is also a great extension point for future changes, should we have a need for something more elaborate.

Google Maps provides a nifty set of APIs that help us achieve just this. By calling the Google Maps Geocode API, I can pass on a latitude and a longitude and get Google Maps based data in return. For more information on this API, see here.

Unfortunately, Microsoft Flow is somewhat lacking in its capability to call external APIs with finely tuned parameters and contracts, so this is where Azure Functions comes into play.

We created a simple C#-based Azure Function that takes latitude and longitude parameters in the HTTP request, passes them on to the Google Maps Geocode API with my developer key, and gets the street address (and a lot of other data that I might need later on) back. The Flow calls my function with the mobile device’s latitude and longitude coordinates, the function passes them on to Google Maps, and we get the information back.

The relevant portion of the code is here:

// Parse the "latlon" query parameter (formatted as "<latitude>,<longitude>")
string latlon = req.GetQueryNameValuePairs()
    .FirstOrDefault(q => string.Compare(q.Key, "latlon", true) == 0)
    .Value;

// Fall back to the request body if the query string did not contain it
latlon = latlon ?? data?.latlon;

// Google Maps Geocode API: takes a "lat,lng" pair and a developer key
var URI = "https://maps.googleapis.com/maps/api/geocode/json?latlng=" + latlon + "&key=<InsertDeveloperKeyHere>";

// Call the Google Maps Geocode API and read the JSON response
var request = System.Net.WebRequest.Create(URI) as HttpWebRequest;
request.Method = "GET";
request.ContentType = "application/json";
WebResponse response = request.GetResponse();

StreamReader reader = new StreamReader(response.GetResponseStream());
string json = reader.ReadToEnd();

We can now deserialize the response data (which is essentially a JSON document) and look up any information we might need. We’re picking the formatted street address for now and storing that in the SharePoint list at step 3.
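
Assuming the documented Geocode response shape and Json.NET (which is typically available to C# Azure Functions), picking out that street address can be as short as this sketch; the variable names and fallback string are illustrative:

using Newtonsoft.Json.Linq;

// Parse the Geocode response and take the first result's formatted street address.
JObject geocode = JObject.Parse(json);
string formattedAddress = (string)geocode["results"]?[0]?["formatted_address"] ?? "Unknown location";

// formattedAddress is the value written to the SharePoint "Location" column in step 3.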

Flow Designer makes it immensely easy to get data from an external API and simply use the response when populating data back to SharePoint Online.

 

This is the raw data that Flow is able to capture from mobile devices, which is then stored in the SharePoint list.

 

The Location column holds the data from Google Maps, and the Lat and Lon columns contain the exact coordinates of the mobile device at the time of check-in. In addition, we’re using a timestamp column (the Check-in time column) to filter out older check-ins.

On my Google Nexus 6P, an Android-based phone, the experience looks like this:

Finally, putting the data in a visualization with Power BI is easy. Power BI can employ data from SharePoint Online, and as we have the exact coordinates, we can use a Map control to visualize location data.

In the picture above you can see I’ve checked in from Dublin, and my colleague Heidi checked in from Espoo, a city next to Helsinki in Finland.

Final Thoughts

It took me about 4 hours to build the end-to-end solution. The trickiest part was building the Azure Function, as I had to figure out what kind of data I was getting from Google Maps and how best to deserialize it into a form I’d be comfortable using in my solution. Everything else — the Flow, the SharePoint list, and the Power BI report — is a point-and-click exercise, which allows for easy trial and error while validating the results.

The post Building a Location Beacon Using Microsoft Flow, Power BI, and Azure Functions appeared first on Petri.

This company is turning FAQs into Alexa skills

People looking for an easier path to integrating with Amazon’s Alexa virtual assistant have good news on the horizon. NoHold, a company that builds services for making bots, unveiled a project that seeks to turn a document into an Alexa skill.

It’s designed for situations like Airbnb hosts who want to give guests a virtual assistant that can answer questions about the home they’re renting, or companies that want a talking employee handbook. Bot-builders upload a document to NoHold’s Sicura QuickStart service, which then parses the text and turns it into a virtual conversation partner that can answer questions based on the file’s contents.

Right now, building Alexa skills is a fairly manual process that requires programming prowess and time to figure out Amazon’s software development tools for its virtual assistant. People who want to change the way that a bot behaves have to go in and tweak code parameters.

“Right now, if there has been a little bit of a pushback on bots and virtual assistants, it’s because they tend to be gimmicky,” NoHold CEO Diego Ventura said. “They’re either born out of mashing up existing APIs, like the typical bot that just gives you the weather, or they’re born out of somebody patiently having to specify every single utterance that somebody may use. That’s not scalable.”

In contrast, a bot made with QuickStart can be changed by uploading a new version of the document that spawned it. Based on a demo Ventura gave using an Echo in NoHold’s office, that same capability will apply to the company’s Alexa integration.

However, NoHold’s work still requires Amazon to make some changes to its development tools before the new capabilities can be made broadly available. Ventura is hoping the company will make the requisite changes in the near future.

The company’s goal for this feature is to allow QuickStart users to turn a bot they create into a skill that other people can download from the Alexa marketplace.

In the meantime, people can still try out QuickStart’s text bot creation capabilities for free. NoHold also offers a pro version of QuickStart that supports customizing how the resulting bot looks.

Here’s how Microsoft is helping companies build IoT hardware

One of the biggest challenges with building connected hardware is getting from proof-of-concept (PoC) prototypes to devices that are ready for large-scale production rollout. Microsoft is aiming to help through labs that allow companies to come in and work with experts on building internet-connected hardware.

Companies come into one of three Microsoft Internet of Things and Artificial Intelligence (IoT/AI) Insider Labs with the hardware they’ve built so far and a plan for an intense two or three weeks of work. Visitors are paired with mentors who are experts in different areas and given access to machinery that can help them quickly work through different hardware designs.

The goal of the labs is to shave months off companies’ development timeline. Microsoft also provides help with configuring its cloud services for use with the hardware developed in the labs, which is how the tech titan benefits from the program.

There’s a need for programs like this because of the challenges of building connected hardware. Creating new IoT devices is often difficult, according to Dima Tokar, the co-founder and CTO of IoT analysis firm MachNation.

“Today, many enterprises are stuck in the PoC phase of their IoT journey because they are finding that it is more challenging than they expected to take that PoC and make it production-grade,” he said in an email. “It’s difficult because all components of an IoT solution need to scale, and figuring out the IoT security posture as well as management and oversight requirements can be a daunting task.”

Squeezing months of work into weeks of lab time

A recurring theme among lab administrators and participants is that the environment Microsoft created lets teams get done in a matter of weeks what might have otherwise taken months. The IoT/AI Labs include machines that can help with rapid prototyping of hardware, including the design and testing of printed circuit boards (PCBs).

Testing PCB designs with a contract manufacturer can be a time-consuming process. Companies have to send their design off, then wait for the manufacturing run and subsequent shipping. In contrast, Microsoft’s labs are capable of cranking out at least two iterations of a circuit board in a single day, according to Cyra Richardson, a general manager of business development for IoT at the company.

Another benefit to visitors is the labs’ dedicated engineers, who are there to help work through problems in person. Richardson pointed out that seeking answers online for engineering problems doesn’t always yield definitive or useful answers.

“Imagine if you had someone who cared about what you were doing, and really was invested in your acceleration,” she said. “And imagine if they were right there in front of you.”

Microsoft has also partnered with a set of ecosystem partners, including Cisco, Dassault and Seeed Studio to help lab participants with areas outside of the Redmond company’s expertise.

“When you think about these new intelligent systems, these new products, you have multiple elements that need to come together. Hardware, industrial design, software, cloud services, all the way through to natural user interface for these products,” Kevin Dallas, Microsoft’s corporate vice president for IoT and intelligent cloud business development, said in an interview. “It’s quite a lot of technology, and it’s easy to get lost in that technology in terms of delivering backend products.”

Entering the labs

Getting into the labs requires an online application, which asks for information about what a company is working on, the size of their team, and other details. Microsoft doesn’t charge for a visit to the Insider Labs, but the company does handpick the teams that get to participate.

One such firm was New Sun Road, a company based in Berkeley, California, that builds solar power systems for the developing world.

“We basically needed to fill some holes,” NSR co-founder Jalel Sager said. “Not just in the cloud platform, but also around some of those IoT devices that would take that information up to the cloud.”

In NSR’s case, the staff at the labs helped in a variety of different product areas, including circuit board design and picking out which Microsoft cloud services made the most sense. In the event NSR encountered a snag that the lab engineers couldn’t solve, it was possible for them to get ahold of a member of the Microsoft product team who could answer their question.

The team at Sarcos Robotics came into the IoT/AI labs with the aim of getting their forthcoming set of robots hooked up to the cloud. The company was previously a part of Raytheon, and has plenty of experience building human-controlled robots. Ben Wolff, the company’s chairman and CEO, said that Sarcos went to the Redmond lab for help with getting its Guardian S robot connected to Azure.

“Historically, we have not collected any data from the sensors [on a robot] or used the sensor data in any way other than to just control the real-time operation of the robot,” he said.

In his view, changing technology means Sarcos customers can benefit from using sensor data to monitor a robot’s performance as well as get information about the environment it’s in. Work that the Sarcos team did with Microsoft will allow the robot to be used as a platform for collecting sensor data and bringing it into the cloud for further analysis.

Engagement with the labs doesn’t stop after a company leaves the building. Participants in the program are able to reach back out to Microsoft for more help if they need it, and the program does allow repeat visits.

Into the future

At the moment, Microsoft has three IoT/AI labs. One is on the company’s campus in Redmond, Washington, and the others are in Shenzhen, China and Munich, Germany. Dallas said that he’s happy with the number of locations currently available for the moment, but plans to expand the program in the future.

Microsoft is measuring the success of the labs based on how many companies are able to bring their products to market faster as a result of their engagement.

That’s how the company will see a benefit from the program, after all: The major revenue opportunity for Microsoft Azure comes from having many of these IoT devices deployed in the world and driving consumption of cloud services.

Hardware requires a long lead time, so it could be months if not years before IoT/AI Labs graduates have products available for purchase.

The labs are open for any interested organization to apply, if they think they’re a good fit for the program. Dallas said Microsoft selects companies based on whether the lab can reduce their time to market, and if the company can add unique value to the product in development.

While Azure usage is a clear benefit to Microsoft, entering the lab doesn’t require a commitment to use the company’s cloud platform, according to Richardson.

Microsoft is also using feedback from customers who come through the labs to refine its products going forward.

Review: Centre For Computing History

With almost everything that contains a shred of automation relying on a microcontroller these days, it’s likely that you will own hundreds of microprocessors beside the obvious ones in your laptop or phone. Computing devices large and small have become such a part of the fabric of our lives that we cease to see them, the devices and machines they serve just work, and we get on with our lives.

It is sometimes easy to forget, then, how recent an innovation they are. If you were born in the 1960s, for example, computers would probably have been something spoken of in terms of the Space Race or science fiction, and unless you were lucky you would have been a teenager before seeing one in front of you.

Having seen such an explosive pace of development in a relatively short time, it has taken the historians and archivists a while to catch up. General museums have been slow to embrace the field, and specialist museums of computing are still relative infants in the heritage field. Computers lend themselves to interactivity, so this is an area in which the traditional static displays that work so well for anthropological artifacts or famous paintings do not work very well.

There’s the unobtrusive sign by the level crossing, Cambridge’s version of the black mailbox.

Tucked away next to a railway line behind an industrial estate in the city of Cambridge, UK, is one of the new breed of specialist computer museum. The Centre for Computing History houses a large collection of vintage hardware, and maintains much of it in a running condition ready for visitors to experiment with.

Finding the museum is easy enough if you are prepared to trust your mapping application. It’s a reasonable walk from the centre of the city, or for those brave enough to pit themselves against Cambridge’s notorious congestion there is limited on-site parking. You find yourself winding through an industrial park past tile warehouses, car-parts shops, and a hand car wash, before an unobtrusive sign next to a railway level crossing directs you to the right down the side of a taxi company. In front of you then is the museum, in a large industrial unit.

Pay your entrance fee at the desk, Gift Aid it using their retro green screen terminal application if you are a British taxpayer, and you’re straight into the exhibits. Right in front of you surrounding the café area is something you may have heard of if you are a Hackaday reader, a relatively recent addition to the museum, the Megaprocessor.

The Megaprocessor, playing Tetris

If we hadn’t already covered it in some detail, the Megaprocessor would be enough for a long Hackaday article in its own right. It’s a 16-bit processor implemented using discrete components, around 42,300 transistors and a LOT of indicator LEDs, all arranged on small PCBs laid out in a series of large frames with clear annotations showing the different functions. There is a whopping 256 bytes of RAM, and its clock speed is measured in kilohertz. It is the creation of [James Newman], and his demonstration running for visitors to try is a game of Tetris using the LED indicators on the RAM as a display.

To be able to get so up close and personal with the inner workings of a computer is something few who haven’t seen the Megaprocessor will have experienced. There are other computers with lights indicating their innermost secrets such as the Harwell Dekatron, but only the Megaprocessor has such a clear explanation and block diagram of every component alongside all those LED indicators. When it’s running a game of Tetris it’s difficult to follow what is going on, but given that it also has a single step mode it’s easy to see that this could be a very good way to learn microprocessor internals.

The obligatory row of BBC Micros.

The first room off the café contains a display of the computers used in British education during the 1980s. There is, as you might expect, a classroom’s worth of Acorn BBC Micros such as you would have seen in many schools of that era, but alongside them are some rarer exhibits. The Research Machines 380Z, for example, an impressively specified Z80-based system from Oxford that might not have the fame of its beige plastic rival, but that unlike the Acorn was the product of a company that survives in the education market to this day. And an early Acorn Archimedes, a computer which may not be familiar to you even though you will certainly have heard of the processor it debuted. Clue: the “A” in “ARM” originally stood for “Acorn”.

The LaserDisc system, one you won’t have at home.

The rarest exhibit in this room though concerns another BBC Micro, this time the extended Master System. Hooked up to it is an unusual mass storage peripheral that was produced in small numbers only for this specific application, a Philips LaserDisc drive. This is one of very few surviving functional Domesday Project systems, an ambitious undertaking from 1986 to mark the 900th anniversary of the Norman Domesday Book, in which the public gathered multimedia information to be released on this LaserDisc application. Because of the rarity of the hardware this huge effort swiftly became abandonware, and its data was only saved for posterity in the last decade.

The main body of the building houses the bulk of the collection. Because this is a huge industrial space the effect is somewhat overwhelming: although the areas are broken up by partitions, you are immediately faced with a huge variety of old computer hardware.

The largest part of the hall features the museum’s display of home computers from the 1980s and early 1990s. On show is a very impressive collection of 8-bit and 16-bit micros, including all the ones we’d heard of and even a few we hadn’t. Most of them are working, turned on, and ready to go, and in a lot of cases their programming manual is alongside ready for the visitor to sit down and try their hand at a little BASIC. There are so many that listing them would result in a huge body of text, so perhaps our best bet instead is to treat you to a slideshow (click, click).


Definitely not Pong, oh no.

Beyond the home micros, past the fascinating peek into the museum’s loading bay, there is a selection of arcade cabinets and then a comprehensive array of games consoles. Everything from the earliest Pong clones to the latest high-powered machines with which you will no doubt be familiar is represented, so if you are of the console generation and the array of home computers left you unimpressed, this section should have you playing in no time.

One might be tempted so far to believe that the point of this museum is to chart computers as consumer devices and in popular culture, but as you reach the back of the hall the other face of the collection comes to the fore. Business and scientific computing is well-represented, with displays of word processors, minicomputers, workstations, and portable computing.

The one that started it all

On a pedestal in a Perspex box all of its own is something rather special: a MITS Altair 8800, a rare chance for UK visitors to see the first commercially available microcomputer. Famously, its first programming language was Microsoft BASIC, and this machine can claim to be the one from which much of what we have today took its start.

In the corner of the building is a small room set up as an office of the 1970s, a sea of wood-effect Formica with a black-and-white TV playing period BBC news reports. They encourage you to investigate the desks as well as the word processor, telephone, acoustic coupler, answering machine and other period items.

UK phone aficionados would probably point out that office phones were rarely anything but black.

The museum has a small display of minicomputers, with plenty of blinkenlight panels to investigate even if they’re not blinking. On the day of our visit one of them had an engineer deep in its internals working on it, so while none of them were running it seems that they are not just static exhibits.

Finally, at various points around the museum were cabinets with collections of related items: calculators, Clive Sinclair’s miniature televisions, or the evolution of the mobile phone. It is these subsidiary displays that put the cherry on the cake in a museum like this one, for they are much more ephemeral than many of the computers.

This is one of those museums with so many fascinating exhibits that it is difficult to convey the breadth of its collection in the space afforded by a Hackaday article.

There is an inevitable comparison to be made between this museum and the National Museum of Computing at Bletchley Park that we reviewed last year. It’s probably best to say that the two museums each have their own flavour: while Bletchley has more early machines such as the WITCH or their Colossus replica, as well as minis and mainframes, the Centre for Computing History has many more microcomputers and, by our judgement, more computers in a running and usable condition. We would never suggest a one-or-the-other decision; instead visit both. You won’t regret it.

The Centre for Computing History can be found at Rene Court, Coldhams Road, Cambridge, CB1 3EW. They are open five days a week from Wednesday through to Sunday, and seven days a week during school holidays. They open their doors at 10 am and close at 5 pm, with last admissions at 4 pm. Entry is £8 for grown-ups, and £6 for under-16s. Under-5s are free. If you do visit and you are a UK tax payer, please take a moment to do the gift aid thing, they are after all a charity.

Filed under: Featured, History, reviews

Incognito Mode

The content below is taken from the original (Incognito Mode), to continue reading please visit the site. Remember to respect the Author & Copyright.

They're really the worst tech support team. And their solutions are always the same. "This OS X update broke something." "LET'S INFILTRATE APPLE BY MORPHING APPLES!"

Cloud Standards Customer Council Publishes API Management Reference Architecture

The content below is taken from the original (Cloud Standards Customer Council Publishes API Management Reference Architecture), to continue reading please visit the site. Remember to respect the Author & Copyright.




 

The Cloud Standards Customer Council (CSCC) has published a new whitepaper, Cloud Customer Architecture for API Management, that describes the architecture elements and capabilities of an effective API Management Platform. The architectural capabilities described in the document can be used to instantiate an API runtime and management environment using private, public or hybrid cloud deployment models. The new reference architecture is available for download here: http://bit.ly/2ozBNBj

An Application Programming Interface or “API” is useful because it exposes a business’ defined assets, data, or services for public consumption. APIs allow companies to open up data and services to external third party developers, business partners, and internal departments within their organization to create innovative channel applications. The reuse of core business assets enables digital transformation. An effective API management Platform will provide a layer of controlled and secure self-service access to the APIs.

The Cloud Customer Architecture for API Management addresses:

  • The value proposition of adopting a long-term API strategy
  • The considerations for selecting a solid API Management Platform
  • The comprehensive lifecycle approach to creating, running, managing and securing APIs
  • The multiple personas and stakeholders in API Management and their roles
  • The architectural components and capabilities that make up a superior API Management Platform
  • Runtime characteristics and deployment considerations

The CSCC will host a complimentary webinar on Tuesday, April 4 from 11:00am – 12:00pm ET to introduce the paper. Additional information and a link to register are posted at http://bit.ly/2oc0vLU.

Is it on AWS? Domain Identification Using AWS Lambda

The content below is taken from the original (Is it on AWS? Domain Identification Using AWS Lambda), to continue reading please visit the site. Remember to respect the Author & Copyright.

In the guest post below, my colleague Tim Bray explains how he built IsItOnAWS.com. Powered by the list of AWS IP address ranges and using a pair of AWS Lambda functions that Tim wrote, the site aims to tell you if your favorite website is running on AWS.

Jeff;


Is it on AWS?
I did some recreational programming over Christmas and ended up with a little Lambda function that amused me and maybe it’ll amuse you too. It tells you whether or not a given domain name (or IP address) (even IPv6!) is in the published list of AWS IP address ranges. You can try it out over at IsItOnAWS.com. Part of the construction involves one Lambda function creating another.

That list of ranges, given as IPv4 and IPv6 CIDRs wrapped in JSON, is here; the how-to documentation is here and there’s a Jeff Barr blog. Here are a few lines of the “IP-Ranges” JSON:

{
  "syncToken": "1486776130",
  "createDate": "2017-02-11-01-22-10",
  "prefixes": [
    {
      "ip_prefix": "13.32.0.0/15",
      "region": "GLOBAL",
      "service": "AMAZON"
    },
    ...
  "ipv6_prefixes": [
    {
      "ipv6_prefix": "2400:6500:0:7000::/56",
      "region": "ap-southeast-1",
      "service": "AMAZON"
    },

As soon as I saw it, I thought “I wonder if IsItOnAWS.com is available?” It was, and so I had to build this thing. I wanted it to be:

  1. Serverless (because that’s what the cool kids are doing),
  2. simple (because it’s a simple problem, look up a number in a range of numbers), and
  3. fast. Because well of course.

Database or Not?
The construction seemed pretty obvious: Simplify the IP-Ranges into a table, then look up addresses in it. So, where to put the table? I thought about Amazon DynamoDB, but it’s not obvious how best to search on what in effect is a numeric range. I thought about SQL databases, where it is obvious, but note #2 above. I thought about Redis or some such, but then you have to provision instances, see #1 above. I actually ended up stuck for a few days scratching my head over this one.

Then a question occurred to me: How big is that list of ranges? It turns out to have less than a thousand entries. So who needs a database anyhow? Let’s just sort that JSON into an array and binary-search it. OK then, where does the array go? Amazon S3 would be easy, but hey, look at #3 above; S3’s fast, but why would I want it in the loop for every request? So I decided to just generate a little file containing the ranges as an array literal, and include it right into the IsItOnAWS Lambda function. Which meant I’d have to rebuild and upload the function every time the IP addresses change.
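
For readers who want to picture that lookup, here is a minimal sketch of the binary search (our own illustration, not Tim’s actual code), assuming the published CIDRs have already been normalized into a sorted array of non-overlapping { start, end } entries whose bounds are directly comparable (numbers for IPv4, or zero-padded hex strings once IPv6 is folded in):

function findRange(ranges, addr) {
  // ranges: sorted, non-overlapping [{ start, end, region, service }, ...]
  // addr:   an address normalized the same way as the range bounds
  let lo = 0, hi = ranges.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (addr < ranges[mid].start) {
      hi = mid - 1;            // address sorts below this range
    } else if (addr > ranges[mid].end) {
      lo = mid + 1;            // address sorts above this range
    } else {
      return ranges[mid];      // start <= addr <= end: it is on AWS
    }
  }
  return null;                 // not inside any published AWS range
}

With fewer than a thousand entries that is roughly ten comparisons per lookup, which is why skipping a database entirely is a reasonable call here.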

It turns out that if you care about those addresses, you can subscribe to an Amazon Simple Notification Service (SNS) topic that will notify you whenever it changes (in my recent experience, once or twice a week). And you can hook your subscription up to a Lambda function. With that, I felt I’d found all the pieces anyone could need. There are two Lambda functions: the first, newranges.js, gets the change notifications, generates the JavaScript form of the IP-Ranges data, and uploads a second Lambda function, isitonaws.js, which includes that JavaScript. Vigilant readers will have deduced this is all with the Node runtime.
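
The triggering side of that first function might look roughly like the skeleton below. This is our own sketch rather than Tim’s code: it ignores the SNS message body and simply re-fetches the published JSON (the ip-ranges.amazonaws.com URL is where AWS publishes the file; everything else here is a placeholder):

const https = require('https');

exports.handler = (event, context, callback) => {
  // Invoked by the SNS subscription whenever AWS publishes a change to its IP ranges.
  https.get('https://ip-ranges.amazonaws.com/ip-ranges.json', (res) => {
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => {
      const ipRanges = JSON.parse(body);
      // ...normalize, sort, zip, and redeploy the serving function (see the sections below)...
      callback(null, `fetched ${ipRanges.prefixes.length} IPv4 prefixes`);
    });
  }).on('error', callback);
};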

The new-ranges function, your typical async/waterfall thing, is a little more complex than I’d expected going in.

Postmodern IP Addresses
Its first task is to fetch the IP-Ranges, a straightforward HTTP GET. Then you take that JSON and smooth it out to make it more searchable. Unsurprisingly, there are both IPv4 and IPv6 ranges, and to make things easy I wanted to mash ’em all together into a single array that I could search with simple string or numeric matching. And since IPv6 addresses are way too big for JavaScript numbers to hold, they needed to be strings.

It turns out the way the IPv4 space embeds into IPv6’s ("::ffff:0:0/96") is a little surprising. I’d always assumed it’d be like the BMP mapping into the low bits of Unicode. I idly wonder why it’s this way, but not enough to research it.

The code for crushing all those CIDRs together into a nice searchable array ended up being kind of brutish, but it gets the job done.
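
To make the normalization concrete, here is one way to do it (again a sketch of our own, handling only the IPv4 side; the real code also folds the IPv6 prefixes into comparable strings as described above), turning a single ip_prefix entry into a { start, end } pair for the searchable array:

function ipv4ToInt(ip) {
  // "13.32.0.0" -> 220200960
  return ip.split('.').reduce((acc, octet) => acc * 256 + Number(octet), 0);
}

function ipv4CidrToRange(entry) {
  // entry looks like { ip_prefix: "13.32.0.0/15", region: "GLOBAL", service: "AMAZON" }
  const [base, bits] = entry.ip_prefix.split('/');
  const blockSize = Math.pow(2, 32 - Number(bits));
  const start = Math.floor(ipv4ToInt(base) / blockSize) * blockSize; // align to the block
  return { start, end: start + blockSize - 1, region: entry.region, service: entry.service };
}

// The searchable array is then roughly:
//   ipRanges.prefixes.map(ipv4CidrToRange).sort((a, b) => a.start - b.start)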

Building Lambda in Lambda
Next, we need to construct the lambda that’s going to actually handle the IsItOnAWS request. This has to be a Zipfile, and NPM has tools to make those. Then it was a matter of jamming the zipped bytes into S3 and uploading them to make the new Lambda function.

The sharp-eyed will note that once I’d created the zip, I could have just uploaded it to Lambda directly. I used the S3 interim step because I wanted to be able to download the generated “ranges” data structure and actually look at it; at some point I may purify the flow.
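
As a rough sketch of that flow (not the actual implementation; the bucket and function names below are made up), the builder function can park the zip in S3 and then point the serving function at it using the Node aws-sdk:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const lambda = new AWS.Lambda();

async function deployNewBuild(zipBuffer) {
  const Bucket = 'isitonaws-artifacts';   // hypothetical bucket name
  const Key = 'isitonaws.zip';

  // Keep a copy in S3 so the generated ranges file can be pulled down and inspected later.
  await s3.putObject({ Bucket, Key, Body: zipBuffer }).promise();

  // Refresh the serving Lambda function from the object we just wrote.
  await lambda.updateFunctionCode({
    FunctionName: 'isitonaws',            // hypothetical function name
    S3Bucket: Bucket,
    S3Key: Key,
  }).promise();
}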

The actual IsItOnAWS runtime is laughably simple, aside from a bit of work around hitting DNS to look up addresses for names, then mashing them into the same format we used for the ranges array. I didn’t do any HTML templating, just read it out of a file in the zip and replaced an invisible <div> with the results if there were any. Except for, I got to code up a binary search method, which only happens once a decade or so but makes me happy.
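
The overall shape of that lookup path, once more as a sketch rather than the real handler, and reusing the findRange and ipv4ToInt helpers sketched earlier:

const dns = require('dns');

function isItOnAWS(name, ranges, callback) {
  // Resolve the name, normalize each address, and search the baked-in ranges array.
  dns.resolve4(name, (err, addresses) => {
    if (err) return callback(err);
    const hits = addresses
      .map((addr) => findRange(ranges, ipv4ToInt(addr)))
      .filter(Boolean);
    callback(null, { addresses, onAWS: hits.length > 0, hits });
  });
}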

Putting the Pieces Together
Once I had all this code working, I wanted to connect it to the world, which meant using Amazon API Gateway. I’ve found this complex in the past, but this time around I plowed through Create an API with Lambda Proxy Integration through a Proxy Resource, and found it reasonably linear and surprise-free.

However, it’s mostly focused on constructing APIs (i.e. JSON in/out) as opposed to human experiences. It doesn’t actually say how to send HTML for a human to consume in a browser, but it’s not hard to figure out. Here’s how (from Node):

context.succeed({
  "statusCode": 200,
  "headers": { "Content-type": "text/html" },
  "body": "<html>Your HTML Here</html>"
});

Once I had everything hooked up to API Gateway, the last step was pointing isitonaws.com at it. And that’s why I wrote this code in December-January, but am blogging at you now. Back then, AWS Certificate Manager (ACM) certs couldn’t be used with API Gateway, and in 2017, life is just too short to go through the old-school ceremony for getting a cert approved and hooked up. ACM makes the cert process a real no-brainer. What with ACM and Let’s Encrypt loose in the wild, there’s really no excuse any more for having a non-HTTPS site. Both are excellent, but if you’re using AWS services like API Gateway and CloudFront like I am here, ACM is a smoother fit. Also it auto-renews, which you have to like.

So as of now, hooking up a domain name via HTTPS and CloudFront to your API Gateway API is dead easy; see Use Custom Domain Name as API Gateway API Host Name. Worked for me, first time, but something to watch out for (in March 2017, anyhow): When you get to the last step of connecting your ACM cert to your API, you get a little spinner that wiggles at you for several minutes while it hooks things up; this is apparently normal. Fortunately I got distracted and didn’t give up and refresh or cancel or anything, which might have screwed things up.

By the way, as a side-effect of using API Gateway, this is all running through CloudFront. So what with that, and not having a database, you’d expect it to be fast. And yep, it sure is, from here in Vancouver anyhow. Fast enough to not bother measuring.

I also subscribed my email to the “IP-Ranges changed” SNS topic, so every now and then I get an email telling me it’s changed, and I smile because I know that my Lambda wrote a new Lambda, all automatic, hands-off, clean, and fast.

Tim Bray, Senior Principal Engineer

 

Restoration of deleted Office 365 Groups is now launched.

The content below is taken from the original (Restoration of deleted Office 365 Groups is now launched.), to continue reading please visit the site. Remember to respect the Author & Copyright.

This little gem snuck through recently: the ability to restore a deleted Office 365 Group.

Office 365 Groups pose all administrators a "challenge", no matter what your business size, and this is a welcome addition to the options admins have for supporting Groups.

http://bit.ly/2niPpk0

CloudJumper Launches JumpStart for Fast Ground to Cloud Customer Onboarding to WaaS

The content below is taken from the original (CloudJumper Launches JumpStart for Fast Ground to Cloud Customer Onboarding to WaaS), to continue reading please visit the site. Remember to respect the Author & Copyright.




 

CloudJumper, a Workspace as a Service (WaaS) platform innovator for agile business IT, today announced JumpStart, a seamless ground-to-cloud onboarding solution for managed service providers (MSPs), cloud service providers (CSPs), telcos, agents and other IT solution providers that deliver WaaS. The new client onboarding technology grants WaaS providers the ability to provision nWorkSpace environments quickly and easily by scanning the customer IT environment, mirroring customer landscape data within the cloud-based workspace, and provisioning new WaaS accounts with unprecedented speed and accuracy.

The JumpStart onboarding solution is integrated with CloudJumper’s my.CloudJumper partner portal. When new clients are onboarded using JumpStart, data from the client’s physical computing environment is loaded into my.CloudJumper. The solution provides the partner with a seamless end-to-end experience regardless of the size or complexity of the IT environment. The solution provides orchestration of on-premise discovery, price quoting, and the provisioning of complete IT workspaces in the cloud. Until the availability of JumpStart, MSPs and IT solution providers conducted new client onboarding using a limited toolset, requiring hours or days to process. With JumpStart, the ground-to-cloud solution eliminates manual onboarding procedures through automation, saving precious staff time and financial resources.

JumpStart also opens new business opportunities for nWorkSpace partners as the solution allows partners to increase their total addressable market (TAM). JumpStart enables remote onboarding where onsite staff at the customer location are no longer required in order to complete and monitor a deployment. The solution’s centralized onboarding technology enables MSPs and other partners that had been focused on limited sales regions to expand operations without the need for additional deployment staff.

Advanced Capabilities in JumpStart Include:

  • Core-to-edge environmental scan of the new WaaS customer IT environment to identify applications and quantify data requirements for the cloud-based workspace.
  • Employee-specific application and data set uploads to replicate the traditional desktop environment.
  • Ability to upload client computing content into the CloudJumper portal, allowing partners to optimize user nodes remotely.
  • Replication of security permissions from the physical IT environment to the WaaS infrastructure to ensure data privacy, and compliance.
  • Thorough automation of all processes for seamless customer onboarding that requires only hours instead of days or weeks.

JumpStart is an exclusive onramp to nWorkSpace, CloudJumper’s Workspace as a Service platform which includes all of the software, infrastructure, and services necessary to quickly and easily deliver business-class WaaS. The solution includes a high volume provisioning system for the fast onboarding of customers with a simple management control panel to maintain complete oversight of customer accounts under management. nWorkSpace is exceptional in its scalability, permitting customers to scale their services based on current employee count or operational requirements. Finally, the high level of automation allows CloudJumper channel partners to shift their resources from the management of the platform to more important strategic activities such as revenue generation.

“With a growing number of diverse customers across a wide range of industries and locations, leveraging JumpStart to rapidly and accurately onboard new WaaS subscribers will help to control management costs, while simplifying the process for clients,” said Tony Schwartz, president of Halo Information Systems. “A customer with several regional or national locations would traditionally require an onsite contingent to ensure a proper installation. With JumpStart, that burden would be lifted as much of the migration to the cloud would be done remotely through a centralized interface. It is a compelling solution and we expect that JumpStart will be well-received by CloudJumper’s channel.”

“JumpStart takes WaaS deployment speed to the next level with ground-to-cloud deployment acceleration. JumpStart is an important option that extends the value proposition of WaaS to minimize time-intensive onboarding processes,” said Max Pruger, chief sales officer, CloudJumper. “The solution overcomes many of the installation challenges that are commonly found with legacy products which slow time to revenue for the IT solution provider. We invite MSPs, CSPs, ISVs and others that are interested in this fast growing segment of the cloud to learn more about our solution portfolio.”

Pricing and Availability
JumpStart is available free of charge to existing nWorkSpace partners and will become generally available in the second quarter of 2017.

3G to WiFi Bridge Brings the Internet

The content below is taken from the original (3G to WiFi Bridge Brings the Internet), to continue reading please visit the site. Remember to respect the Author & Copyright.

[Afonso]’s 77-year-old grandmother lives in a pretty remote location, with only AM/FM radio reception and an occasionally failing landline connecting her to the rest of the world. The nearest 3G cell tower is seven kilometers away and unreachable with a cell phone. But [Afonso] was determined to get her up and running with video chats to distant relatives. The solution to hook granny into the global hive mind? Build a custom antenna to reach the tower and bridge it over to local WiFi using a Raspberry Pi.

The first step in the plan was to make sure that the 3G long-shot worked, so [Afonso] prototyped a fancy antenna, linked above, and hacked on a connector to fit it to a Huawei CRC-9 radio modem. This got him a working data connection, and it sends a decent 4-6 Mbps, enough to warrant investing in some better gear later. Proof of concept, right?

On the bridging front, he literally burned through a WR703N router before slapping a Raspberry Pi into a waterproof box with all of the various radios. The rest was a matter of configuration files, getting iptables to forward the 3G radio’s PPP payloads over to the WiFi, and so on. Of course, he wants to remotely administer the box for her, so he left a permanent SSH backdoor open for administration. Others of you running remote Raspberry Pis should check this out.

We think it’s awesome when hackers take connectivity into their own hands. We’ve seen many similar feats with WiFi, and indeed [Afonso] had previously gone down that route with a phased array of 24 dBi dishes. In the end, the relatively simple 3G Pi-and-Yagi combo won out.

Part two of the project, teaching his grandmother to use an Android phone, is already underway. [Afonso] reports that after running for two weeks, she already has an Instagram account. We call that a success!

Filed under: Raspberry Pi, wireless hacks

Save up to 50 percent on Windows Server VMs

The content below is taken from the original (Save up to 50 percent on Windows Server VMs), to continue reading please visit the site. Remember to respect the Author & Copyright.

Now is the time to move to Azure and reap the rewards of cloud technology including the ability to scale up or down quickly, pay only for what you use, and save on compute power. Whether you’re moving a few workloads, migrating your datacenter, or deploying new virtual machines (VMs) as part of your hybrid cloud strategy, the Azure Hybrid Use Benefit provides big savings as you move to the cloud.

 

Azure HUB allows Partners with Software Assurance (SA) on their Windows Server licenses to run virtual machines in Azure with up to 50% cost savings. For example, the annual cost of 2 Azure VMs is $4,393 USD – with AHUB the price is just $2,063 USD.* This gives Partners a new way to offer more services without raising overall project costs.

 

This is not the only exciting change to Software Assurance. We are pleased to announce the general availability of Windows Server Premium Assurance and SQL Server Premium Assurance, which provide Customers an incremental six years of extended support for these products, whether on-premises or in the cloud.

 

Learn more today about the Azure Hybrid Use Benefit and Premium Assurance.

 

Questions? Feedback? Please post them here!

 

*Savings based on two D2V2 virtual machines in US East 2 Region running 744 hours/month for 12 months. Base compute rate at SUSE Linux Enterprise rate for US East 2. Software Assurance cost (Level A) for Windows Server standard edition (one 2-proc license or a set of 16 core licenses). Savings may vary based on location, instance type, or usage. Prices as of March 2017. Prices subject to change.

Cloud Foundry launches its developer certification program

The content below is taken from the original (Cloud Foundry launches its developer certification program), to continue reading please visit the site. Remember to respect the Author & Copyright.

Cloud Foundry, a massive open source project that allows enterprises to host their own platform-as-a-service for running cloud applications in their own data center or in a public cloud, today announced the launch of its “Cloud Foundry Certified Developer” program.

The Cloud Foundry Foundation calls this “the world’s largest cloud-native developer certification initiative,” and while we obviously still have to wait and see how successful this initiative will be, it already has the backing of the likes of Dell EMC, IBM, SAP and Pivotal (the commercial venture that incubated the Cloud Foundry project). The company is partnering with the Linux Foundation to deliver the program through its eLearning infrastructure.

The idea here is to allow both experienced and novice developers to demonstrate their open source cloud skills. The program will focus on all of the major public cloud platforms that currently offer Cloud Foundry support, including those from Huawei, IBM, Pivotal, SAP and Swisscom.

The $300 exam itself, which should take about four hours to finish, will cover the Cloud Foundry basics, cloud-native application security, application management and container management, and will test developers on their ability to modify simple Java, Node.js and Ruby applications. While this represents a pretty wide swath of topics, developers who can show competency in all of these areas surely won’t have an issue finding a job quickly.

“Companies need developers with the skills to build and manage cloud-native applications, and developers need jobs,” Cloud Foundry CTO Chip Childers notes in today’s announcement. “We pinpointed this growing gap in the industry and recognized our opportunity to give both developers and enterprises what they need.”

The launch of this program doesn’t necessarily come as a major surprise. Childers already told us this was in the works last November.

The program is currently in its beta phase and will become generally available on June 13 (not coincidentally, that’s also the first day of the Cloud Foundry Summit Silicon Valley and developers will be able to take the test in person at the event).

Featured Image: Bryce Durbin

Adding offerings and UK Region: Azure rolls deep with PCI DSS v3.2

The content below is taken from the original (Adding offerings and UK Region: Azure rolls deep with PCI DSS v3.2), to continue reading please visit the site. Remember to respect the Author & Copyright.

Azure rolls deep with PCI DSS v3.2

Check out our AoC

Go here to download Azure’s Payment Card Industry Data Security Standard (PCI DSS) v3.2  Attestation of Compliance (AoC)! When it comes to enabling customers who want or need to operate in a cloud environment AND also need to adhere to the global standards designed to prevent credit card fraud, they need look no further than Azure. 

Why put off until 2018 what you can do today?

When it comes to security and compliance, we are always ready to act. The DSS v3.2 contains several requirements that don’t take effect until January 2018, and while it is possible to get a v3.2 certification without meeting these future requirements, Azure has already adopted them and is currently compliant with all new requirements!

UK, too!

Azure has also added the UK region to our list of PCI-certified datacenters, while expanding coverage within previously certified regions around the world. A new version of our PCI Responsibility Matrix will be released shortly; keep an eye out for that announcement coming soon.

More services = More options for customers

Azure has again increased the coverage of our attestation to keep up with customer needs and we continue to be unmatched amongst Cloud Service Providers in the depth and breadth of offerings with PCI DSS v3.2. A sample of the services added in this attestation include:

Note: Refer to the latest AoC for the full list of services and regions covered. 

AoC FAQs and Highlights

Why does the AoC say “April 2016”?

The front page and footer of the AoC say “April 2016”. This is the date that the template was published by the PCI SSC; it is not the date of our AoC. Many customers get confused by this, but we are not able to modify the AoC template. Refer to page 76 of the AoC for the date it was actually signed and issued.

How should I interpret the service listing in the AoC?

We have received feedback in the past that it was difficult to understand what services were covered in the AoC. This was mainly because the services were listed under the groupings and internal names our Qualified Security Assessor (QSA) used for the assessment, along with the fact that many services got re-branded shortly after our 2015 AoC was released.

We incorporated that feedback in the release of our 2016 AoC, and have again updated the service listing in the 2017 AoC to reflect the current set of Azure offerings. Please be aware that if an Azure service is re-branded we are not able to retroactively update the AoC.  If you have questions about the status of an Azure service, please contact Azure support or your TAM. 

Why isn’t Azure assessed as a “Shared Hosting Provider”?

The shared hosting provider designation in PCI DSS is for situations where multiple customers are being hosted on a single server, but it doesn’t take into account the hosting of isolated virtualized environments. An example of shared hosting is a service provider hosting multiple customer websites on a single physical web server; in that situation, there is no segregation between the customer environments. Azure is not considered a shared hosting provider for PCI because customer VMs and environments are segregated and isolated from each other, so changes made to “Customer X’s” VM do not affect “Customer Y’s” VM, even when both VMs are hosted on the same physical host.

Microsoft Azure Hands-on Labs now available at Cloud Academy!

The content below is taken from the original (Microsoft Azure Hands-on Labs now available at Cloud Academy!), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft Azure Hands-on Labs now available at Cloud Academy!

One of the key takeaways from what we learned about how enterprises are using Cloud Academy is that it is a multi-cloud world. We are seeing more and more large enterprises going multi-cloud, and our mission is to give them the right training to help their teams build solutions in the cloud, on several vendors. In this space, Microsoft Azure is definitely the second largest player in the market.

Today, I am excited to announce the general availability of our Microsoft Azure hands-on labs on Cloud Academy.

As with our hands-on labs for Amazon Web Services, we use an internal technology stack to power our labs. This is something that our product team develops internally. We are investing more and more in this technology and we have an extensive roadmap of new features planned for individuals and enterprise users for 2017. Content will be another huge surprise for all of our customers as we are planning to double down on the number of new labs that we will publish month to month.

How customers use hands-on labs at Cloud Academy

First of all, our lab technology provides our customers with a full and dedicated AWS or Azure environment. This means that you learn how to do things directly in a real AWS or Azure console. Cloud Academy creates a dedicated instance for you and makes sure that you can use our step-by-step guidance to learn through real use cases. Each hands-on lab has a time limit to complete it, which is usually a sufficient amount of time to complete all of the required tasks.

In 2016, the completion rate for our hands-on labs was more than 70%.

That’s an incredible statistic. Most of our customers use hands-on labs inside our learning paths in order to complete their knowledge of a topic with some practical, hands-on experience. We also have thousands of users who use labs without guidance. In 2017, we are planning to add new functionalities to our platform that will allow our customers to build even more using our hands-on labs. Microsoft Azure is the first platform that we are launching in addition to our Amazon Web Services labs, and we plan to release many labs over the coming months.

Explore our new Microsoft Azure hands-on labs on Cloud Academy!

I’m a computer engineering guy that loves building products. I’m CEO and co-founder of CloudAcademy.com, all my experience is in the webhosting and cloud computing industry where I started other companies before. I love talking with our members and readers, feel free to email me at [email protected] !


Brit telcos will waive early termination fees for military personnel

The content below is taken from the original (Brit telcos will waive early termination fees for military personnel), to continue reading please visit the site. Remember to respect the Author & Copyright.

British military personnel will no longer face swingeing cancellation fees on broadband packages if they are posted abroad.

BT, EE, Plusnet, TalkTalk and Virgin Media have all committed to waiving the fees for serving military personnel who are either posted abroad or to an area of the UK not covered by their existing telly, telephone or internet bundles.

“Armed Forces personnel play a vital role protecting our country, whether serving overseas or stationed away from home in other parts of the UK,” intoned BT chief exec Gavin Patterson in a canned statement. “That’s why we’re committed to ensuring they don’t have to pay for broadband or TV services they can’t access, when they find themselves in this situation.”

“TalkTalk was the first ISP to recognise how tricky this can be and offer free disconnections for service personnel moving overseas, and we’re delighted that the rest of the industry has followed suit,” chimed in chief exec Dido Harding.

The waiver was agreed under the Armed Forces Covenant, a rather nebulous idea promoted by the Ministry of Defence “ensuring that those who serve or have previously served in the Armed Forces, and their families, are treated fairly and not disadvantaged by their service”. Companies are encouraged to sign the covenant and promise not to disadvantage reservists who attend training courses during the working week.

Virgin Media’s cancellation fees for its Full House bundle can run into hundreds of pounds, with the telco’s website giving a sample figure of £217.20 for terminating a contract four months early. Over a similar period, cancelling BT’s Infinity 1 + Unlimited Anytime Calls Package (not including TV services) would cost £93.

The move will be very welcome for military personnel and their families, given that they can be posted anywhere between the northern reaches of Scotland and the Falkland Islands, via various glamorous (and some not-so-glamorous) postings in between. ®

How Azure Security Center helps reveal a Cyberattack

The content below is taken from the original (How Azure Security Center helps reveal a Cyberattack), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Azure Security Center (ASC) analysts team reviews and investigates ASC alerts to gain insight into security incidents affecting Microsoft Azure customers, helping improve Azure Security alerts and detections. ASC helps customers keep pace with rapidly evolving threats by using advanced analytics and global threat intelligence.

Although we have come a long way as far as cloud security is concerned, even today security factors are heavily discussed as companies consider moving their assets to the cloud. The Azure Security Center team understands how critical it is for our customers to be assured that their Azure deployments are secure, not only from advanced attacks but even from the ones that are not necessarily new or novel. The beauty of ASC lies in its simplicity. Although ASC uses machine learning, anomaly detection, and behavioral analysis to determine suspicious events, it still addresses simple things like SQL brute force attacks that Bad Guys/Script Kiddies are using to break into Microsoft SQL servers.

In this blog, we’ll map out the stages of one real-world attack campaign that began with a SQL Brute Force attack, which was detected by the Security Center, and the steps taken to investigate and remediate the attack. This case study provides insights into the dynamics of the attack and recommendations on how to prevent similar attacks in your environment.

Initial ASC alert and details

Hackers are always trying to target internet connected databases. There are tons of bad guys trying to discover IP addresses that have SQL Server running so that they can crack their password through a brute force attack. The SQL database can contain a wealth of valuable information for the attackers, including personally identifiable information, credit card numbers, intellectual property, etc. Even if the database doesn’t have much information, a successful attack on an insecurely configured SQL installation can be leveraged to get full system admin privileges.

Our case started with an ASC Alert notification to the customer detailing malicious SQL activity. A command line “ftp -s:C:\zyserver.txt” launched by the SQL service account was unusual and was flagged by ASC.

The alert provided details such as date and time of the detected activity, affected resources, subscription information, and included a link to a detailed report of the detected threat and recommended actions.

 

Malicious SQL activity (2) Threat summary

 

Through our monitoring, the ASC analysts team was also alerted to this activity and looked further into the details of the alert. What we discovered was that the SQL service account (SQLSERVERAGENT) was creating FTP scripts (i.e. C:\zyserver.txt), which were used to download and launch malicious binaries from an FTP site.

Detect

The initial compromise

A deeper investigation into the affected Azure subscription began with inspection of the SQL error and trace logs where we found indications of SQL Brute Force attempts. In the SQL error logs, we encountered hundreds of “Audit Login Failed” logon attempts for the SQL Admin ‘sa’ account (built-in SQL Server Administration) which eventually led up to a successful login.


These brute force attempts occurred over TCP port 1433, which was exposed on a public facing interface. TCP port 1433 is the default port for SQL Server.

Note: It is a very common recommendation to change the SQL default port 1433, but this may impart a “false sense of security”, because many port scanning tools can scan a range of network ports and eventually find SQL Server listening on ports other than 1433.

Once the SQL Admin ‘sa’ account was compromised by brute force, the account was then used to enable the ‘xp_cmdshell’ extended stored procedure as we’ve highlighted below in a SQL log excerpt.

The ‘xp_cmdshell’ stored procedure is disabled by default and is of particular interest to attackers because of its ability to invoke a Windows command shell from within Microsoft SQL Server. With ‘xp_cmdshell’ enabled, the attacker created SQL Agent jobs which invoked ‘xp_cmdshell’ and launched arbitrary commands, including the creation and launch of FTP scripts which, in turn, downloaded and ran malware.

Details of malicious activity

Once we determined how the initial compromise occurred, our team began analyzing Process Creation events to determine other malicious activity. The Process Creation events revealed the execution of a variety of commands, including downloading and installing backdoors and arbitrary code, as well as permission changes made on the system.

Below we have detailed a chronological layout of process command lines that we determined to be malicious:

A day after the initial compromise we began to see the modification of ACLs on files/folders and registry keys using Cacls.exe (which appears to have been renamed to osk.exe and vds.exe).

Note: Osk.exe is the executable for the Accessibility On-Screen Keyboard and Vds.exe is the Virtual Disk Service executable, both typically found on a Windows installation. The command lines and command switches detailed below, however, are not used for Osk.exe or VDS.exe and are associated with Cacls.exe.

The Cacls.exe command switches /e /g are used to grant the System account full (:f) access rights to ‘cmd.exe’ and ‘net.exe’.

A few seconds later, we see the termination of known antivirus software using the Windows native “taskkill.exe”.


This was followed by the creation of an FTP script (c:\zyserver.txt), which was flagged in the original ASC Alert. This FTP script appears to download malware (c:\stserver.exe) from a malicious FTP site and subsequently launch the malware.

A few minutes later, we see the “net user” and “net localgroup” commands used to accomplish the following:

a.    Activate the built-in guest account and add it to the Administrators group

b.   Create a new user account and add the newly created user to the Administrators group

A little over 2 hours later, we see the regini.exe command, which appears to be used to create, modify, or delete registry keys. Regini can also set permissions on the registry keys as defined in the noted .ini file. We then see regsvr32.exe silently (/s switch) registering DLLs related to the Windows shell (urlmon.dll, shdocvw.dll) and Windows scripting (jscript.dll, vbscript.dll, wshom.ocx).


This is immediately followed by additional modification of permissions on various Windows executables, essentially resetting each to its default with the “icacls.exe” command.

Note: The /reset switch replaces ACLs with default inherited ACLs for all matching files.

Lastly, we observed the “Terminal Server” fDenyTSConnections registry value being overwritten. This value holds the configuration of Terminal Server connection restrictions, which led us to believe that malicious RDP connections might be the attacker’s next step to access the server. Inspection of logon events did not, however, reveal any malicious RDP attempts or connections:

  • Disabling of Terminal Server connection restrictions by overwriting values in the “Terminal Server” registry key
    reg.exe ADD "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 00000000 /f

We also noticed a scheduled task being created. This task referenced a binary named “svchost.exe” to be launched out of the C:\RECYCLER folder, which is suspicious.

Note that the legitimate “svchost.exe” files are located in “\Windows\System32” and “\Windows\SysWOW64”; svchost.exe running from any other directory should be considered suspicious.

  • Persistence mechanism – Task Scheduler utility (schtasks.exe) used to set a recurring task
    C:\Windows\System32\schtasks.exe /create /tn "45645" /tr "C:\RECYCLER\svchost.exe" /sc minute /mo 1 /ru "system"

Recommended remediation and mitigation steps

Once we understood the extent and the details of the attack, we recommended the following remediation and mitigation steps to be taken.

First, if possible, we recommended backing up and rebuilding the SQL Server and resetting all user accounts. We then implemented the following mitigation steps to help prevent further attacks.

1. Disable ‘sa’ account and use the more secure Windows Authentication

To disable ‘sa’ login via SQL, run the following commands as a sys admin

ALTER LOGIN sa DISABLE

GO

2. To help prevent attackers from guessing the ‘sa’ account, rename the ‘sa’ account
To rename the ‘sa’ account via SQL, run the following as a sys admin:

ALTER LOGIN sa WITH NAME = [new_name];

GO

3. To prevent future brute force attempts, change and harden the ‘sa’ password and set the sa Login to ‘Disabled’.

Learn how to verify and change the system administrator password in MSDE or SQL Server 2005 Express Edition.

4. It’s also a good idea to ensure that ‘xp_cmdshell’ is disabled. Again, note that this should be disabled by default.

5. Block TCP port 1433 if it does not need to be open to the internet. From the Azure portal, take the following steps to configure a rule to block 1433 in a Network Security Group:

a. Open the Azure portal

b. Navigate to > (More Services) -> Network security groups

c. If you have opted into the Network Security option, you will see an entry for <ComputerName-nsg> — click it to view your Security Rules

d. Under Settings click "Inbound security rules" and then click +Add on the next pane

e. Enter the rule name and port information. Under the ‘Service’ pulldown, choose MS SQL and it will automatically select Port range = 1433.

f. Then apply the newly created rule to the subscription

6. Inspect all stored procedures that may have been enabled in SQL and look for stored procedures that may be implementing ‘xp_cmdshell’ and running unusual commands.

For example, in our case, we identified the following commands:

7. Lastly, we highly recommend configuring your Azure subscription(s) to receive future alerts and email notifications from Microsoft Azure Security Center. To receive alerts and email notifications of security issues like this in the future, we recommend upgrading from the ASC “Free” (basic detection) tier to the ASC “Standard” (advanced detection) tier.

Below is an example of the email alert received from ASC when this SQL incident was detected:

Learn more about SQL detection

AWS Launches Cloud-Based Contact Center Amazon Connect

The content below is taken from the original (AWS Launches Cloud-Based Contact Center Amazon Connect), to continue reading please visit the site. Remember to respect the Author & Copyright.

Brought to You by Talkin’ Cloud

Amazon knows a thing or two about providing customer support at scale. The e-commerce giant supports customers from more than 75 service locations around the globe.

Now, it’s bringing that knowledge to the masses with Amazon Connect, a self-service, cloud-based contact center based on the same technology used by Amazon’s customer service associates.

AWS says that with Amazon Connect “there is no infrastructure to deploy or manage, so customers can scale their Amazon Connect Virtual Contact Center up or down, onboarding up to tens of thousands of agents in response to business cycles (e.g. short-term promotions, seasonal spikes, or new product launches) and paying only for the time callers are interacting with Amazon Connect plus any associated telephony charges. Amazon Connect’s self-service graphical interface makes it easy for non-technical users to design contact flows, manage agents, and track performance metrics – no specialized skills required.”

The launch of Amazon Connect comes on the heels of the launch of Amazon Chime in February, a cloud-based unified communications service that, similar to Amazon Connect, requires no upfront investment, and will be available starting in Q2 2017.

Customers can design contact flows based on the information retrieved by Amazon Connect from AWS services or third-party systems, and customers can also build natural language contact flows using Amazon Lex, an AI service that uses the same technology that powers Amazon Alexa.

“Ten years ago, we made the decision to build our own customer contact center technology from scratch because legacy solutions did not provide the scale, cost structure, and features we needed to deliver excellent customer service for our customers around the world,” Tom Weiland, Vice President of Worldwide Customer Service, Amazon said in a statement. “This choice has been a differentiator for us, as it is used today by our agents around the world in the millions of interactions they have with our customers. We’re excited to offer this technology to customers as an AWS service – with all of the simplicity, flexibility, reliability, and cost-effectiveness of the cloud.”

Amazon Connect also integrates with a broad set of AWS tools and infrastructure, as well as with leading CRM, analytics and helpdesk offerings, including Salesforce, Freshdesk and Zendesk.

“At Freshdesk, we believe it’s critical to deliver the experience our 100,000+ customers need and expect. We work closely with companies like AWS to build powerful and seamless integrations that enable customers to run their entire business in the cloud,” Francesco Rovetta, VP of Alliances & Distribution at Freshdesk said. “AWS is a natural fit for us: we are both known for our simplicity in design, ease of use and innovative approach in product development. We are proud to support Amazon Connect at launch and look forward to introducing our customers to this amazing new solution.”

“An Amazon Connect and Zendesk integration provides our many shared customers with a seamless experience,” said Sam Boonin, vice president of product strategy at Zendesk. “These companies are placing customer relationships at the forefront of their business and, in return, creating loyal customers.”

Available in the U.S. and 18 countries through Europe immediately, Amazon Connect will expand to more countries in the coming months.

This article originally appeared on Talkin’ Cloud.

One-click disaster recovery of applications using Azure Site Recovery

The content below is taken from the original (One-click disaster recovery of applications using Azure Site Recovery), to continue reading please visit the site. Remember to respect the Author & Copyright.

Disaster recovery is not only about replicating your virtual machines; it is about end-to-end application recovery that is tested multiple times and is error free and stress free when disaster strikes, which is exactly what Azure Site Recovery promises. If you have never seen your application run in Microsoft Azure, chances are that when a real disaster happens the virtual machines may boot, but your business may remain down. The importance and complexity involved in recovering applications was described in the previous blog of this series – Disaster recovery for applications, not just virtual machines using Azure Site Recovery. This blog covers how you can use the Azure Site Recovery construct of recovery plans to fail over or migrate applications to Microsoft Azure in the most tested and deterministic way, using the example of recovering a real-world application to the public cloud.

Why use Azure Site Recovery “recovery plans”?

Recovery plans help you plan for a systematic recovery process by creating small independent units that you can manage. These units will typically represent an application in your environment. A recovery plan not only allows you to define the sequence in which the virtual machines start, but also helps you automate common tasks during recovery.

Essentially, one way to check that you are prepared for disaster recovery is by ensuring that every application of yours is part of a recovery plan and each of the recovery plans is tested for recovery to Microsoft Azure. With this preparedness, you can confidently migrate or failover your complete datacenter to Microsoft Azure.
 
Let us look at the three key value propositions of a recovery plan:

  • Model an application to capture dependencies
  • Automate most recovery tasks to reduce RTO
  • Test failover to be ready for a disaster

Model an application to capture dependencies

A recovery plan is a group of virtual machines, generally comprising an application, that fail over together. Using the recovery plan constructs, you can enhance this group to capture your application-specific properties.
 
Let us take the example of a typical three tier application with

  • one SQL backend
  • one middleware
  • one web frontend

The recovery plan can be customized to ensure that the virtual machines come up in the right order after a failover. The SQL backend should come up first, the middleware should come up next, and the web frontend should come up last. This order makes certain that the application is working by the time the last virtual machine comes up. For example, when the middleware comes up, it will try to connect to the SQL tier, and the recovery plan has ensured that the SQL tier is already running. Frontend servers coming up last also ensures that end users do not connect to the application URL by mistake until all the components are up and running and the application is ready to accept requests. To build these dependencies, you can customize the recovery plan to add groups, then select a virtual machine and change its group to move it between groups.

Recovery Plan example 

Once you complete the customization, you can visualize the exact steps of the recovery. Here is the order of steps executed during the failover of a recovery plan:

  • First there is a shutdown step that attempts to turn off the virtual machines on-premises (except in test failover where the primary site needs to continue to be running)
  • Next it triggers failover of all the virtual machines of the recovery plan in parallel. The failover step prepares the virtual machines’ disks from replicated data.
  • Finally the startup groups execute in their order, starting the virtual machines in each group – Group 1 first, then Group 2, and finally Group 3. If there are more than one virtual machines in any group (for example, a load-balanced web frontend) all of them are booted up in parallel.

Sequencing across groups ensures that dependencies between various application tiers are honored and parallelism where appropriate improves the RTO of application recovery.

Automate most recovery tasks to reduce RTO

Recovering large applications can be a complex task. It is also difficult to remember the exact customization steps post failover. Sometimes, it is not you, but someone else who is unaware of the application intricacies, who needs to trigger the failover. Remembering too many manual steps in times of chaos is difficult and error prone. A recovery plan gives you a way to automate the required actions you need to take at every step, by using Microsoft Azure Automation runbooks. With runbooks, you can automate common recovery tasks like the examples given below. For those tasks that cannot be automated, recovery plans also provide you the ability to insert manual actions.

  • Tasks on the Azure virtual machine post failover – these are required typically so that you can connect to the virtual machine, for example:
    • Create a public IP on the virtual machine post failover
    • Assign an NSG to the failed over virtual machine’s NIC
    • Add a load balancer to an availability set
  • Tasks inside the virtual machine post failover – these reconfigure the application so that it continues to work correctly in the new environment, for example:
    • Modify the database connection string inside the virtual machine
    • Change web server configuration/rules

For many common tasks, you can use a single runbook and pass parameters to it for each recovery plan, so that one runbook serves all your applications. To deploy these scripts yourself and try them out, import the popular sample scripts into your Microsoft Azure Automation account.
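As an illustration, here is a minimal sketch of such a parameterized post-failover task, assuming the Azure SDK for Python (azure-identity and azure-mgmt-network). Site Recovery runbooks are commonly authored in PowerShell; Python is used here only to keep the examples in one language, and the subscription ID, resource group, and resource names are placeholders.

```python
# pip install azure-identity azure-mgmt-network
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "asr-recovered-rg"     # hypothetical resource group

def expose_failed_over_vm(nic_name, nsg_name, ip_name, location):
    """Post-failover task: create a public IP, attach it to the recovered
    VM's network interface, and associate an existing NSG with that NIC."""
    client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    # Create (or update) a public IP for the recovered virtual machine.
    public_ip = client.public_ip_addresses.begin_create_or_update(
        RESOURCE_GROUP, ip_name,
        {"location": location, "public_ip_allocation_method": "Static"},
    ).result()

    # Attach the public IP and the NSG to the VM's network interface.
    nic = client.network_interfaces.get(RESOURCE_GROUP, nic_name)
    nic.ip_configurations[0].public_ip_address = public_ip
    nic.network_security_group = client.network_security_groups.get(
        RESOURCE_GROUP, nsg_name)
    client.network_interfaces.begin_create_or_update(
        RESOURCE_GROUP, nic_name, nic).result()

# The same script can serve every recovery plan by passing per-plan parameters:
# expose_failed_over_vm("web-01-nic", "web-nsg", "web-01-pip", "eastus")
```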

 
With a complete recovery plan that automates the post-recovery tasks using Automation runbooks, you can achieve one-click failover and optimize the RTO.

Test failover to be ready for a disaster

A recovery plan can be used to trigger either a failover or a test failover. You should always complete a test failover of the application before doing an actual failover. A test failover helps you check whether the application will come up on the recovery site. If you have missed something, you can easily trigger a cleanup and redo the test failover. Repeat the test failover until you know with certainty that the application recovers smoothly.

Recovery plan job execution 

Each application is different, and you need to build a recovery plan customized for each one. In a dynamic datacenter, applications and their dependencies keep changing, so run a test failover of your applications once a quarter to confirm that the recovery plan is still current.

Real-world example – WordPress disaster recovery solution

Watch a quick video of a two-tier WordPress application failing over to Microsoft Azure to see a recovery plan with automation scripts, and its test failover, in action with Azure Site Recovery.

  • The WordPress deployment consists of one MySQL virtual machine and one frontend virtual machine with Apache web server, listening on port 80.
  • WordPress deployed on the Apache web server is configured to communicate with MySQL via the IP address 10.150.1.40.
  • Upon test failover, the WordPress configuration needs to be changed so that it communicates with MySQL on the failover IP address 10.1.6.4. To ensure that MySQL acquires the same IP address on every failover, the virtual machine properties are configured with a preferred IP address of 10.1.6.4.
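As a sketch of that in-guest reconfiguration step, the following assumes WordPress keeps its database host in wp-config.php at the path shown; the actual automation script used in the video may work differently, and the paths and addresses are taken from the example above.

```python
import re

WP_CONFIG = "/var/www/html/wp-config.php"   # assumed wp-config.php location
OLD_DB_HOST = "10.150.1.40"                 # on-premises MySQL address
NEW_DB_HOST = "10.1.6.4"                    # preferred failover IP of the MySQL VM

def repoint_wordpress_db(path=WP_CONFIG):
    """Rewrite the DB_HOST define in wp-config.php so WordPress talks to
    the failed-over MySQL virtual machine."""
    with open(path) as f:
        config = f.read()
    config = re.sub(
        r"define\(\s*'DB_HOST'\s*,\s*'%s'\s*\)" % re.escape(OLD_DB_HOST),
        "define('DB_HOST', '%s')" % NEW_DB_HOST,
        config,
    )
    with open(path, "w") as f:
        f.write(config)

if __name__ == "__main__":
    repoint_wordpress_db()
```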

With a relentless focus on ensuring that you succeed with full application recovery, Azure Site Recovery aims to be the one-stop shop for all your disaster recovery needs. Our mission is to democratize disaster recovery with the power of Microsoft Azure: not just to give elite tier-1 applications a business continuity plan, but to offer a compelling solution that empowers you to set up a working end-to-end disaster recovery plan for 100% of your organization's IT applications.

You can check out additional product information and start replicating your workloads to Microsoft Azure using Azure Site Recovery today. You can use the powerful replication capabilities of Azure Site Recovery for 31 days at no charge for every new physical server or virtual machine that you replicate, whether it is running on VMware or Hyper-V. To learn more about Azure Site Recovery, check out our How-To Videos. Visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers, or use the Azure Site Recovery User Voice to let us know what features you want us to enable next.