Astadia Releases Reference Architectures for Migrating Unisys and IBM Mainframe Workloads to Microsoft Azure

Astadia, the premier technology consultancy focused on modernizing mainframe workloads, today announced the release of two separate reference architectures for moving Unisys and IBM mainframe workloads to Microsoft Azure’s public cloud computing infrastructure. Astadia has moved mainframe workloads to distributed environments for more than 25 years and has applied that expertise to architecting the solution for customers adopting Azure.

Mainframes still run significant workloads on behalf of commercial and public sector organizations, yet the cost of maintaining these platforms increases annually while the availability of skilled workers rapidly declines. Azure is now ready to support these workloads with security, performance and reliability, fueling new digital transformation and innovation.

“For decades, mainframes have traditionally housed the most mission critical applications for an organization,” said Scott Silk, Astadia Chairman and CEO. “Microsoft Azure is ready to take on these workloads and Astadia is ready to help organizations make the move with a low-cost, low-risk approach, and then provide ongoing services to manage the resulting environment.”

“Astadia has been a trusted Microsoft platform modernization partner for years,” said Bob Ellsworth, Microsoft’s Director of Enterprise Modernization and Azure HiPo Partners. “Astadia is a proven mainframe applications and database consultancy and their focus on Azure will benefit numerous enterprise companies.”

Astadia’s Mainframe to Azure Reference Architectures Built on Decades of Experience

Astadia has completed over 200 successful platform modernization projects and has a proven methodology, best practices and proprietary tools for discovery, planning, implementation and ongoing management and support. The Mainframe to Cloud Reference Architectures cover the following topics:

  • Drivers and challenges associated with modernizing mainframe workloads
  • A primer on the specific mainframe architectures
  • A primer on the Microsoft Azure architecture
  • Detailed Reference Architecture diagrams and accompanying narrative

Availability

The Unisys to Microsoft Azure Reference Architecture is available for free download at http://bit.ly/2u5zz2K

The IBM Mainframe to Microsoft Azure Reference Architecture is available for free download at http://bit.ly/2v487zt

skyfonts (5.9.2.0)

SkyFonts allows you to install fonts from participating sites such as Google Fonts and keep them up to date.

6502 Retrocomputing Goes to the Cloud

In what may be the strangest retrocomputing project we’ve seen lately, you can now access a virtual 6502 via Amazon’s Lambda computing service. We don’t mean there’s a web page with a simulated CPU on it. That’s old hat. This is a web service that takes a block of memory, executes 6502 code that it finds in it, and then returns a block of memory after a BRK opcode or a time out.

You format your request as a JSON-formatted POST request, so anything that can do an HTTP post can probably access it. If you aren’t feeling like writing your own client, the main page has a form you can fill out with some sample values. Just be aware that the memory going in and out is base 64 encoded, so you aren’t going to see instantly gratifying results.
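If you'd rather script the round trip than use the form, a minimal Python sketch is below. The endpoint URL and the JSON field names here are placeholders rather than the service's documented schema (check the project's main page for the real values); the point is just the base64 handling on the way in and out.

import base64
import json
import urllib.request

# The example program from the article: three LDA/STA pairs that store 01, 05, 08 into memory
program = bytes.fromhex("a9018d0002a9058d0102a9088d0202")

# The memory image going in is base64 encoded
payload = {"memory": base64.b64encode(program).decode()}

# Placeholder URL and field names -- not the real API schema
req = urllib.request.Request(
    "https://example.execute-api.us-east-1.amazonaws.com/prod/run6502",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# The returned memory block is base64 encoded too; decode it to inspect the bytes
print(base64.b64decode(result["memory"]).hex(" "))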

You may not be familiar with Amazon Lambda. It is the logical extension of the Amazon cloud services. Time was that you paid to have a server in a data center. The original Amazon cloud services let you spin up a virtual server that could come into existence when needed. You could also duplicate them, shut them down, and so on. However, Lambda goes one step further. You don’t have a server. You just have a service. When someone makes a request, the Amazon servers handle it. They also handle plenty of other services for other people.

There’s some amount of free service, but eventually, they start charging you for every 100 ms of execution you use. We don’t know how long the average 6502 program runs.

Is it practical? We can’t think of why, but we’ve never let that stop us from liking something before. Just to test it, we put the example code into an online base64 decoder and wound up with this:

a9 01 8d 00 02 a9 05 8d 01 02 a9 08 8d 02 02

Then we went over to an online 6502 disassembler and got:

* = C000
C000 A9 01 LDA #$01
C002 8D 00 02 STA $0200
C005 A9 05 LDA #$05
C007 8D 01 02 STA $0201
C00A A9 08 LDA #$08
C00C 8D 02 02 STA $0202
C00F .END

We then ran the 6502cloud CPU and decoded the resulting memory output to (with a bunch of trailing zeros omitted):

01 05 08 00 00 00 00 00

So for the example, at least, it seems to work.

We’ve covered giant 6502s and small 6502 systems. We have even seen that 6502 code lives inside Linux. But this is the first time we can remember seeing a disembodied CPU accessible by remote access in the cloud.

Tesla is building world’s largest backup battery in Australia

After blackouts left 1.7 million residents without electricity, Elon Musk famously guaranteed that Tesla could supply 100 megawatts of battery storage in 100 days. The company has announced it will do just that, supplying a Powerpack battery storage system that can power more than 30,000 homes. The 100-megawatt project "will be the highest power battery system in the world by a factor of three," tweeted CEO Elon Musk. It will back up the 315 megawatt Hornsdale Wind Farm, charging during periods of low energy usage and providing electricity during peak hours.

Though the company seemed destined to get the job, the South Australian government picked it after a "competitive bidding process," Tesla said. It added that the size of the system will be enough to cover the 30,000 or so homes in the region that were affected by blackouts.


Tesla’s Powerpack battery storage system (AOL/Roberto Baldwin)

Those power outages set off a political conflagration that culminated in a very testy press conference with South Australia’s Premier and the Federal Environment Minister. Shortly afterwards, Prime Minister Malcolm Turnbull unveiled a $1.5 billion plan to expand the power grid to run an additional 500,000 homes, including backup battery storage.

That was when Tesla Energy head Lyndon Rive stepped in and made his "100 megawatts in 100 days" pledge, with (his cousin) Musk upping the ante by promising the system would be free if they didn’t achieve the goal.

Musk confirmed that he’d keep the promise, telling Australia’s ABC News that "if South Australia is willing to take a big risk, then so are we." The 100 day pledge reportedly begins once the grid interconnection agreement is inked, and Musk estimates that it will cost him "probably $50 million or more" if the installation isn’t completed in time.

Via: Elon Musk (Twitter)

Source: Tesla

aws-password-extractor (1.0.2)

Behind the scenes of Slovakians’ fight to liberate their .sk domain

Analysis The Slovakian internet community is pressuring its government to block the sale of the country’s .sk internet registry, asking for it to “be returned to the people of Slovakia.”

Having run the top-level domain (TLD) for over a decade, registry operator SK-NIC announced earlier this year that it was planning to sell .sk and was in talks with London-based Centralnic, which also operates a number of other general and country-specific top-level domains.

The registry has a healthy 360,000 registered domains and charges €10 wholesale to registrars (who then sell the domains on to internet users with a markup). But its systems are outdated and the domain name market has moved rapidly in recent years with the creation of more than 1,000 new generic TLDs.

That makes .sk a poor investment for SK-NIC – which is owned by one of Slovakia’s largest telcos, DanubiaTel – and a good acquisition for Centralnic, which already has the modern systems and infrastructure to sell domain names.

However, when the news broke that a foreign company was looking to buy the .sk registry, protests were launched claiming, among other things, that the registry had been “stolen” and was now being sold off for profit to a foreign company.

One group has set up a petition that has attracted just under 10,000 signatures and aims to “create public pressure on decision-makers and return .SK back to the community and people.”

A number of talks and blog posts have also emerged arguing that a new non-profit organization needs to be set up to run .sk for the people, rather than a publicly listed company.

Those efforts criticize any move away from the current (outdated) registration system, named FRED, and any effort to open up registration of .sk domains to people living outside Slovakia, and they are increasingly critical of a small number of government officials who seemingly approve of and are pushing the sale.

Huh

And if those last few complaints seem unusually industry specific and political, that’s because of who is behind the campaign: a new breed of internet entrepreneurs who have recently set up their own political party, Progressive Slovakia, and are hoping to steal seats from the mainstream parties.

The people in the party behind the push also happen to be in charge of or employed by the exact same registrars that would profit most from a new non-profit organization running .sk over which they had significant influence.

As for squaring complaints about the .sk registration systems being out of date with insisting that the old FRED registration system be retained: that is almost certainly because it would cost those companies time and money to shift to a new registration system.

And the concerns about making .sk domains available outside Slovakia? It has become common practice for country-code top-level domains to be opened up to anyone worldwide interested in a specific ending. In most cases, it has led to a positive situation where companies use country-specific domains for that market and everyone benefits from a larger registry.

However, if Slovakia’s market is opened up and the registration system is moved to an industry standard, it means that large global domain registrars – like GoDaddy, for example – will bring serious competition overnight.

In addition, the main claim that the Slovakian top-level domain was “stolen” from a university and given to a telco company by government officials doesn’t hold much water.

It was extremely common in the early days of the internet for country-code top-level domains to be run by universities and then, as demand grew and national governments became interested, most countries moved to a model exactly like SK-NIC’s: a new non-profit run by a company with technical expertise and with a board structure that included government, business and internet community voices.

The same approach was taken in the UK for .uk domain names with Nominet.

Let’s Encrypt – Wildcard Certificates Coming January 2018

This will make it easier to secure web servers for internal, non-internet-facing/connected tools. This will be especially helpful for anyone whose DNS service does not support DNS-01 hooks for alternative LE verifications. Generate a wildcard CSR on an internet-facing server, then transfer the valid wildcard cert to the internal server.
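If it helps, here is a minimal sketch of the "generate a wildcard CSR" step using Python's cryptography package; the domain is a placeholder, and your ACME client may well generate the key and CSR for you instead.

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

domain = "*.internal.example.com"  # placeholder -- substitute your own domain

# Generate the private key that will stay alongside the issued certificate
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Build a CSR carrying the wildcard name in both the CN and the SAN extension
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, domain)]))
    .add_extension(x509.SubjectAlternativeName([x509.DNSName(domain)]), critical=False)
    .sign(key, hashes.SHA256())
)

# Write the key and CSR out in PEM form for whichever client performs the issuance
with open("wildcard.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
with open("wildcard.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))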

 

http://bit.ly/2sLpY1r

Announcing new set of Azure Services in the UK

We’re pleased to announce the following services which are now available in the UK!

Azure Container Service – Azure Container Service is the fastest way to realize the benefits of running containers in production. It uses customers’ preferred choice of open source technology, tools, and skills, combined with the confidence of solid support and a thriving community ecosystem. Simplified configurations of proven open source container orchestration technology, optimized to run in the Azure cloud, are provided. In just a few clicks, customers can deploy container-based applications into production, on a framework designed to help manage the complexity of containers deployed at scale. Unlike other container services, Azure Container Service is built on 100% open source software and offers a choice between the open source orchestrators Kubernetes, DC/OS, or Docker Swarm with Swarm mode. The UK region is the first Azure region featuring Docker Swarm mode instead of legacy Swarm.

Learn more about Container Service.

Log Analytics – Azure Log Analytics is a service in the Operations Management Suite (OMS) offering that monitors your cloud and on-premises environments to maintain their availability and performance. It collects data generated by resources in your hybrid cloud environments and from other monitoring tools to provide insights and analysis and help you detect and respond to issues quickly.
With the availability of Log Analytics in the UK, you can now access a full set of operations management and security services (Log Analytics, Automation, Security Center, Backup and Site Recovery) in the UK.

Learn more about Log Analytics.

Logic Apps – Logic Apps provides a way to simplify and implement scalable integrations and workflows in the cloud. It provides a visual designer to model and automate your process as a series of steps known as a workflow. Logic Apps is a fully managed iPaaS (integration Platform as a Service), so developers don’t have to worry about building, hosting, scalability, availability and management. Logic Apps scales up automatically to meet demand.

Learn more about Logic Apps.

Azure Stream Analytics –  Azure Stream Analytics is a fully managed, cost effective real-time event processing engine that helps to unlock deep insights from data. Stream Analytics makes it easy to set up real-time analytic computations on data streaming from devices, sensors, web sites, social media, applications, infrastructure systems, and more.

With a few clicks in the Azure portal, you can author a Stream Analytics job specifying the input source of the streaming data, the output sink for the results of your job, and a data transformation expressed in a SQL-like language. You can monitor and adjust the scale/speed of your job in the Azure portal to scale from a few kilobytes to a gigabyte or more of events processed per second.
Stream Analytics leverages years of Microsoft Research work in developing highly tuned streaming engines for time-sensitive processing, as well as language integrations that make such processing intuitive to specify.

Learn more about Stream Analytics.

SQL Threat Detection –  SQL Threat Detection provides a new layer of security, which enables customers to detect and respond to potential threats as they occur by providing security alerts on anomalous activities. Users will receive an alert upon suspicious database activities, potential vulnerabilities, and SQL injection attacks, as well as anomalous database access patterns. SQL Threat Detection alerts provide details of suspicious activity and recommend action on how to investigate and mitigate the threat. Users can explore the suspicious events using SQL Database Auditing to determine if they are caused by an attempt to access, breach, or exploit data in the database. Threat Detection makes it simple to address potential threats to the database without the need to be a security expert or manage advanced security monitoring systems.

Learn more about SQL Threat Detection.

SQL Data Sync Public Preview –  SQL Data Sync (Preview) is a service of SQL Database that enables you to synchronize the data you select across multiple SQL Server and SQL Database instances. To synchronize your data, you create sync groups which define the databases, tables and columns to synchronize as well as the synchronization schedule. Each sync group must have at least one SQL Database instance which serves as the sync group hub in a hub-and-spoke topology.

Learn more about Azure SQL Data Sync.

Managed Disks SSE (Storage Service Encryption) –  Azure Storage Service Encryption (SSE) is now supported for Managed Disks. SSE provides encryption-at-rest and safeguards your data to meet your organizational security and compliance commitments.
Starting June 10th, 2017, all new managed disks, snapshots, images and new data written to existing managed disks are automatically encrypted-at-rest with keys managed by Microsoft.

Learn more about Storage Service Encryption for Azure Managed Disks.

We are excited about these additions, and invite customers using the UK Azure region to try them today!

Einride’s self-driving truck looks like a giant freezer on wheels

Einride has just revealed the prototype of the T-pod, its autonomous electric truck. The Swedish company’s self-driving vehicle can transport 15 standard pallets and can travel 124 miles on one charge. And because there’s no need for a person to sit inside of it, the T-pod also has no cab space and no windows — giving it a very futuristically odd look.

The truck uses a hybrid driverless system. While on highways, the T-pod drives itself, but on main roads, a human will remotely manage the driving system. People will also monitor T-pods as they drive on highways in case a situation arises that necessitates human control. Einride is currently working on charging stations for the trucks.

Einride isn’t the only company working on driverless shipping trucks. Waymo, Uber and Daimler are among the companies also developing similar vehicles. For shipping at larger scales, self-navigating and remote-controlled ships as well as massive drones are also in the works.

The T-pod prototype isn’t fully developed quite yet, but Einride expects to have its first completed truck available to customers in the fall. By 2020, the company plans to have a fleet of 200 goofy-looking trucks that will travel between Swedish cities Gothenburg and Helsingborg, carrying an expected two million pallets per year.

Source: VentureBeat

MusicBrainz User Survey

It’s hard to overstate how much MusicBrainz depends on the community behind it. In 2016 alone, 20,989 editors made a total of 5,935,653 edits, at a continuously increasing rate.

But while MusicBrainz does collect data on a lot of different entities, its users are not one of them, and the privacy policy is pretty lean.
Unfortunately this does make it fairly difficult to find out who you are, how you use MB and why you do it.

Seeing as this kind of information is fairly important for the upcoming project of improving our user experience, I volunteered to create a survey to allow you to tell us how you use MB, what you like about it and what you don’t like quite as much.

So without further ado, click on the banner to get to the survey: (It shouldn’t take more than 15 minutes of your time.)
MusicBrainz User Survey

Now if you’re still reading this blog post, that hopefully means you’ve already completed the survey! I’d like to thank Quesito who joined this project earlier this year and has been a great deal of help, our former GCI student Caroline Gschwend who helped with the UX part of the survey, CatQuest who has been around to give great feedback since the first draft and of course also all the other people who helped bring this survey to the point of release.

If you’ve got any feedback on or questions about the survey itself, please reply to the Discourse forum topic.

Volvo will only make electric and hybrid cars starting in 2019

Volvo is done with entirely traditional engines and exclusively gas-powered vehicles, the company announced. By 2019, Volvo Group intends to offer only fully electric or hybrid engines on all new models, making it the first automaker to commit to using only alternative drivetrains.

The end of the solely combustion engine-powered car did seem inevitable, given the advantages of electric and hybrid from a manufacturing and performance standpoint, and given the industry’s heavy investment in autonomous vehicle tech. But for Volvo to commit to going entirely hybrid and electric just two years from now is still the strongest sign we’ve seen yet that the purely combustion engine’s days might be numbered sooner rather than later.

Volvo already had a target sales figure of 1 million electric and hybrid cars by 2025, and now that target seems well within reach given it’s all it’ll be selling in terms of new vehicles. Volvo also announced it would launch five new electric and hybrid cars between 2019 and 2021, and that two of those will be made by Polestar, which the company recently announced would become its own subsidiary and brand selling performance EVs, likely to compete with high-end Tesla models.

Part of the cost benefit of making electric cars lies in dealing with ever-stricter emissions requirements, which are set to get tighter in most key international markets, including China, where Volvo’s owner Geely is based. Production costs of EV parts and batteries are also coming down as capacity and manufacturing processes improve.

Volvo getting a jump on the move to electric, with hybrids included as a transitional stopgap, is a smart and aggressive move for claiming a leadership position in the market of the future. A lot of companies talk about their work with, and commitment to, alternative drivetrains, but this is really putting your money where your mouth is.

Featured Image: Doug Geisler

Choosing the right compute option in GCP: a decision tree

By Terrence Ryan, Developer Advocate and Adam Glick, Product Marketing Manager

When you start a new project on Google Cloud Platform (GCP), one of the earliest decisions you make is which computing service to use: Google Compute Engine, Google Container Engine, App Engine or even Google Cloud Functions and Firebase.

GCP offers a range of compute services that go from giving users full control (i.e., Compute Engine) to highly-abstracted (i.e., Firebase and Cloud Functions), letting Google take care of more and more of the management and operations along the way.

Here’s how many long-time readers of our blog think about GCP compute options. If you’re used to managing VMs and want a similar experience in the cloud, pick Compute Engine. If you use containers and Kubernetes, you can abstract away some of the necessary management overhead by using Container Engine. If you want to focus on your code and avoid the infrastructure pieces entirely, use App Engine. Finally, if you want to focus purely on code and build microservices that expose API endpoints for your applications, use Firebase and Cloud Functions.

Over the years, you’ve told us that this model works great if you have no constraints, but can be challenging if you do. We’ve heard your feedback and propose another way to choose your compute options using a constraint-based set of questions. (It should go without saying that we’re considering very small aspects of your project.)

1. Are you building a mobile or HTML application that does its heavy lifting, processing-wise, on the client? If you’re building a thick client that only relies on a backend for synchronization and/or storage, Firebase is a great option. Firebase allows you to store complex NoSQL documents (or objects if that’s how you think of them) and files using a very easy-to-use API and client available for iOS, Android and Javascript. There’s also a REST API for access from other platforms.

2. Are you building a system based more on events than user interaction? In other words, are you building an app that responds to uploaded files, or maybe logins to other applications? Are you already looking at “serverless” or “Functions as a Service” solutions? Look no further than Cloud Functions. Cloud Functions allows you to write Javascript functions that run on Node.js and that can call any one of our APIs including Cloud Vision, Translate, Cloud Storage or over 100 others. With Cloud Functions, you can build complex individual functions that get exposed as microservices to take advantage of all our services without having to maintain systems and glue them all together.

3. Does your solution already exist somewhere else? Does it include licensed software? Does it require anything other than HTTP/S? If you answered “no,” App Engine is worth a look. App Engine is a serverless solution that runs your code on our infrastructure and charges you only for what you use. We scale it up or down for you depending on demand. In addition, App Engine has access to all the Google SDKs available so you can take advantage of the full Google Cloud ecosystem.

4. Are you looking to build a container-based system built on Kubernetes? If you’re already using Kubernetes on GCP, you should really consider Container Engine. (You should think about it wherever you’re going to run Kubernetes, actually.) Container Engine reduces building a Kubernetes solution to a single click. Additionally, it auto-scales Kubernetes cluster members, allowing you to build Kubernetes solutions that grow and contract based on demand.

5. Are you building a stateful system? Are you looking to use GPUs in your solution? Are you building a non-Kubernetes container-based solution? Are you migrating an existing on-prem solution to the cloud? Are you using licensed software? Are you using protocols other than HTTP/S? Have you not found another solution to meet your needs? If you answered "yes" to any of these questions, you’re probably going to need to run your solution on virtual machines on Compute Engine. Compute Engine is our most flexible computing product, and allows you the most freedom to configure and manage your VMs however you like.

Put all of these questions together and you get the following flowchart:

This is by no means a comprehensive decision tree, and each one of our products supports a wider range of use cases than is presented here. But this should be a good guide to get you started.

To find out more about our computing solutions, please check out Computing on Google Cloud Platform and then try it out for yourself today with $300 in free credits when you sign up.

Happy building!

Alexa is learning more new skills every day

Just two months after Amazon announced it was "doubling down" on its Echo ecosystem, the company has confirmed that its Alexa voice platform has passed 15,000 skills. Impressive, especially in comparison to Google Assistant’s 378 voice apps and Cortana‘s meager 65 — but what’s more impressive is the rate at which Alexa is gaining these skills.

Alexa reached 15,000 skills in June — during this month alone new skill introductions increased by 23 percent. The milestone also represents a 50-percent increase in skills since February, when Amazon officially announced it had hit 10,000 — and even that figure was triple what it was the previous September.

Alexa is gaining skills rapidly, which is no doubt part of Amazon’s plan to maintain its dominance in the voice-powered device landscape — it’s on track to control 70 percent of the market this year. Of course, its acceleration in June may well have been spurred on by Apple’s announcement that a similar product, the Siri-powered HomePod, will be on the market come December.

Does the HomePod represent serious competition to Amazon? Not yet. For a start, the HomePod’s unique selling point seems to be the quality of its speakers, rather than the assistant that comes with it, and no one knows yet whether third-party developers will be able to create HomePod-compatible apps.

And yet Amazon is expanding its Alexa ecosystem at a dizzying rate, and throwing up some red flags as it does. Developers creating popular game skills are being given cash rewards, but there’s no overarching tool to allow creators to make money from their apps, nor is there a team dedicated to monitoring app service violations.

This focus on ecosystem expansion, instead of refinement, could well cause problems for Amazon down the line, but certainly at this stage the company needn’t worry about the competition. Anyway, no doubt there will soon be a skill that lets Alexa do the worrying instead.

Via: TechCrunch

Source: Voicebot

This little 64-bit NanoPi went to wireless

The $25 NanoPi Neo Plus2 SBC combines the WiFi, Bluetooth, and 8GB eMMC of the Neo Air with the quad-A53 Allwinner H5 of the Neo2, and boosts RAM to 1GB. Despite bulking up in one dimension to 52 x 40mm, FriendlyElec’s NanoPi Neo Plus2 is still part of the headless, IoT-oriented Neo family, joining […]

Troubleshoot OneNote problems, errors & issues in Windows 10

Microsoft OneNote is excellent software for gathering information and collaborating with multiple users. The software has been updated and gets better over time – but nothing is perfect at the end of the day, and there may be times when you will have to troubleshoot OneNote errors & problems. This post runs you through some of the issues which you may face at some point.

OneNote problems, errors & issues

If Microsoft OneNote is not working, then this post will help you troubleshoot & fix OneNote problems, errors & issues in Windows 10, and suggest workarounds too.

Open Notebooks created in earlier versions of OneNote

The later versions of OneNote support documents in the 2010-2016 format. If a user tries to open a document created in OneNote 2003 or OneNote 2007, it will not open directly. However, the documents can be converted to the appropriate format, so they work well with the later versions of OneNote. This can be done as follows:

  1. Open the notebook in OneNote 2016 or 2013 (even though it might not display properly).
  2. Select the File tab and click on Info.
  3. Next to the name of your notebook, click on Settings and then on Properties.
  4. In the window that opens, choose Convert to 2010-2016.
  5. The converted file can be opened on Windows Mobile as well.

OneNote can’t open my page or section

If you see the “There’s a problem with the contents in this section” error message, open the notebook in the desktop version of OneNote, which provides notebook recovery options.

SharePoint related errors with OneNote

Most errors reported on OneNote are with sites shared on SharePoint. Please log on to the system as an administrator before proceeding with the steps.

Syncing issue with My SharePoint Notebook

OneNote supports versions of SharePoint that are newer than SharePoint 2010. Older versions are not supported; that is by design.

TIP: This post will show you how to enable or disable syncing of files from OneNote.

Turn off Check-In/Check-Out in SharePoint Document Library

  1. Open the document library in SharePoint.
  2. In the ribbon for Library Tools, select Library, then Library settings and then Versioning settings.
  3. Change the value of Require Check Out to No.

Turn off Minor Versions in SharePoint Document Library

  1. Open the document library in SharePoint.
  2. In the ribbon for Library Tools, select Library, then Library settings and then Versioning settings.
  3. Change the value of Document Versioning History to No Versioning.

Turn off Required Properties in SharePoint Document Library

  1. Open the document library in SharePoint.
  2. In the ribbon for Library Tools, select Library, then Library settings.
  3. Find the table titled Columns on the window and check if any of the items under the Required column have a check mark.
  4. Should you find any item marked as required, then set its value to No.

OneNote Quota errors

Storage issues might also be a problem for those working with OneNote. Some of the issues with exceeded quota limits could be mitigated as below, says Microsoft.

To start with, figure out if the notebook is stored on OneDrive or SharePoint. The difference can be identified by looking at the URL: OneDrive URLs will have some variant of OneDrive in them, while SharePoint URLs are company-specific.

  1. If your notebook is on OneDrive, check if you can free up space on OneDrive, or else you can buy more space.
  2. If you have exceeded the limit for SharePoint, you might need to contact the SharePoint administrator for help.

OneNote is not working

If the OneNote desktop software is not working, you may Repair your Microsoft Office installation via the Control Panel. This will reinstall the Microsoft OneNote software too.

If the OneNote Windows Store app is not working on your Windows 10 PC, then you may uninstall it using our 10AppsManager for Windows 10. Once done, you may install it again by searching for it in the Windows Store.

More OneNote help topics:

Microsoft made its AI work on a $10 Raspberry Pi

When you’re far from a cell tower and need to figure out if that bluebird is Sialia sialis or Sialia mexicana, no cloud server is going to help you. That’s why companies are squeezing AI onto portable devices, and Microsoft has just taken that to a new extreme by putting deep learning algorithms onto a Raspberry Pi. The goal is to get AI onto "dumb" devices like sprinklers, medical implants and soil sensors to make them more useful, even if there’s no supercomputer or internet connection in sight.

The idea came about from Microsoft Labs teams in Redmond and Bangalore, India. Ofer Dekel, who manages an AI optimization group at the Redmond Lab, was trying to figure out a way to stop squirrels from eating flower bulbs and seeds from his bird feeder. As one does, he trained a computer vision system to spot squirrels, and installed the code on a $35 Raspberry Pi 3. Now, it triggers the sprinkler system whenever the rodents pop up, chasing them away.

"Every hobbyist who owns a Raspberry Pi should be able to do that," Dekel said in Microsoft’s blog. "Today, very few of them can." The problems is that it’s too expensive and impractical to install high-powered chips or connected cloud-computing devices on things like squirrel sensors. However, it’s feasible to equip sensors and other devices with a $10 Raspberry Zero or the pepper-flake-sized Cortex M0 chip pictured above.


All the squirrel-spotting power you need (Matt Brian/AOL)

To make it work on systems that often have just a few kilobytes of RAM, the team compressed neural network parameters down to just a few bits instead of the usual 32. Another technique is "sparsification" of algorithms, a way of pruning them down to remove redundancies. By doing that, they were able to make an image detection system run about 20 times faster on a Raspberry Pi 3 without any loss of accuracy.
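As a rough illustration of those two ideas (and not Microsoft's actual implementation), here is what crude uniform quantization and magnitude-based sparsification of a weight matrix look like in NumPy:

import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 256)).astype(np.float32)  # stand-in for a trained layer

def quantize(w, bits=4):
    # Map float32 weights onto 2**bits evenly spaced levels; store small integer codes
    levels = 2 ** bits - 1
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / levels
    codes = np.round((w - lo) / scale).astype(np.uint8)
    return codes, lo, scale

def sparsify(w, keep=0.1):
    # Zero out all but the largest-magnitude fraction of weights (pruning redundancies)
    threshold = np.quantile(np.abs(w), 1.0 - keep)
    return np.where(np.abs(w) >= threshold, w, 0.0)

codes, lo, scale = quantize(weights)
dequantized = codes * scale + lo
pruned = sparsify(weights)

print("mean quantization error:", float(np.abs(weights - dequantized).mean()))
print("fraction of weights kept:", np.count_nonzero(pruned) / weights.size)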

However, taking it to the next level won’t be quite as easy. "There is just no way to take a deep neural network, have it stay as accurate as it is today, and consume 10,000 times less resources. You can’t do it," said Dekel. For that, they’ll need to invent new types of AI tech tailored for low-powered devices, and that’s tricky, considering researchers still don’t know exactly how deep learning tools work.

Microsoft’s researchers are working on a few projects for folks with impairments, like a walking stick that can detect falls and issue a call for help, and "smart gloves" that can interpret sign language. To get some new ideas and help, they’ve made some of their early training tools and algorithms available to Raspberry Pi hobbyists and other researchers on Github. "Giving these powerful machine-learning tools to everyday people is the democratization of AI," says researcher Saleema Amershi.

Via: Mashable

Source: Microsoft

New Power Bundle for Amazon WorkSpaces – More vCPUs, Memory, and Storage

Are you tired of hearing me talk about Amazon WorkSpaces yet? I hope not, because we have a lot of customer-driven additions on the roadmap! Our customers in the developer and analyst community have been asking for a workstation-class machine that will allow them to take advantage of the low cost and flexibility of WorkSpaces. Developers want to run Visual Studio, IntelliJ, Eclipse, and other IDEs. Analysts want to run complex simulations and statistical analysis using MatLab, GNU Octave, R, and Stata.

New Power Bundle
Today we are extending the current set of WorkSpaces bundles with a new Power bundle. With four vCPUs, 16 GiB of memory, and 275 GB of storage (175 GB on the system volume and another 100 GB on the user volume), this bundle is designed to make developers, analysts, (and me) smile. You can launch them in all of the usual ways: Console, CLI (create-workspaces), or API (CreateWorkSpaces):
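If you go the API route, a minimal boto3 sketch might look like the following; the directory, user and bundle IDs are placeholders (you can look up the Power bundle's ID in your region with describe_workspace_bundles), and AutoStop is just one of the two billing modes mentioned below.

import boto3

workspaces = boto3.client("workspaces", region_name="us-east-1")

# Placeholder IDs -- substitute your own directory, user, and the Power bundle's ID
response = workspaces.create_workspaces(
    Workspaces=[
        {
            "DirectoryId": "d-906700000000",
            "UserName": "jdoe",
            "BundleId": "wsb-000000000",
            "WorkspaceProperties": {"RunningMode": "AUTO_STOP"},  # or "ALWAYS_ON"
        }
    ]
)

# CreateWorkspaces reports partial success, so check both lists
print("Pending:", response["PendingRequests"])
print("Failed:", response["FailedRequests"])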

One really interesting benefit to using a cloud-based virtual desktop for simulations and statistical analysis is the ease of access to data that’s already stored in the cloud. Analysts can mine and analyze petabytes of data stored in S3 that is effectively local (with respect to access time) to the WorkSpace. This low-latency access will boost productivity and also simplifies the use of other AWS data analysis tools such as Amazon Redshift, Amazon Redshift Spectrum, Amazon QuickSight, and Amazon Athena.

Like the existing bundles, the new Power bundle can be used in either billing configuration, AlwaysOn or AutoStop (read Amazon WorkSpaces Update – Hourly Usage and Expanded Root Volume to learn more). The bundle is available in all AWS Regions where WorkSpaces is available and you can launch one today! Visit the WorkSpaces Pricing page for pricing in your region.

Jeff;

Not sure where to store your bikes? How about a fake skip

Dummy skip provides accommodation for several bicycles and – the theory goes – will not be looked at by thieves

Biskiple

Ireland the best place to set up a data center in the EU

A report from data center consulting group BroadGroup says Ireland is the best place, at least in Europe, to set up a data center. It cites connectivity, taxes and active government support among the reasons.

BroadGroup’s report argued Ireland’s status in the EU, as well as its “low corporate tax environment,” makes it an attractive location. It also cites connectivity, as Ireland will get a direct submarine cable system to France—bypassing the U.K.—in 2019. The country also has a high installed base of fibre and dark fibre, with further deployment planned.

The report also notes active government support for inward investment from companies such as Amazon and Microsoft has resulted in the construction of massive facilities around Dublin.

“Even now, authorities are seeking to identify potential land banks for new large-scale data centre facilities in Ireland, which indicates that the supply of more space will continue to enter the market,” the report says.

U.S. companies with data centers in Ireland

Amazon and Microsoft both have facilities in Dublin, with Microsoft’s being one of the largest in Europe. Now, Apple is looking to build a €850 million data center in Athenry, outside Dublin. It announced the plans two years ago, along with a sister location in Denmark.

Two years later, the Danish site is up and running, while Athenry hasn’t even broken ground due to legal challenges brought by three objectors. The decision has since been held up because there aren’t enough judges to make a ruling. The ruling is expected to go in Apple’s favor.

Another factor favoring Ireland is that it has benefited from investment by U.S. firms in the gaming, pharmaceuticals and content sectors making the country their European headquarters. Also, data center investment covers a wide range of business models, making Ireland the main regional hub for webscale companies.

Renewable energy is also one reason for Ireland’s shine. EirGrid says potential data center power capacity could increase to 1,000 MW after 2019. Renewable energy—primarily from wind energy—is a key government priority, with a target of 40 percent by 2020, well beyond the EU mandatory benchmark of 16 percent. The proposed Apple data center would be powered 100 percent by renewable energy.

Of course, Ireland isn’t alone with its data center ambitions. Scotland recently saw the opening of a 60,000-sq.-ft. data center that can be expanded to 500,000 square feet.

New troubleshooting and diagnostics for Azure Files Storage mounting errors on Windows

Azure File Storage offers fully managed file shares in the cloud using the Server Message Block (SMB) protocol, which is the predominant file share protocol for on-premises Windows use cases. Azure Files can be mounted from any client OS that implements the SMB versions supported by Azure Files. Today, we are introducing AzFileDiagnostics to help first-time Azure Files users ensure that the Windows client environment has the correct prerequisites. AzFileDiagnostics automates detection of most of the symptoms mentioned in the troubleshooting Azure Files article and helps you set up your environment for optimal performance.

In general, mounting a file share on Windows is as simple as running a standard “net use” command. When you create a share, the Azure Portal automatically generates a “net use” command and makes it available for copy and paste. You can simply click the “Connect” button, copy the command for mounting the file share on your client, paste it, and you have a drive with the file share mounted. What could go wrong? Well, as it turns out, use of different clients, SMB versions, firewall rules, ISPs, or IT policies can affect connectivity to Azure Files. The good news is that AzFileDiagnostics isolates and examines each source of possible issues and in turn provides you with advice or workarounds to correct the problem.

As an example, Azure Files supports SMB protocol versions 2.1 and 3.0. To ensure secure connectivity, Azure Files requires communication from another region or from on-premises to be encrypted, so SMB 3.0 channel encryption is required for those use cases. AzFileDiagnostics detects the SMB version on the client and automatically determines whether the client meets the necessary encryption requirement.

How to use AzFileDiagnostics

You can download AzFileDiagnostics from Script center today and simply run:

PowerShell Command:

AzFileDiagnostics.ps1 [-StorageAccountName <storage account name>] [-FileShareName <share name>] [-EnvironmentName <AzureCloud| AzureChinaCloud| AzureGermanCloud| AzureUSGovernment>]

Usage Examples:

AzFileDiagnostics.ps1 
AzFileDiagnostics.ps1 -UncPath \\storageaccountname.file.core.windows.net\sharename 
AzFileDiagnostics.ps1 -StorageAccountName storageaccountname -FileShareName sharename -Environment AzureCloud

In addition to diagnosing issues, it will present you with an option to mount the file share when the checks have successfully completed.

Learn more about Azure Files

Feedback

We hope that AzFileDiagnostics will make your getting started experience smoother. We’d love to hear your feedback. If there are additional troubleshooting topics for Azure Files that you would like to see, please leave a comment below. In addition, if you have any feature requests, we are always listening to your feedback on our User Voice. Thanks!

VMware prepping NSX-as-a-service running from the public cloud

The content catalog for VMworld 2017 has appeared and as usual offers a few hints about announcements at the show and the company’s future plans.

Perhaps most interesting are the sessions pertaining to VMware’s partnership with Amazon Web Services. One is titled “VMware NSXaaS – Secure Native Workloads in AWS”. The session description says “VMWare NSXaaS provides you the ability to manage Networking and Security policies in Public Cloud environments such as AWS.”

Once we saw that “NSXaaS” reference we quickly spotted job ads that say “VMware NSX Team is building an elite team of Devops/SRE engineers to run our crown jewel project “NSXaaS” on Public Cloud.” Whoever gets the gig will be “… responsible to run NSX as a Service Reliably with no down time. This will include proactively finding service reliability issues & resolving them as well as responding to customer tickets as a line of defense before involving development engineering.”

Suffice to say, it looks like VMware’s going to NSX-as-a-service, which is interesting!

Another session, “VMware Cloud on AWS – Getting Started Workshop” offers the chance to “Be among the first to see the new VMware on AWS solution. You will interact with the VMware Cloud interface to perform basic tasks and manage your public cloud capacity.” That description is similar to other AWS-related sessions in that it offers demos of actual services, which suggests to The Register‘s virtualization desk that come VMworld USA in late August VMware-on-AWS will either have been launched or be very close to a debut.

Session titles like “VMware Cross Cloud Services – Getting Started” suggest Cross Cloud will also debut at or before the show.

A session titled “VMware Integrated OpenStack 4.0: What’s New” suggests a new release is in the works, given that we’re currently on version 3.1.

“VMware Cloud Foundation Futures” promises to show off “exciting new work being done using VCF as a platform in the areas of edge computing clusters, network function virtualisation, predictive analytics, and compliance.”

“Storage at Memory Speed: Finally, Nonvolatile Memory Is Here” looks like being VMware’s explanation of how it will put byte-addressable non-volatile memory, which it calls “PMEM” and the rest of us call Optane and pals, to work. The session promises “an overview of VMware virtualization for PMEM that is now running on real PMEM products.” Speed improvements from PMEM aren’t automatic, so it will be interesting to see what Virtzilla’s come up with.

VMware’s meat and potatoes – vSphere, vCenter and the like – don’t look to have a lot new to discuss other than enhancements to PowerCLI and the vSphere HTML 5 client.

Desktop hypervisors usually get a VMworld refresh and the catalog mentions “innovations being added to VMware Fusion, VMware Workstation, and VMware Horizon FLEX” in a session titled “What’s New with …” the above-mentioned products.

There’s no session description we could find that mentions VMware App Defence, the long-awaited security product The Register believes will emerge in Q3, but the catalog is sprinkled with mentions of endpoint security and VMware’s willingness to make it better with virtualization.

VMworld Europe is in September this year, so it also fits the Q3 timeframe if VMware wanted to keep the announcement of its new security offering as the big news for its continental followers.

If you spot another session that hints at new products or directions, hit the comments! ®

Tanks for the memories: Building a post-Microsoft Office cloud suite

Analysis Microsoft for decades not only defined personal productivity and team collaboration using Office, Outlook and Exchange – it kept the competition at arm’s length.

Today, however, there’s a large community of businesses that don’t look to Microsoft for collaboration or productivity solutions at all. In fact, Microsoft doesn’t even appear on their radar when they think of the cloud, the successor to on-premises software such as Office.

If you don’t have the complexity of legacy systems to integrate with or a need for complicated macros in Excel, why would you look to the ageing software giant?

Doesn’t it make sense to go with cool, simple Software-as-a-Service solutions like G Suite, Dropbox or Slack?

Google is the single largest alternative force to Microsoft and Office 365 out there, but if you were to begin building an alternative stack, what might it look like exactly?

Mail and Calendar

Google smashed the consumer email market with Gmail and Google Calendar, quickly becoming more favoured than Outlook.com. You’ll find both products in the business-grade G Suite plan, along with eDiscovery and archiving capabilities. They have mobile apps (iOS and Android) and web browser access from the desktop.

Not surprisingly, Google Chrome is the preferred browser for full functionality, including offline access. Yes, you can use Chrome to view and send emails when you’re disconnected (if enabled by your admin), if you’ve installed the Chrome plugin and synced it first. By default, it only syncs seven days’ worth of emails and tweaking a setting will give you one month of data, max.

There’s also an offline limitation of only being able to send attachments that are less than 5MB in size. In the Google world, stars replace email flags and labels kind of replace folders. Messages can have multiple labels so they show up in multiple places, including staying in your Inbox and also displaying a personal label. Remove the Inbox label (via Move To) and your email is now only in one folder (which is actually a label view).

Calendar is pretty standard, easily sharable and it integrates with Google Hangouts (like Skype integrates with Exchange online) for scheduling & displaying online meetings. Third-party vendors jumped on G Suite integration quickly, but the gap is closing. It’s getting harder to find apps that integrate with G Suite and don’t also talk to Office 365, but they do exist. Pipedrive, Freshdesk and Mavenlink all prefer to talk to Google products.

Add-in Rapportive adds Microsoft-owned LinkedIn information to Gmail, but not to Office 365 (you’ll need something like Full Contact for that). Hopefully LinkedIn integration is something that Microsoft will nail, but we’re still waiting.

Documents, spreadsheets and presentations

For this Cloud-based discussion, we’ll leave LibreOffice out of it. Google’s Docs, Sheets and Slides allow standard word processing, number crunching and presentation templates in your browser. Want to email them? You’ll be sending a link to the file’s web location in Google Drive and the recipient will need a Google Account to even view them.

Google will argue that everyone has a Google account and if you do, it’s a very simple process to share and co-edit your files, including with people outside your organisation. Offline access is achieved through a Google Chrome plugin and you get a pseudo-form of the file, not something you can actually copy across to a USB stick. Crossing the streams here is not as effective as Google will have you believe. Yes, you can use Docs to open a Word doc and Sheets to open an Excel spreadsheet, but the formatting can be compromised depending on the complexity of the contents.

The worst “feature” is the way Google Drive handles editing of Office files. It creates a separate file that’s Google-compatible, every time you edit a Microsoft Office file. Because they are separate files, you have no version history tracking and you get multiple files with the same filename in your Drive folder. It also doesn’t lock the original Office file or support co-authoring, so your colleagues can make their own changes and save their own versions at the same time. This isn’t a problem if you live in a purely Google world, but I’ve seen finance departments of Googlefied companies cling to Excel.

London is the second city to get free gigabit WiFi kiosks

London’s countless telephone boxes become more redundant with every new mobile contract signed and throwaway tourist SIM purchased. Having a mind to update these payphones for the modern age, BT — which owns the majority of them — announced last year it had teamed up with the same crew behind New York’s LinkNYC free gigabit WiFi kiosks to make that happen. The first of these, installed along London’s Camden High Street, have been switched on today, offering the fastest public WiFi around, free phone calls, USB charging, maps, directions and other local info like weather forecasts, Tube service updates and community messages.

While the London kiosks have a slightly different name (InLinks as opposed to just Links), they are identical in what they offer, and are also funded entirely by advertising revenue generated from the large screens on either side of the monoliths. Intersection — the affiliate of Alphabet’s Sidewalk Labs that leads the Link projects — decided not to enable free internet access through the kiosks’ in-built tablets in its second city, though. This feature had to be disabled in New York, you might remember, due to a public porn problem.

Like the LinkNYC program, later plans for the UK’s next-gen phone boxes include temperature, traffic, air and noise pollution sensors. The idea being the environmental monitoring aspect will create the data streams needed for future smart city projects. New York City now hosts almost 900 free gigabit booths, with "thousands more" to be installed over the next few years. By comparison, London’s starting small with only a handful of cabinets along one major street, but many more are expected to spring up around the capital and in other large UK cities before the year’s out.

Source: BT, InLinkUK

TEMPEST In A Software Defined Radio

The content below is taken from the original (TEMPEST In A Software Defined Radio), to continue reading please visit the site. Remember to respect the Author & Copyright.

In 1985, [Wim van Eck] published several technical reports on obtaining information from the electromagnetic emissions of computer systems. In one analysis, [van Eck] reliably obtained data from a computer system over hundreds of meters using just a handful of components and a TV set. There were obvious security implications, and now computer systems handling highly classified data are TEMPEST shielded – an NSA specification for protection from this van Eck phreaking.

Methods of van Eck phreaking are as numerous as they are awesome. [Craig Ramsay] at Fox-IT has demonstrated a new method of this interesting side-channel analysis using readily available hardware (PDF warning) that includes the ubiquitous RTL-SDR USB dongle.

The experimental setup for this research involved implementing AES encryption on two FPGA boards, a SmartFusion 2 SOC and a Xilinx Pynq board. After signaling the board to run its encryption routine, analog measurement was performed on various SDRs, recorded, processed, and each byte of the key recovered.
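
To make the capture step concrete, here is a minimal sketch of what recording such emissions with the RTL-SDR dongle might look like, using the open-source pyrtlsdr Python bindings. The centre frequency, gain and sample count are illustrative placeholders chosen for this sketch, not values taken from the Fox-IT paper.

```python
# Minimal capture sketch: record raw IQ samples around a suspected emission
# frequency with an RTL-SDR, then inspect the power spectrum. The tuning
# values below are placeholders, not figures from the Fox-IT research.
import numpy as np
from rtlsdr import RtlSdr  # pip install pyrtlsdr

sdr = RtlSdr()
sdr.sample_rate = 2.4e6      # samples per second
sdr.center_freq = 128e6      # placeholder: a harmonic of the target's clock
sdr.gain = 'auto'

samples = sdr.read_samples(256 * 1024)   # complex IQ samples
sdr.close()

# Power spectrum of the capture; data-dependent leakage from the AES rounds
# shows up as structure around the clock harmonics.
spectrum = np.fft.fftshift(np.fft.fft(samples))
power_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
print(f"peak bin power: {power_db.max():.1f} dB (relative)")
```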

The results from different tests show the AES key can be extracted reliably in any environment, provided the antenna is in direct contact with the device under test. Using an improvised Faraday cage constructed out of mylar space blankets, the key can be reliably extracted at a distance of 30 centimeters. In an anechoic chamber, the key can be extracted over a distance of one meter. While this is a proof of concept, if this attack requires direct, physical access to the device, the attacker is an idiot for using this method; physical access is root access.

However, this is a novel use of software defined radio. As far as the experiment itself is concerned, the same result could be obtained much more quickly with a more relevant side-channel analysis device. The ChipWhisperer, for example, can extract AES keys using power signal analysis. The ChipWhisperer does require direct, physical access to a device, but if the alternative doesn’t work beyond one meter, that shouldn’t be a problem.
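
For comparison, the power-analysis route a ChipWhisperer takes boils down to correlation power analysis (CPA): guess a key byte, predict the leakage for each trace, and see which guess correlates best with the measurements. The sketch below is a generic, self-contained illustration of that idea on synthetic traces; it is not ChipWhisperer's API or the Fox-IT tooling, and it uses a simplified Hamming-weight model of the plaintext XOR key rather than the S-box output a real attack would target.

```python
# Generic correlation power analysis (CPA) for a single key byte on
# synthetic data. Not ChipWhisperer's API; purely an illustration.
import numpy as np

def hamming_weight(x):
    """Hamming weight of each byte in a 1-D array."""
    return np.unpackbits(x.astype(np.uint8)[:, None], axis=1).sum(axis=1)

def cpa_key_byte(traces, plaintexts):
    """Most likely key byte given traces (n, samples) and plaintext bytes (n,)."""
    centered = traces - traces.mean(axis=0)
    scores = np.zeros(256)
    for k in range(256):
        # Hypothetical leakage for this guess (a real AES attack models HW(Sbox[p ^ k]))
        hyp = hamming_weight(plaintexts ^ k).astype(np.float64)
        hyp -= hyp.mean()
        num = hyp @ centered                                       # per-sample covariance
        den = np.sqrt((hyp ** 2).sum() * (centered ** 2).sum(axis=0)) + 1e-12
        scores[k] = np.abs(num / den).max()                        # best correlation anywhere
    return int(scores.argmax())

# Synthetic demo: the "device" leaks HW(p ^ secret) plus Gaussian noise.
rng = np.random.default_rng(0)
secret = 0x3C
plaintexts = rng.integers(0, 256, size=2000, dtype=np.uint8)
leak = hamming_weight(plaintexts ^ secret).astype(np.float64)
traces = leak[:, None] + rng.normal(0.0, 1.0, size=(2000, 50))

print(hex(cpa_key_byte(traces, plaintexts)))   # recovers 0x3c with these parameters
```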

Shark Week: 6 Tips to Secure Your IT Tackle Box

The content below is taken from the original (Shark Week: 6 Tips to Secure Your IT Tackle Box), to continue reading please visit the site. Remember to respect the Author & Copyright.


Article Written by Erik Brown, CTO at GigaTrust

Scientists recently dispelled the myth that sharks attack humans because they mistake them for other prey. In fact, sharks can see clearly below the murky waters. But it’s not as easy for victims of phishing attacks to see what’s lurking behind an attached document or link within an email.

Email is the lifeblood of communications for organizations around the world. Among the 296 billion emails sent daily, dangerous messages are lurking. A successful email attack can cost companies as much as $4 million per incident. In honor of Discovery Channel’s upcoming Shark Week, let’s look at what these dangerous and misunderstood creatures can teach us about email and document security.

Beware of Phishing Attacks: Phishing attacks use “bait” to catch their victims and can cause significant damage. The 2016 DNC Hack, for example, was a pretty large bite: a leak of 19,252 emails and 8,034 attachments. Like a good fisherman, organizations should test their lines in advance by training their employees and conducting mock attacks. To minimize the damage of a leak, consider a security system that enables encrypted email and secure document collaboration.

Know the Landscape: There are over 400 species of sharks worldwide, and 2016 had a record number of shark attacks and bites (107). Just as most beaches are safe, emails are a common part of business and are generally benign. As vacationers flock to beaches this summer, they should swim with confidence yet be aware of their surroundings. Don’t venture into deep water alone, and use the buddy system to keep track of your family and friends. Employees should send and read their emails with confidence as well, and have the ability to secure critical (deep water) emails sent both inside and outside the company. A secure collaboration system that provides anyone-to-anyone secure document sharing can ensure that critical content is protected from harmful attacks.

Confidential Documents are Blood in the Water: Sharks have a very acute sense of smell and detect injured creatures from miles away. They prey on a variety of sea life and their attacks can be swift and vicious. Hackers send phishing attacks across an entire organization and when they detect an entry point, they pounce. When employees email confidential documents, the sensitive information can fall prey to these attacks and cause massive damage. Enterprises can further improve security by encrypting confidential information on disk (at rest), during communication (in transit), and while viewing and editing (in use).
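
As one concrete illustration of the “at rest” layer, a confidential file can be encrypted before it is stored or attached. The sketch below uses the Fernet recipe from the open-source Python cryptography package purely as an example; the file name and key handling are hypothetical, and this is not a description of any particular vendor’s product.

```python
# Illustrative "encryption at rest" for a confidential document using the
# Fernet recipe from the cryptography package (pip install cryptography).
# File name and key handling are hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, store this in a key management system
fernet = Fernet(key)

with open("quarterly_forecast.xlsx", "rb") as f:        # hypothetical confidential file
    ciphertext = fernet.encrypt(f.read())

with open("quarterly_forecast.xlsx.enc", "wb") as f:    # what actually lands on disk
    f.write(ciphertext)

# Later, an authorized reader holding the key recovers the plaintext:
plaintext = fernet.decrypt(ciphertext)
```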

Just Keep Swimming: Some species of sharks have to move constantly to survive. Hackers are constantly growing new teeth in the form of ever more sophisticated attacks, so IT administrators should stay on top of the latest security news and threats. Applying security updates and evolving enterprise systems will help them stay ahead of possible attacks.

Analyze the Depths: A shark’s body is supported by cartilage rather than bone, which helps it swim comfortably at multiple depths of water. Security professionals can get comfortable with the information they track, but hackers are swimming at multiple depths. Look for ways to gather and analyze new types of data to help detect malicious activities. Tracking the movement of and interaction with confidential email and documents is one way to gain insight into behavior across an organization. This and other behavior analytics can alert administrators to suspicious activity when an attack is in progress or before it really begins.
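
As a toy illustration of that kind of behavioral baseline, an administrator might flag any user whose daily count of confidential-document opens jumps far above their own historical norm. The log structure, figures and threshold below are hypothetical.

```python
# Toy behavior-analytics check: flag users whose document-access count today
# is more than three standard deviations above their historical daily mean.
# The access-log data and the threshold are hypothetical illustrations.
from statistics import mean, stdev

history = {                        # user -> daily access counts over past days
    "alice": [12, 9, 14, 11, 10, 13],
    "bob":   [3, 4, 2, 5, 3, 4],
}
today = {"alice": 15, "bob": 41}   # today's counts from the audit log

for user, counts in history.items():
    mu, sigma = mean(counts), stdev(counts)
    if today.get(user, 0) > mu + 3 * sigma:
        print(f"ALERT: {user} opened {today[user]} confidential documents "
              f"(baseline {mu:.1f} +/- {sigma:.1f})")
```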

Layers of Personalities: Recent studies have indicated that sharks can have distinct personalities. Good fishermen know this. They ensure their bait and tackle is ready; they know which type of bait will lure different fish or sharks; they understand the strength of their lines and tackle. Enterprises also need to be prepared to protect their employees and information, especially as corporate data is increasingly accessed by remote employees and contractors on mobile devices. It’s virtually impossible for an enterprise to oversee the security and usage of every access point into the enterprise, and breaches can happen when individual files are viewed or shared. Adopting a layered security approach that considers different entry points and scenarios provides broad protection for the organization. While preventing attacks is the best option, be prepared to detect and respond to possible attacks that your prevention systems might miss. If a hacker gains access to critical internal systems, is the organization prepared? Is data secure and access restricted within the corporate network?

IT professionals navigate a sea of potential threats, and they never know when a shark may be lurking just out of sight. The ideas presented here will help enterprises prepare for the hackers (sharks) that may be swimming in their part of the Internet.


About the Author


Erik Brown joined GigaTrust in 2017 as Chief Technology Officer where he is responsible for the IT, engineering, and customer service functions.  He has over 25 years’ experience working with new and emerging technologies, most recently with mobile development. Erik’s career includes technology positions in successful start-ups and Fortune 500 companies. He has worked as a developer, architect, and leader in mobile development, digital imaging, Internet search, and healthcare. He also brings his experience with patent development, and as a technical author and conference speaker to the company.

Prior to joining GigaTrust, Erik served as an Associate Vice President, Innovation and Delivery Services in Molina Healthcare’s IT department where he oversaw a team of 40 people focused on improving and standardizing the use of new technology. He spearheaded the development and deployment of Molina’s first mobile application for home-based assessments, and created an internal Incubator program for identifying and funding new ideas within the IT department. Erik also worked as Program Manager and Architect in Unisys Corporation’s Federal Systems group as well as at several successful start-up companies, including Transarc Corporation (purchased by IBM in 1994) and PictureVision, Inc. (purchased by Eastman Kodak in 2000).

Erik is the author of two well-received books on Windows Forms programming, and has spoken at numerous conferences including the 2014 mHealth Summit. He is a graduate of the Society for Information Management’s Regional Leadership Forum, and is a certified project manager and scrum master (PMP, PMI-RMP, CSM, and ITIL). Erik holds a BS and MS degree in Mathematics from Carnegie-Mellon University.