Free SANS Cyber Security Summits: Sign up now, learn online, keep your network safe

The content below is taken from the original ( Free SANS Cyber Security Summits: Sign up now, learn online, keep your network safe), to continue reading please visit the site. Remember to respect the Author & Copyright.

Sometimes you need to lift yourself out of the cybersec trenches and look up to the summit

Promo Keeping your organization safe from cybercriminals and other ne’er-do-wells requires constant honing and refining of your own skills and knowledge…

13 best practices for user account, authentication, and password management, 2021 edition

The content below is taken from the original ( 13 best practices for user account, authentication, and password management, 2021 edition), to continue reading please visit the site. Remember to respect the Author & Copyright.

Updated for 2021: This post includes updated best practices including the latest from Google’s Best Practices for Password Management whitepapers for both users and system designers.

Account management, authentication and password management can be tricky. Account management is often a dark corner that isn’t a top priority for developers or product managers, and the resulting experience falls short of what some of your users would expect for data security and user experience.

Fortunately, Google Cloud brings several tools to help you make good decisions around the creation, secure handling and authentication of user accounts (in this context, anyone who identifies themselves to your system—customers or internal users). Whether you’re responsible for a website hosted in Google Kubernetes Engine, an API on Apigee, an app using Firebase, or another service with authenticated users, this post lays out the best practices to follow to ensure you have a safe, scalable, usable account authentication system.

1. Hash those passwords

My most important rule for account management is to safely store sensitive user information, including their password. You must treat this data as sacred and handle it appropriately.

Do not store plaintext passwords under any circumstances. Your service should instead store a cryptographically strong hash of the password that cannot be reversed, created with Argon2id or scrypt. The hash should be salted with a value unique to that specific login credential. Do not use deprecated hashing technologies such as MD5 or SHA1, and under no circumstances should you use reversible encryption or try to invent your own hashing algorithm. Use a pepper that is not stored in the database to further protect the data in case of a breach. Consider the advantages of iteratively re-hashing the password multiple times.
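To make rule 1 concrete, here is a minimal sketch using Python’s standard-library scrypt (Argon2id via a package such as argon2-cffi works just as well). The cost parameters and the environment-variable pepper are illustrative assumptions, not tuning advice:

```python
import hashlib
import hmac
import os

PEPPER = os.environ["PASSWORD_PEPPER"]  # secret kept outside the database

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique per login credential
    digest = hashlib.scrypt((password + PEPPER).encode("utf-8"),
                            salt=salt, n=2**14, r=8, p=1)
    return salt, digest  # store both; the password itself is never stored

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    candidate = hashlib.scrypt((password + PEPPER).encode("utf-8"),
                               salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, expected)  # constant-time compare
```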

Design your system assuming it will be compromised eventually. Ask yourself “If my database were exfiltrated today, would my users’ safety and security be in peril on my service or other services they use?” As well as “What can we do to mitigate the potential for damage in the event of a leak?”

Another point: if your system can produce a user’s password in plaintext at any time other than immediately after the user provides it to you, there’s a problem with your implementation.

If your system requires detection of near-duplicate passwords, such as changing “Password” to “pAssword1”, save the hashes of common variants you wish to ban with all letters normalized and converted to lowercase. This can be done when a password is created or upon successful login for pre-existing accounts. When the user creates a new password, generate the same type of variants and compare the hashes to those from the previous passwords. Use the same level of hashing security as with the actual password. 
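As a sketch of that variant check, reusing the scrypt parameters and per-credential salt from the example above (the variant rules themselves are illustrative, not exhaustive):

```python
import hashlib
import hmac

def common_variants(password: str) -> set[str]:
    base = password.lower()  # normalize case, as described above
    # Illustrative rules only; e.g. "pAssword1" reduces to "password".
    return {base, base.strip(), base.rstrip("0123456789")}

def is_near_duplicate(new_password: str, salt: bytes,
                      banned_variant_hashes: list[bytes]) -> bool:
    for variant in common_variants(new_password):
        candidate = hashlib.scrypt(variant.encode("utf-8"),
                                   salt=salt, n=2**14, r=8, p=1)
        if any(hmac.compare_digest(candidate, h) for h in banned_variant_hashes):
            return True
    return False
```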

2. Allow for third-party identity providers if possible

Third-party identity providers enable you to rely on a trusted external service to authenticate a user’s identity. Google, Facebook, and Twitter are commonly used providers.

You can implement external identity providers alongside your existing internal authentication system using a platform such as Identity Platform. There are a number of benefits that come with Identity Platform, including simpler administration, a smaller attack surface, and a multi-platform SDK. We’ll touch on more benefits throughout this list.

3. Separate the concept of user identity and user account

Your users are not an email address. They’re not a phone number. They’re not even a unique username. Any of these authentication factors should be mutable without changing the content or personally identifiable information (PII) in the account. Your users are the multi-dimensional culmination of their unique, personalized data and experience within your service, not the sum of their credentials. A well-designed user management system has low coupling and high cohesion between different parts of a user’s profile.

Keeping the concepts of user account and credentials separate will greatly simplify the process of implementing third-party identity providers, allowing users to change their username, and linking multiple identities to a single user account. In practical terms, it may be helpful to have an abstract internal global identifier for every user and associate their profile and one or more sets of authentication data via that ID, as opposed to piling it all in a single record.

4. Allow multiple identities to link to a single user account

A user who authenticates to your service using their username and password one week might choose Google Sign-In the next without understanding that this could create a duplicate account. Similarly, a user may have very good reason to link multiple email addresses to your service. If you’ve properly separated user identity and authentication, it will be a simple process to link several authentication methods to a single user.

Your backend will need to account for the possibility that a user gets part or all the way through the signup process before they realize they’re using a new third-party identity not linked to their existing account in your system. This is most simply achieved by asking the user to provide a common identifying detail, such as email address, phone, or username. If that data matches an existing user in your system, require them to also authenticate with a known identity provider and link the new ID to their existing account.

5. Don’t block long or complex passwords

NIST publishes guidelines on password complexity and strength. Since you are (or will be very soon) using a strong cryptographic hash for password storage, a lot of problems are solved for you. Hashes will always produce a fixed-length output no matter the input length, so your users should be able to use passwords as long as they like. If you must cap password length, do so based on the limits of your infrastructure; often this is a matter of memory usage (memory used per login operation × potential concurrent logins per machine) or, more likely, the maximum POST size your servers allow. We’re talking numbers from hundreds of KB to over 1 MB. Seriously. Your application should already be hardened to prevent abuse from large inputs. This doesn’t create new opportunities for abuse if you employ controls to prevent credential stuffing and hash the input as soon as possible to free up memory.

Your hashed passwords will likely already consist of a small set of ASCII characters. If not, you can easily convert a binary hash to Base64. With that in mind, you should allow your users to use literally any characters they wish in their password. If someone wants a password made of Klingon, Emoji, and ASCII art with whitespace on both ends, you should have no technical reason to deny them. Just make sure to perform Unicode normalization to ensure cross-platform compatibility. See our system designers whitepaper (PDF) for more information on Unicode and supported characters in passwords.
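A sketch of that normalization step with Python’s standard library (NFKC is one reasonable choice of normalization form):

```python
import unicodedata

def prepare_password(raw: str) -> bytes:
    # Normalize first so the same password typed on different platforms
    # and keyboards produces identical bytes before hashing.
    return unicodedata.normalize("NFKC", raw).encode("utf-8")
```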

Any user attempting to use an extreme password is probably following password best practices (PDF), including using a password manager, which allows the entry of complex passwords even on limited mobile device keyboards. If a user can input the string in the first place (the HTML specification for password input already disallows line feeds and carriage returns), the password should be acceptable.

6. Don’t impose unreasonable rules for usernames

It’s not unreasonable for a site or service to require usernames longer than two or three characters, block hidden characters, and prevent whitespace at the beginning and end of a username. However, some sites go overboard with requirements such as a minimum length of eight characters or by blocking any characters outside of 7-bit ASCII letters and numbers.

A site with tight restrictions on usernames may offer some shortcuts to developers, but it does so at the expense of users, and extreme restrictions will deter some of them.

There are some cases where the best approach is to assign usernames. If that’s the case for your service, ensure the assigned username is user-friendly, since users will need to recall and communicate it. Alphanumeric generated IDs should avoid visually ambiguous symbols such as “Il1O0.” You’re also advised to perform a dictionary scan on any randomly generated string to ensure there are no unintended messages embedded in the username. These same guidelines apply to auto-generated passwords.
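A sketch of such a generator; the excluded character set and the dictionary-scan stub are illustrative assumptions:

```python
import secrets
import string

# Exclude the visually ambiguous "Il1O0" (lowercase letters aren't used here,
# so only I, 1, O, and 0 need filtering).
ALPHABET = "".join(c for c in string.ascii_uppercase + string.digits
                   if c not in "I1O0")

def contains_banned_word(candidate: str) -> bool:
    # Stub: scan the candidate against your own dictionary/profanity list.
    return False

def generate_username(length: int = 8) -> str:
    while True:
        candidate = "".join(secrets.choice(ALPHABET) for _ in range(length))
        if not contains_banned_word(candidate):
            return candidate
```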

7. Validate the user’s identity

If you ask a user for contact information, you should validate that contact as soon as possible. Send a validation code or link to the email address or phone number. Otherwise, users may make a typo in their contact info and then spend considerable time using your service only to find there is no account matching their info the next time they attempt login. These accounts are often orphaned and unrecoverable without manual intervention. Worse still, the contact info may belong to someone else, handing full control of the account to a third party.
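A minimal sketch of issuing and checking such a validation code; the six-digit format and 15-minute window are illustrative choices, and a real system should also make codes single-use:

```python
import hmac
import secrets
import time

CODE_TTL_SECONDS = 15 * 60  # illustrative validity window

def issue_verification_code() -> tuple[str, float]:
    code = f"{secrets.randbelow(10**6):06d}"  # e.g. "042917"
    return code, time.time() + CODE_TTL_SECONDS

def code_is_valid(submitted: str, code: str, expires_at: float) -> bool:
    # Constant-time comparison, and reject codes past their expiry.
    return time.time() < expires_at and hmac.compare_digest(submitted, code)
```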

8. Allow users to change their username

It’s surprisingly common for legacy systems, and for any platform that provides email accounts, not to allow users to change their username. There are very good reasons not to automatically release usernames for reuse, but long-term users of your system will eventually come up with significant reasons to use a different username, and they likely won’t want to create a new account.

You can honor your users’ desire to change their usernames by allowing aliases and letting your users choose the primary alias. You can apply any business rules you need on top of this functionality. Some orgs might limit the number of username changes per year or prevent a user from displaying or being contacted via anything but their primary username. Email address providers are advised to never re-issue email addresses, but they could alias an old email address to a new one. A progressive email address provider might even allow users to bring their own domain name and have any address they wish.

If you are working with a legacy architecture, this best practice can be very difficult to meet. Even companies like Google have technical hurdles that make this more difficult than it would seem. When designing new systems, make every effort to separate the concepts of user identity and user account and to allow multiple identities to link to a single user account; this will make the problem much smaller. Whether you are working on existing or greenfield code, choose the right rules for your organization with an emphasis on allowing your users to grow and change over time.

9. Let your users delete their accounts

A surprising number of services have no self-service means for a user to delete their account and associated PII. Depending on the nature of your service, this may or may not include public content they created such as posts and uploads. There are a number of good reasons for a user to close an account permanently and delete all their PII. These concerns need to be balanced against your user experience, security, and compliance needs. Many if not most systems operate under some sort of regulatory control (such as PCI or GDPR), which provides specific guidelines on data retention for at least some user data. A common solution to avoid compliance concerns and limit data breach potential is to let users schedule their account for automatic future deletion.

In some circumstances, you may be legally required to comply with a user’s request to delete their PII in a timely manner. You also greatly increase your exposure in the event of a data breach where the data from “closed” accounts is leaked.

10. Make a conscious decision on session length

An often overlooked aspect of security and authentication is session length. Google puts a lot of effort into ensuring users are who they say they are and will double-check based on certain events or behaviors. Users can take steps to increase their security even further.

Your service may have good reason to keep a session open indefinitely for non-critical analytics purposes, but there should be thresholds after which you ask for a password, a second factor, or other user verification.

Consider how long a user should be able to be inactive before re-authenticating. Verify user identity in all active sessions if someone performs a password reset. Prompt for authentication or a second factor if a user changes core aspects of their profile or performs a sensitive action. Re-authenticate if the user’s location changes significantly in a short period of time. Consider whether it makes sense to disallow logging in from more than one device or location at a time.
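As a sketch of those thresholds, assuming you track a last-authenticated timestamp per session (the window lengths are illustrative, not recommendations):

```python
import time

IDLE_REAUTH_SECONDS = 30 * 60       # ordinary idle threshold
SENSITIVE_REAUTH_SECONDS = 5 * 60   # stricter window for sensitive actions

def requires_reauth(last_authenticated_at: float, sensitive_action: bool) -> bool:
    age = time.time() - last_authenticated_at
    limit = SENSITIVE_REAUTH_SECONDS if sensitive_action else IDLE_REAUTH_SECONDS
    return age > limit
```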

When your service does expire a user session or requires re-authentication, prompt the user in real time or provide a mechanism to preserve any activity they have not saved since they were last authenticated. It’s very frustrating for a user to take a long time to fill out a form, only to find all their input has been lost and they must log in again.

11. Use 2-Step Verification

Consider the practical impact on a user of having their account stolen when choosing 2-Step Verification (also known as two-factor authentication, MFA, or 2FA) methods. Time-based one-time passwords (TOTP), email verification codes, or “magic links” are consumer-friendly and relatively secure. SMS 2FA has been deprecated by NIST due to multiple weaknesses, but it may be the most secure option your users will accept for what they consider a trivial service.

Offer the most secure 2FA you reasonably can. Hardware keys such as the Titan Security Key are ideal if feasible for your application. Even if a TOTP library is unavailable for your application, email verification or 2FA provided by third-party identity providers is a simple means to boost your security without great expense or effort. Just remember that your user accounts are only as secure as the weakest 2FA or account recovery method.
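A minimal TOTP sketch using the third-party pyotp package (an assumption; any RFC 6238 implementation follows the same enroll-then-verify pattern):

```python
import pyotp

def enroll_user() -> tuple[str, str]:
    secret = pyotp.random_base32()  # store server-side, one per user
    uri = pyotp.TOTP(secret).provisioning_uri(
        name="user@example.com", issuer_name="ExampleService")
    return secret, uri  # render the URI as a QR code for the user's app

def check_totp(user_secret: str, submitted_code: str) -> bool:
    # Verifies the code against the current 30-second time step.
    return pyotp.TOTP(user_secret).verify(submitted_code)
```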

12. Make user IDs case-insensitive

Your users don’t care and may not even remember the exact case of their username. Usernames should be fully case-insensitive. It’s trivial to store usernames and email addresses in all lowercase and transform any input to lowercase before comparing. Make sure to specify a locale or employ Unicode normalization on any transformations.
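A sketch of that canonicalization; casefold() is Unicode-aware lowercasing, and NFKC is one reasonable normalization form:

```python
import unicodedata

def canonicalize_username(raw: str) -> str:
    # Store and compare this canonical form; display whatever the user typed.
    return unicodedata.normalize("NFKC", raw).strip().casefold()

assert canonicalize_username("  J.Doe ") == canonicalize_username("j.doe")
```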

Smartphones represent an ever-increasing percentage of user devices. Most of them offer autocorrect and automatic capitalization of plain-text fields. Preventing this behavior at the UI level might not be desirable or completely effective, and your service should be robust enough to handle an email address or username that was unintentionally auto-capitalized.

13. Build a secure auth system

If you’re using a service like Identity Platform, a lot of security concerns are handled for you automatically. However, your service will always need to be engineered properly to prevent abuse. Core considerations include implementing a password reset instead of password retrieval, detailed account activity logging, rate-limiting login attempts to prevent credential stuffing, locking out accounts after too many unsuccessful login attempts, and requiring two-factor authentication for unrecognized devices or accounts that have been idle for extended periods. There are many more aspects to a secure authentication system, so please see the further reading section below for links to more information. 
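As a toy illustration of the rate-limiting and lockout controls mentioned above (a production system would use a shared store such as Redis and track source IPs as well as accounts):

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300  # illustrative values
MAX_ATTEMPTS = 5

_attempts: dict[str, deque] = defaultdict(deque)

def allow_login_attempt(account_id: str) -> bool:
    now = time.time()
    recent = _attempts[account_id]
    while recent and now - recent[0] > WINDOW_SECONDS:
        recent.popleft()  # drop attempts outside the window
    if len(recent) >= MAX_ATTEMPTS:
        return False  # lock out: require 2FA, CAPTCHA, or a cooldown instead
    recent.append(now)
    return True
```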

Further reading

There are a number of excellent resources available to guide you through the process of developing, updating, or migrating your account and authentication management system. I recommend the following as a starting place:

Related Article

Cybersecurity Awareness Month—New security announcements for Google Cloud

Today’s announcements include new security features, whitepapers that explore our encryption capabilities, and use-case demos to help dep…

Read Article

An Arduino With A Floppy Drive

The content below is taken from the original ( An Arduino With A Floppy Drive), to continue reading please visit the site. Remember to respect the Author & Copyright.

For many of us the passing of the floppy disk is unlamented, but there remains a corps of experimenters for whom the classic removable storage format still holds some fascination. The interface for a floppy drive might have required some complexity back in the days of 8-bit microcomputers, but even for today’s less accomplished microcontrollers it’s a surprisingly straightforward hardware prospect. [David Hansel] shows us this in style, with a floppy interface, software library, and even a rudimentary DOS, for the humble Arduino Uno.

The library provides functions for low-level work with floppy disks, reading them sector by sector. In addition, it incorporates the FatFS library for MS-DOS FAT file-level access, and finally the ArduDOS environment, which allows browsing of files on a floppy. The pictures show a 3.5″ drive, but it also supports 5.25″ units and both DD and HD drives. We can see that it will be extremely useful to anyone working with retrocomputer software who is trying to retrieve old disks, and we look forward to seeing it incorporated in some retrocomputer projects.

Of course, Arduino owners needn’t have all the fun when it comes to floppy disks, the Raspberry Pi gets a look-in too.

PiTools released by R-Comp Interactive

The content below is taken from the original ( PiTools released by R-Comp Interactive), to continue reading please visit the site. Remember to respect the Author & Copyright.

With the launch of their 4té computer in the latter part of last year, R-Comp developed a set of tools to run on the system, neatly wrapped up in an… Read more »

Retire your tech debt: Move vSphere 5.5+ to Google Cloud VMware Engine

The content below is taken from the original ( Retire your tech debt: Move vSphere 5.5+ to Google Cloud VMware Engine), to continue reading please visit the site. Remember to respect the Author & Copyright.

It can happen so easily. You get a little behind on your payments. Then you start falling farther and farther behind until it becomes almost impossible to dig yourself out of debt. Tech debt, that is. 

IT incurs a lot of tech debt when it comes to keeping up infrastructure; most IT departments are already running as lean as they possibly can. Many VMware shops are in a particularly tough spot, especially if they’re still running on vSphere 5.5. If that describes you, it’s time to ask yourself how you intend to get out of this tech debt. General support for vSphere 5.5 ended back in September 2018, and technical guidance one year later. General support for 6.0 ended in March 2020, support for 6.5 ends November 15 of this year, and even the end of general support for vSphere 6.7 is only a couple of years away (November 2022)! If you’re still running vSphere 5.5, moving to vSphere 7.0 is the right thing to do.

But doing so is hard if you’ve fallen into a deep tech-debt hole. Traditionally, it means moving all your outdated vSphere systems through all the interim releases until you’ve migrated all your systems to the latest version. That involves upgrading hardware, software, and licenses, as well as all the additional work that goes along with the upgrades. Then, as soon as you’re done, the next upgrade cycle is already upon you. Making the task even more daunting, VMware HCX, the company’s application mobility service, will also stop supporting 5.5 soon, complicating migration further.

If this paints an unsightly picture, don’t despair. You have the opportunity, right now, to easily retire your technical debt and be debt-free from here on out by migrating to Google Cloud VMware Engine. And you can migrate before you have to upgrade to the next vSphere release just to get migration support. Not only will you still be able to migrate to vSphere 7 using HCX, but even better, you don’t have to do the digging yourself.

The cloud breaks the cycle of debt

If the effort and resources required to move were too steep a price before, migration is now a viable option with Google Cloud VMware Engine. With cloud-based infrastructure, you can not only migrate to the latest release of vSphere, but also take your workload—lock, stock, and barrel—out of your data center and put it into Google Cloud. Moving to Google Cloud VMware Engine makes the migration task fast and simple. Never again will you have to deal with spreadsheets to track how many watts of cooling you need for your data center, buy additional equipment, or manage upgrades.

Migrating to the cloud is also the first step toward getting out of the business of managing your data center and into embracing an OpEx subscription model. And you can begin moving workloads to the cloud in increments, without having to worry about all the nuances — it’s all done for you.

Work in a familiar environment and expand your toolset

One of the biggest benefits of Google Cloud VMware Engine is that it offers the same, familiar VMware experience you have now. All the applications running on vSphere 5.5 can immediately run on a private cloud in Google Cloud VMware Engine with no changes. You’ll now be running on the latest release of vSphere 7, and when VMware releases patches, updates, and upgrades, Google Cloud keeps the infrastructure up to date for you. And as a VMware administrator, you can use the same tools that you’re familiar with on-premises.

Migration doesn’t have to be a long, arduous process

Google Cloud VMware Engine allows you to leverage your existing virtualized infrastructure to make migration fast and easy. Use familiar VMware tools to migrate your on-premises vSphere applications to vSphere in your own private cloud while maintaining continuity with all your existing tools, policies, and processes. It takes only a few clicks (see our demo video). Make sure you have your prerequisites, enable the Google Cloud VMware Engine API, and follow these 10 steps:

  1. Enable the VMware Engine node quota and assign at least three nodes to create your private cloud.

  2. Set your roles and permissions.

  3. Access the Google Cloud VMware Engine portal.

  4. Click ‘Create a private cloud’. This is fast — only about 30 minutes.

  5. Select the number of nodes (a minimum of three).

  6. Enter a CIDR range for the VMware management network.

  7. Enter a CIDR range for the HCX deployment network.

  8. Review your settings.

  9. Click Create.

  10. Connect an on-prem network to your VMware Engine private cloud or connect using a point-to-site VPN connection. Google Cloud VMware Engine supports multi-region networking with VPC global routing, which allows VPC subnets to be deployed in any region worldwide, greatly simplifying networking.

When you use VMware HCX to migrate VMs from your on-premises environment to Google Cloud VMware Engine, VMware HCX abstracts vSphere resources running both on-prem and in the cloud and presents them to applications as one continuous resource to create a hybrid infrastructure.

By partnering with Google Cloud, you can erase your tech debt and get out of the time-consuming, resource-draining business of data center management. Then, once your VMware-based workloads are running on Google Cloud VMware Engine, you can start modernizing your applications with Google Cloud services, including AI/ML, low-cost storage, and disaster recovery solutions. Check out the variety of pricing options for the service, from pre-pay with discounts up to 50% to pay-as-you-go and annual commitments.

Related Article

Zero-footprint DR solution with Google Cloud VMware Engine and Actifio

Learn how to use Actifio data management software plus Google Cloud VMware Engine to create a dynamic, low-cost DR site in the cloud.

Read Article

Commodore 64 Emulator in VR Delivers a Full 80s Experience

The content below is taken from the original ( Commodore 64 Emulator in VR Delivers a Full 80s Experience), to continue reading please visit the site. Remember to respect the Author & Copyright.

The simulated color CRT monitor looks surprisingly convincing in VR.

One way to play with vintage hardware without owning the hardware is to use an emulator, but [omni_shaNker] announced taking it to the next level by using VR to deliver a complete Commodore 64 system in its full glory, right down to a native 80s habitat playset! This is a pretty interesting angle for simulating vintage hardware, especially since the emulator is paired with what looks like a pretty convincing CRT monitor effect in VR, not to mention a virtual 5.25″ floppy drive that makes compellingly authentic sounds.

The project is hosted on GitHub and supports a variety of VR hardware, but for owners of Oculus headsets, the application is also available on SideQuest for maximum convenience. SideQuest is essentially an off-the-books app store for managing software that is neither approved nor distributed by Facebook. Oculus is owned by Facebook, and Facebook is keen to keep a tight grip on their hardware.

As functional as the application is, there are still improvements and optimizations to be made. To address this, [omni_shaNker] put out a call for beta testers on Reddit, so if that’s up your alley be sure to get in touch. A video demonstration and overview that is chock-full of technical details is also embedded below; be sure to give it a watch to see what the project is all about.

Absolutely everything you need to go bikepacking: the complete guide to what you need to take

The content below is taken from the original ( Absolutely everything you need to go bikepacking: the complete guide to what you need to take), to continue reading please visit the site. Remember to respect the Author & Copyright.

Image: Cycling Weekly

Bikepacking is a wonderful way to spend a holiday or weekend. You get to know the area far more intimately than just staying in some accommodation or camping. Even travelling somewhere fairly local, you’ll experience a very different side of it on a bike than you ever would otherwise. Like with any holiday or bike […]

RISC OS Awards poll open for votes

The content below is taken from the original ( RISC OS Awards poll open for votes), to continue reading please visit the site. Remember to respect the Author & Copyright.

The annual RISC OS Awards poll, undertaken by RISCOSitory, is now open for votes. The voting form went live a little over a week ago, and should remain available until… Read more »

Major BGP leak disrupts thousands of networks globally

The content below is taken from the original ( Major BGP leak disrupts thousands of networks globally), to continue reading please visit the site. Remember to respect the Author & Copyright.

A large BGP routing leak that occurred last night disrupted the connectivity for thousands of major networks and websites around the world. Although the BGP routing leak occurred in Vodafone’s autonomous network (AS55410) based in India, it has impacted U.S. companies, including Google, according to sources. […]

Homebrew RISC-V Computer Has Beauty and Brains

The content below is taken from the original ( Homebrew RISC-V Computer Has Beauty and Brains), to continue reading please visit the site. Remember to respect the Author & Copyright.

Building your own CPU is arguably the best way to truly wrap your head around how all those ones and zeros get flung around inside of a computer, but as you can probably imagine even a relatively simple processor takes an incredible amount of time and patience to put together. Plus, more often than not you’re then left with a maze of wires and perfboards that takes up half your desk and doesn’t do a whole lot more than blink some LEDs.

An early prototype of the Pineapple ONE.

But the Pineapple ONE, built by [Filip Szkandera] isn’t your average homebrew computer. Oh sure, it still took two years for him to design, debug, and assemble his 32-bit RISC-V CPU and all its associated hardware; but the end result is a gorgeous looking machine that runs C programs and offers a basic interactive shell over VGA. In fact with its slick 3D printed enclosure, vertically stacked construction, and modular peripheral connections, it looks more like some kind of high-tech scientific instrument than a computer; homebrew or otherwise.

[Filip] says he was inspired to build this 500 kHz (yes, kilohertz) beauty using only discrete logic components by [Ben Eater]’s well known 8-bit breadboard computer and [David Baruch]’s LMARV-1 (Learn Me A RISC-V, version 1). He spent six months simulating the machine before he even started creating the schematics, let alone designing the individual boards. He tried to keep all of his PCBs under 100 x 100 mm to take advantage of discounts from the fabricator, which ultimately led to the decision to align the nine boards vertically and connect them together with pin headers.

In the video below you can see [Filip] start up the computer, call up a bit of system information, and even play a rudimentary game of snake before peeking and poking some of the machine’s 512 kB of RAM. It sounds like there’s still some work to be done and bugs to squash, but we’ve already seen enough to say this machine has more than earned entry into the pantheon of master-crafted homebrew computers.

How to deploy Certificate Authority Service

The content below is taken from the original ( How to deploy Certificate Authority Service), to continue reading please visit the site. Remember to respect the Author & Copyright.

Certificate Authority Service (CAS) is a highly available, scalable Google Cloud service that enables you to simplify, automate, and customize the deployment, management, and security of private certificate authorities (CA). As it nears general availability, we want to provide guidance on how to deploy the service in real world scenarios. Today we’re releasing a whitepaper about CAS that explains exactly that. And if you want to learn more about how and why CAS was built, we have a paper on that too.

“How to deploy a secure and reliable public key infrastructure with Google Cloud Certificate Authority Service” (written by Mark Cooper of PKI Solutions and Anoosh Saboori of Google Cloud) covers security and architectural recommendations for the use of the Google Cloud CAS by organizations, and describes critical concepts for securing and deploying a PKI based on CAS.

How a public key infrastructure (PKI) issues certificates depends largely on the environment in which the PKI-issued certificates will be used. For common internet-facing services, such as a website or host whose visitors are largely unknown to the host, a certificate that the visitor trusts is required to ensure seamless validation of the host. If a visitor’s browser hasn’t been configured to trust the PKI from which the certificate was issued, an error will occur. To facilitate this process, publicly trusted certificate authorities issue certificates that can be broadly trusted throughout the world. However, their structure, identity requirements, certificate restrictions, and certificate cost make them ineffective for certificate needs within an organizational or private ecosystem, such as the internet of things (IoT) or DevOps.

Organizations that have a need for internally trusted certificates and little to no need for externally trusted certificates can have more flexibility, control, and security in their certificates without a per-certificate charge from commercial providers. 

A private PKI can be configured to issue the certificates an organization needs for a wide range of use cases, and can be configured to do so on a large scale, automated basis. Additionally, an organization can be assured that externally issued certificates cannot be used to access or connect to organizational resources.

The Google Cloud Certificate Authority Service (CAS) allows organizations to establish, secure and operate their own private PKI. Certificates issued by CAS will be trusted only by the devices and services an organization configures to trust the PKI.

Here are our favorite quotes from the paper:

  • “CAS enables organizations to flexibly expand, integrate or establish a PKI for their needs. CAS can be used to establish and operate as an organization’s entire PKI or can be used to act as one or more CA components in the PKI along with on-premises or other CAs.”

  • “There are several architectures that could be implemented to achieve goals within your organization and your PKI: Cloud root CA, cloud issuing CA and others.”

  • “Providing a dispersed and highly available PKI for your organization can be greatly simplified through CAS Regionalization. When deploying your CA, you can easily specify the location of your CA.”

  • “CAS provides two operational service tiers for a CA – DevOps and Enterprise. These two tiers provide organizations with a balance of performance and security based on operational requirements.”


Related Article

New whitepaper: Scaling certificate management with Certificate Authority Service

This whitepaper explains how organizations can more easily manage devices with Google Cloud’s Certificate Authority Service

Read Article

Remove ALL Saved Passwords at once in Chrome, Firefox and Edge browser

The content below is taken from the original ( Remove ALL Saved Passwords at once in Chrome, Firefox and Edge browser), to continue reading please visit the site. Remember to respect the Author & Copyright.

Whenever you visit a website that requires you to sign in, Firefox, Edge, and Chrome offer to save your passwords so you can save time on frequently accessed sites. This helps you avoid going through an unnecessary sign-in process. However, saved passwords can sometimes disclose your hidden information when someone else uses your computer. To […]

This article Remove ALL Saved Passwords at once in Chrome, Firefox and Edge browser first appeared on TheWindowsClub.com.

What do different WD Hard Drive colors mean?

The content below is taken from the original ( What do different WD Hard Drive colors mean?), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you have ever shopped for Western Digital Hard Disk Drives (WD HDD), you have seen that they are available in different color codes. This might create confusion in your mind about which type of WD HDD you have to buy. The different color WD HDDs are manufactured for different purposes. In this article, we […]

This article What do different WD Hard Drive colors mean? first appeared on TheWindowsClub.com.

Free AI and machine learning training for fraud detection, chatbots, and more

The content below is taken from the original ( Free AI and machine learning training for fraud detection, chatbots, and more), to continue reading please visit the site. Remember to respect the Author & Copyright.

Google Cloud is offering no-cost training opportunities to help you gain the latest AI and machine learning skills. You’ll have a chance to learn more about the new Document AI along with Explainable AI, Looker, BigQuery ML, and Dialogflow CX. 

Document AI

The new Document AI (DocAI) platform, a unified console for document processing, became available for preview in November. Join me on April 22 to learn how to set up the Document AI Platform, process sample documents in an AI Platform notebook, and use the Procurement DocAI solution to intelligently process your unstructured data or “dark data” such as PDFs, images and handwritten forms to reduce the manual overhead of your document workflows. Save your spot to learn about DocAI here. Can’t join on April 22? The training will be available on-demand after April 22. 

Explainable AI

Join Lak Lakshmanan, Google Cloud’s Director of Data Analytics and AI Solutions, on April 16 to explore Explainable AI, a set of tools and frameworks to help you understand and interpret predictions made by your machine learning models. Lak will go through a hands-on lab to teach you several canonical methods to explain predictions and illustrate how the Google Cloud AI Platform serves predictions and explanations. Reserve your seat here to learn about Explainable AI. Can’t join on April 16? The training will be available on-demand after April 16.

Looker

Looker is a modern business intelligence (BI) and analytics platform that is now a part of Google Cloud. If you have no machine learning experience, we recommend you check out our technical deep dive to learn how to use Looker to automate the data pipeline building process and generate deeper data insights. Sign up here to get access.

BigQuery ML

BigQuery ML lets you create and execute machine learning models in BigQuery using standard SQL queries. 
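As a hedged sketch of what that looks like from Python with the google-cloud-bigquery client (the dataset, table, and column names below are placeholders):

```python
from google.cloud import bigquery

client = bigquery.Client()
client.query("""
    CREATE OR REPLACE MODEL `my_dataset.fraud_model`
    OPTIONS (model_type = 'logistic_reg', input_label_cols = ['is_fraud']) AS
    SELECT amount, merchant_category, is_fraud
    FROM `my_dataset.transactions`
""").result()  # .result() blocks until the training job finishes
```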

In our “Real-time credit card fraud detection” webinar, you’ll learn how to build an end-to-end solution for real-time fraud detection. You’ll discover how trained models in BigQuery ML, predictions from Google Cloud’s AI Platform, streaming pipelines in Dataflow, notifications on Pub/Sub, and operational management in Data Studio can all come together to identify and fight credit card fraud. Register here to watch the webinar.

To find out how to build demand forecasting models with BigQuery ML, sign up here. In this webinar, a Google Cloud expert will walk through how to train, evaluate and forecast inventory demand on retail sales data. He’ll also demonstrate how to schedule model retraining on a regular basis so that your forecast models can stay up-to-date.

Dialogflow CX

Understand the ins and outs of Dialogflow CX, lifelike conversational AI with virtual agents (chat and voice bots), when you register here. This webinar shows you the newest ways to build intelligent virtual agents. A Google Cloud expert will demonstrate how to get these agents ready for production and improve customer experience using analytics. She’ll also share best practices for deploying prebuilt agents.

We hope these training resources help you grow your AI and machine learning knowledge. Stay tuned for new learning opportunities throughout the year.

Related Article

Accelerate data science workflows with Looker, plus free training to help get started

Learn how data analysts and scientists can use Looker to help with data governance, join our training to walk through working examples of…

Read Article

Manage Microsoft’s 90-day license assignment rules with AWS License Manager

The content below is taken from the original ( Manage Microsoft’s 90-day license assignment rules with AWS License Manager), to continue reading please visit the site. Remember to respect the Author & Copyright.

AWS License Manager makes it easier to manage your software licenses across AWS and on-premises environments. AWS License Manager lets administrators create customized licensing rules that emulate the terms of their licensing agreements, apply these rules to keep track of licenses used, and control whether an Amazon Elastic Compute Cloud (Amazon EC2) instance should be launched.

In the case of Microsoft workloads such as Windows and SQL Server, you can use the simplified bring-your-own-license (BYOL) opportunities in AWS License Manager to effectively govern and automate the management of software licenses that require dedicated physical hardware.

In a BYOL scenario where licenses are not covered by Software Assurance with mobility rights, Microsoft requires customers to manage the 90-day license reassignment restriction, under which you cannot reassign a license to a new device unless 90 days have passed from the initial assignment. To help customers simplify managing Microsoft’s 90-day license reassignment restriction, AWS License Manager has added the ability to enforce license assignment rules with EC2 Dedicated Hosts. By using License Manager, you can ensure that you are compliant with the 90-day license assignment rule.

In this blog post, I show you how to create and enforce a 90-day license limit rule for SQL Server.

Prerequisites

  • An AWS account with permissions to access Amazon EC2, AWS Identity and Access Management (IAM), and AWS Systems Manager. If you do not have permissions, contact your security team.
  • An IAM instance profile for Systems Manager.
  • Instances launched as part of this blog post should have the IAM instance profile attached.

Walkthrough

Step 1: Create a license configuration

  1. Open the AWS Management Console, and under Management & Governance, open the AWS License Manager console.
  2. In the left navigation pane, choose Customer managed licenses, and then choose Create customer managed license.

Figure 1: Customer managed licenses page

You will need the following details for your license configuration:

    • License configuration name: Identifies your license configuration and its resource associations. Enter a descriptive name (for example, SQL Server Enterprise licenses).
    • Description (optional): Enter brief information about the license.
    • License type: Choose Cores because you will be licensing the physical cores of the Dedicated Host.
    • Number of cores (optional): Indicate the number of licenses that you own. License Manager uses this information to help you manage the number of licenses used by your organization. In this example, I use 48, which is the number of physical cores in an r5 Dedicated Host.
    • Enforce license limit: Select this checkbox to limit licensing overuse based on the number of license types. License Manager blocks instance launches after available license types are exhausted.
    • Rules (optional):
      • Under Rule type, choose Tenancy and for Rule value, choose Dedicated Host. As mentioned in the introduction, Microsoft’s 90-day license reassignment restriction applies to BYOL on dedicated physical hardware.
      • Under Rule type, choose License affinity to host (in days) and for Rule value, enter 90 to enforce Microsoft’s 90-day license reassignment restriction.
    • Automated discovery rules (optional): Leave blank.

 


Figure 2: Create license configuration page

  3. Choose Submit to create the license configuration.
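If you prefer automation over the console, here is a sketch of the equivalent call with boto3 (the rule strings follow License Manager’s `#name=value` rule syntax for tenancy and host affinity):

```python
import boto3

lm = boto3.client("license-manager")
response = lm.create_license_configuration(
    Name="SQL Server Enterprise licenses",
    LicenseCountingType="Core",
    LicenseCount=48,                 # physical cores of an r5 Dedicated Host
    LicenseCountHardLimit=True,      # the "Enforce license limit" checkbox
    LicenseRules=[
        "#allowedTenancy=EC2-DedicatedHost",
        "#licenseAffinityToHost=90",  # Microsoft's 90-day reassignment rule
    ],
)
print(response["LicenseConfigurationArn"])
```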

Step 2: Associate an AMI (optional)

When you associate an Amazon Machine Image (AMI) with a license configuration, EC2 instances can be launched into the host resource group associated with that license configuration. A host resource group is a collection of Dedicated Hosts that you can manage as a single entity.

  1. Under Customer managed licenses, choose the license configuration you created in the previous step.
  2. From Actions, choose Associate AMI.

Figure 3: Associate AMI option in Actions menu

  3. Choose one or more AMIs and then choose Associate.

Step 3: Create a host resource group

The next step is to create a host resource group.

  1. From the left navigation pane, choose Host resource groups, and then choose Create host resource group.

Figure 4: Host resource groups page

You will need the following details for your host resource group:

  • Host resource group name: Identifies your host resource group and its resource associations. Enter a descriptive name (for example, SQL_Server_group).
  • Description (optional): Enter brief information about the host resource group.
  • Share host resource group with all my member accounts. Leave this option cleared. When selected, it allows instances to be launched into the host resource group from member accounts.
  • EC2 Dedicated Host management settings
    • Allocate hosts automatically. Select this option, which allocates a new host if additional capacity is required to launch an instance.
    • Release hosts automatically. Select this option, which releases hosts when they no longer have an active instance running on them. This helps drive down costs.

Note: In the use case in this post, selecting this option releases the Dedicated Host as soon as the last instance on it is terminated, but licenses allocated to the Dedicated Host remain unavailable for reassignment until the 90-day limit has expired. AWS encourages customers to do their own due diligence when using the Release hosts automatically setting.

    • Recover hosts automatically. Select this option, which moves instances to a new host in case of unexpected host failures.

Note: Microsoft’s 90-day assignment rule does not apply if the reassignment is due to permanent hardware failure or loss. Because there is no definition of what constitutes a permanent hardware failure or loss in Microsoft’s license terms, AWS encourages customers to do their own due diligence when using the Recover hosts automatically setting with license affinity rules.

  • Instance families: Because this example is for SQL Server, I use the memory-optimized r5 family.
  • Associated license configurations: Choose the license configuration you created earlier.
  2. Choose Create to create your host resource group.

Figure 5: Create host resource group page

Step 4: Launch instances into the host resource group

Now use a launch template to launch an r5.xlarge instance into the host resource group. For information about how to create launch templates, see Launch an instance from a launch template in the Amazon EC2 User Guide for Linux Instances.

Launching an instance through the launch template provisions the Dedicated Host in the SQL_Server_group. Figure 6 shows that the license configuration you created is now tracking Core usage.


Figure 6: SQL_Server_group

Choose the Dedicated Host ID. Figure 7 shows that the r5.xlarge instance has been launched and is operational.


Figure 7: Dedicated Host details

In the AWS License Manager console, choose Customer managed licenses to verify that licenses are being tracked and all 48 cores configured in step 1 are assigned and consumed.


Figure 8: SQL Server Enterprise licenses

The AWS License Manager dashboard shows that a license limit is enforced, as configured in step 1. This means that you can launch r5 instances in the host resource group as long as they fit into the one Dedicated Host. AWS License Manager will block the launch of any additional hosts because the license limit has been reached.


Figure 9: AWS License Manager Dashboard

Step 5: Release the Dedicated Host and confirm 90-day limit assignment rule is enforced

Terminate the r5.xlarge instance on the provisioned Dedicated Host. Because you have created the host resource group with the Release hosts automatically setting, AWS License Manager releases the host after all instances on that host are terminated. You can also release the host manually.

Figure 10 shows that after the instances are terminated, the Dedicated Host’s utilization drops to 0%.


Figure 10: Dedicated Hosts page

Shortly after the host is released, it no longer appears in the host resource group.


Figure 11: Dedicated Hosts for SQL_Server_group

The 48 Core licenses assigned to that host remain reserved. They are unavailable for reuse elsewhere for 90 days from the time the Dedicated Host was launched.

Figure 12 shows this information is also displayed in the dashboard.


Figure 12: Cores in use in the dashboard

Cleanup

  • From the EC2 console, delete the launch template you created in step 4.
  • From the AWS License Manager console, delete the host resource group you created in step 3.
  • From the AWS License Manager console, click Customer managed licenses, then click the license configuration you created in step 1. Under Associated AMIs, select the check box next to the AMI ID and disassociate the AMI.
  • From the AWS License Manager console, click on Customer managed licenses and delete the License Configuration you created in step 1.
  • Delete the IAM instance profile for Systems Manager created as part of the prerequisites.

Conclusion

In this blog post, I showed how AWS License Manager helps you manage and enforce the 90-day reassignment limit for Microsoft licenses. For more information about AWS License Manager, see the AWS License Manager User Guide.

About the Author

Andreas is a Senior Specialist Solutions Architect for Microsoft workloads at Amazon Web Services. He has more than 20 years of experience with Microsoft technologies, systems security, identity, datacenter migrations, and enterprise architecture. He is passionate about helping customers migrate to, and realize the full benefits of, the cloud.

A Crash Course on Sniffing Bluetooth Low Energy

The content below is taken from the original ( A Crash Course on Sniffing Bluetooth Low Energy), to continue reading please visit the site. Remember to respect the Author & Copyright.

Bluetooth Low Energy (BLE) is everywhere these days. If you fire up a scanner on your phone and walk around the neighborhood, we’d be willing to bet you’d pick up dozens if not hundreds of devices. By extension, from fitness bands to light bulbs, it’s equally likely that you’re going to want to talk to some of these BLE gadgets at some point. But how?

Well, watching this three-part video series from [Stuart Patterson] would be a good start. He covers how to get a cheap nRF52840 BLE dongle configured for sniffing, pulling the packets out of the air with Wireshark, and perhaps most crucially, how to duplicate the commands coming from a device’s companion application on the ESP32.

Testing out the sniffed commands.

The first video in the series is focused on getting a Windows box setup for BLE sniffing, so readers who aren’t currently living under Microsoft’s boot heel may want to skip ahead to the second installment. That’s where things really start heating up, as [Stuart] demonstrates how you can intercept commands being sent to the target device.

It’s worth noting that little attempt is made to actually decode what the commands mean. In this particular application, it’s enough to simply replay the commands using the ESP32’s BLE hardware, which is explained in the third video. Obviously this technique might not work on more advanced devices, but it should still give you a solid base to work from.

In the end, [Stuart] takes an LED lamp that could only be controlled with a smartphone application and turns it into something he can talk to on his own terms. Once the ESP32 can send commands to the lamp, it only takes a bit more code to spin up a web interface or REST API so you can control the device from your computer or other gadget on the network. While naturally the finer points will differ, this same overall workflow should allow you to get control of whatever BLE gizmo you’ve got your eye on.
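For readers who want to experiment from a PC before moving to the ESP32, here is a rough sketch of the same replay idea in Python with the third-party bleak library; the address, characteristic UUID, and payload are placeholders standing in for values pulled from your own Wireshark capture:

```python
import asyncio
from bleak import BleakClient

ADDRESS = "AA:BB:CC:DD:EE:FF"   # the lamp's BLE MAC from your scan
CHAR_UUID = "0000fff3-0000-1000-8000-00805f9b34fb"  # sniffed write target
PAYLOAD = bytes.fromhex("7e0704ff00010201ef")       # sniffed command bytes

async def replay() -> None:
    async with BleakClient(ADDRESS) as client:
        # Replay the captured write exactly as the companion app sent it.
        await client.write_gatt_char(CHAR_UUID, PAYLOAD, response=False)

asyncio.run(replay())
```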

Announcing antivirus in Cloudflare Gateway

The content below is taken from the original ( Announcing antivirus in Cloudflare Gateway), to continue reading please visit the site. Remember to respect the Author & Copyright.


Today we’re announcing support for malware detection and prevention directly from the Cloudflare edge, giving Gateway users an additional line of defense against security threats.

Cloudflare Gateway protects employees and data from threats on the Internet, and it does so without sacrificing performance for security. Instead of backhauling traffic to a central location, Gateway customers connect to one of Cloudflare’s data centers in 200 cities around the world where our network can apply content and security policies to protect their Internet-bound traffic.


Last year, Gateway expanded from a secure DNS filtering solution to a full Secure Web Gateway capable of protecting every user’s HTTP traffic as well. This enables admins to detect and block not only threats at the DNS layer, but also malicious URLs and undesired file types. Moreover, admins now have the ability to create high-impact, company-wide policies that protect all users with one click, or they can create more granular rules based on user identity.

Earlier this month, we launched application policies in Cloudflare Gateway to make it easier for administrators to block specific web applications. With this feature, administrators can block those applications commonly used to distribute malware, such as public cloud file storage.

These features in Gateway enable a layered approach to security. With Gateway’s DNS filtering, customers are protected from threats that abuse the DNS protocol for the purposes of communicating with a C2 server, downloading an implant payload, or exfiltrating corporate data. DNS filtering applies to all applications generating DNS queries, and HTTP traffic inspection complements that by going deep on threats that users might encounter as they navigate the Internet.

Today, we are excited to announce another layer of defense with the addition of antivirus protection in Cloudflare Gateway. Now administrators can block malware and other malicious files from being downloaded onto corporate devices as they pass through Cloudflare’s edge for file inspection.

Stopping malware distribution

Protecting corporate infrastructure and devices from becoming infected with malware in the first place is one of the top priorities for IT admins. Malware can wreak a wide range of havoc: business operations may be crippled by ransomware, sensitive data may be exfiltrated by spyware, or local CPU resources may be siphoned for financial gain by cryptojacking malware.

In order to compromise a network, malicious actors commonly attempt to distribute malware through an email attachment or malicious link sent via email. More recently, in order to evade email security, threat actors are beginning to leverage other communication channels, such as SMS, voice, and support ticket software for malware distribution.

The devastating impact of malware, coupled with the large attack surface for potential compromise, makes malware prevention a top-of-mind concern for security teams.

Defense in Depth

No single tool or approach provides perfect security, necessitating a layered defense against threats that make their way past these different tools. Not all threats are previously known to threat researchers, requiring admins to fall back on additional inspection tools once a user successfully connects to a site containing potentially malicious content.

Highly sophisticated threats may make their way into a user’s network, and the primary task for security teams is to quickly determine the scope of the attack against their organization. In these worst-case scenarios, where a user accesses a domain, website, or file that is deemed malicious, the last line of defense for a security team is achieving a clear understanding of the source of the attack against their organization and what resources were affected.

Announcing File Scanning

Today, with Cloudflare Gateway, you can augment your endpoint protection and prevent malicious files from being downloaded onto employee devices. Gateway will scan files inbound from the Internet as they pass through the Cloudflare edge at the nearest data center. Cloudflare manages this layer of defense for customers just as it manages the intelligence used for DNS and HTTP traffic filtering, freeing admins from purchasing additional antivirus licenses or worrying about keeping virus definitions up to date.

When a user initiates a download and that file passes through Gateway at Cloudflare’s edge, the file is sent to the malware scanning engine. This engine contains malware sample definitions and is updated daily. When Gateway scans a file and detects malware, it blocks the file transfer by resetting the connection, which the user sees in their browser as a download error. Gateway also logs the URL the file was downloaded from, the SHA-256 hash of the file, and the fact that the file was blocked due to the presence of malware.

A common approach to security is to “assume breach.” This assumption by security teams acknowledges that not all threats are previously known and optimizes for responding to threats quickly. With Gateway, administrators have complete visibility over the impact the threat had on their organization by leveraging Gateway’s centralized logging, providing clear steps for threat remediation as part of an incident response.

Detecting malware post-compromise

When using an “assume breach” approach, security teams rely on surfacing actionable insights from all available information around an attack. A more sophisticated attack might unfold this way:

  • After a user’s system is exploited through any number of means (hence the “assume breach” approach), a stage 0 implant (or dropper) is placed on the exploited device.
  • This file may be complete, or it may need to fetch additional pieces of a larger implant; it sends a DNS query to a domain that threat researchers have not previously associated with C2 for an attack campaign.
  • The C2 server’s response encodes information indicating where the implant can download its additional components.
  • The implant uses DNS tunneling to a different domain, also unknown to threat research as being malicious, to download additional components of the implant.
  • The fully constructed implant performs any number of tasks assigned by another C2 server. These include exfiltrating local files, moving laterally in the network, encrypting all the files on the local machine, or even using the local CPU for the purpose of mining cryptocurrency.

Cloudflare Gateway goes beyond simply detecting and blocking queries to domains already known to be associated with C2, DNS tunneling, or a Domain Generation Algorithm (DGA). Gateway uses heuristics from threat research to identify queries that appear to be generated by a DGA for the purposes of an attack like the one outlined above, detects these previously unknown threats in an organization’s log data, and proactively blocks them before a security admin needs to intervene manually.
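Cloudflare does not publish the heuristics Gateway uses, but one classic, illustrative signal in DGA detection is the character entropy of the queried label, since algorithmically generated names look far more random than human-chosen ones. A minimal sketch of that single signal (not Cloudflare’s actual method):

#include <math.h>
#include <stdio.h>
#include <string.h>

/* Shannon entropy in bits per character of a DNS label. Algorithmically
   generated names tend to score noticeably higher than human-chosen ones. */
static double label_entropy(const char *label)
{
  int counts[256] = {0};
  size_t len = strlen(label);
  if (len == 0)
    return 0.0;
  for (size_t i = 0; i < len; i++)
    counts[(unsigned char)label[i]]++;
  double h = 0.0;
  for (int c = 0; c < 256; c++) {
    if (counts[c] > 0) {
      double p = (double)counts[c] / (double)len;
      h -= p * log2(p);
    }
  }
  return h;
}

int main(void)
{
  const char *samples[] = { "mail", "cloudflare", "qx7rkzp2wv4hj9" };
  for (int i = 0; i < 3; i++)
    printf("%-16s %.2f bits/char\n", samples[i], label_entropy(samples[i]));
  return 0; /* compile with -lm */
}

A real classifier would combine many such signals with curated threat intelligence; entropy alone misfires on names like CDN hostnames, which is exactly why a heuristic engine backed by threat research is needed.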

Threat research is continually evolving. Cloudflare Gateway takes the burden of keeping pace with security threats off IT admins by delivering insights derived from Cloudflare’s network to protect organizations of any size anywhere they are.

What’s Next

Our goal is to provide sophisticated, but easy to implement, security capabilities to organizations regardless of size so they can get back to what matters to their business. We’re excited to continue to expand Gateway’s capabilities to protect users and their data. DNS tunneling and DGA detection are included in Gateway DNS filtering at no cost for teams of up to 50 users. In-line detection of malware at Cloudflare’s edge will be included with Teams Standard and Teams Enterprise plans.

Stay tuned for filtering at the network level and integration with GRE tunnels — we’re just getting started. Follow this link to sign up today.

Getting Compute Engine resources for batch processing just got easier

The content below is taken from the original ( Getting Compute Engine resources for batch processing just got easier), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you need to run an embarrassingly parallel batch processing workload, it can be tricky to decide how many instances to create in each zone while accounting for available resources, quota limits and your reservations. We are excited to announce a new method of obtaining Compute Engine instances for batch processing that accounts for the availability of resources in the zones of a region. Now available in preview for regional managed instance groups, it can be enabled simply by specifying the ANY value in the API.

The capacity-aware deployment method is particularly useful if you need to easily create many instances with a special configuration, such as virtual machines (VMs) with a specific CPU platform or GPU model, preemptible VMs, or instances with a large number of cores or a large amount of memory.

Now, when deploying instances to run embarrassingly parallel batch processing, such as financial modeling or rendering, you no longer have to figure out which zones support the required hardware and how many instances to create in each zone in a region to accommodate the requested capacity.

Assuming that any distribution of instances across zones works for your batch processing job, and that the workload doesn’t require resilience against zone-level failure, you can now delegate the job of obtaining the requested capacity to a regional managed instance group. A regional MIG with the new distribution shape ANY automatically deploys instances to zones where resources are available to fulfill your request, accounting for your quota limits. This works both when you create a group or when you increase it in size.

If you use reservations to ensure that resources are available for your computation, you should specify reservation affinity in a group’s instance template. A regional MIG with distribution shape ANY utilizes the specified reservations efficiently by prioritizing consumption of unused reserved capacity before provisioning additional resources.
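As a sketch, reservation affinity can be set on the instance template with flags along these lines (the template, machine type, and reservation names are placeholders):

gcloud compute instance-templates create batch-template \
    --machine-type=n2-standard-32 \
    --reservation-affinity=specific \
    --reservation=my-batch-reservation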

A regional MIG with distribution shape ANY automatically deploys instances to zones where capacity is available, takes quotas into account, and prioritizes consumption of a specified reservation.

Depending on the availability of the requested resources, a regional MIG with ANY distribution might deploy all instances to a single zone or spread the instances across multiple zones. The distribution shape ANY is not suitable for highly available serving workloads such as frontend web services because a zone-level failure could result in all or most of the instances becoming unavailable if they happen to be deployed to the zone that failed. 

Getting started

To configure the new distribution shape ANY when creating a regional MIG, look under the “Target distribution shape” setting on the Create instance group screen in the Google Cloud Console:

Configuring a regional managed instance group’s distribution shape

You can also set the distribution shape to ANY for an existing regional MIG—for example, by running a gcloud command:
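The command itself did not survive the page conversion; as a sketch, updating an existing group likely looks like the following (the group name and region are placeholders, and depending on your gcloud version the flag may require the beta component):

gcloud compute instance-groups managed update my-batch-mig \
    --region=us-central1 \
    --target-distribution-shape=ANY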

Summing it up

Obtaining capacity to run an embarrassingly parallel batch processing workload is easier with a regional MIG’s new distribution shape ANY. When deciding how many instances to create in each zone, the regional MIG accounts for the availability of resources in each zone and for your quota limits, and prioritizes consumption of specified reservations.

Visit the regional MIG documentation to learn more about creating instances using the new distribution shape ANY.

Continue your learning at the Cloud Technical Series digital event, March 23-26, and go deeper into VM migration, application modernization, GKE, data analytics, AI/ML and more. Register here.


Getting Started With FreeRTOS And ChibiOS

The content below is taken from the original ( Getting Started With FreeRTOS And ChibiOS), to continue reading please visit the site. Remember to respect the Author & Copyright.

If operating systems weren’t so useful, we would not be running them on every single one of our desktop systems. In the same vein, embedded operating systems provide similar functionality to their desktop counterparts, while targeting a more specialized market. Some are adapted versions of desktop OSes (e.g. Yocto Linux), whereas others are built from the ground up for embedded applications, like VxWorks and QNX. Few of those OSes can run on a microcontroller (MCU), however. When you need to run an OS on something like an 8-bit AVR or 32-bit Cortex-M MCU, you need something smaller.

Something like ChibiOS (‘Chibi’ meaning ‘small’ in Japanese), or FreeRTOS (here no points for originality). Perhaps more accurately, FreeRTOS could be summarized as a multi-threading framework targeting low-powered systems, whereas ChibiOS is more of a full-featured OS, including a hardware abstraction layer (HAL) and other niceties.

In this article we’ll take a more in-depth look at these two OSes, to see what benefits they bring.

Basic feature set

FreeRTOS supports a few dozen microcontroller platforms, the most notable probably being AVR, x86 and ARM (Cortex-M & Cortex-A). In contrast, ChibiOS/RT runs on perhaps fewer platforms, but comes with a HAL that abstracts away hardware devices including I2C, CAN, ADC, RTC, SPI and USB peripherals. Both offer a preemptive multi-tasking scheduler with priority levels, and multi-threading primitives including mutexes, condition variables and semaphores.
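To get a feel for those primitives, here is a minimal, hedged sketch of the native FreeRTOS API with two tasks sharing a resource through a mutex (task names, stack sizes, and timings are arbitrary):

#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"

static SemaphoreHandle_t xMutex;

static void vWorkerTask(void *pvParameters)
{
  (void)pvParameters;
  for (;;) {
    /* Block until the shared resource is free */
    if (xSemaphoreTake(xMutex, portMAX_DELAY) == pdTRUE) {
      /* ... touch the shared resource here ... */
      xSemaphoreGive(xMutex);
    }
    vTaskDelay(pdMS_TO_TICKS(100));
  }
}

int main(void)
{
  xMutex = xSemaphoreCreateMutex();
  /* Two instances of the same task function, contending for the mutex */
  xTaskCreate(vWorkerTask, "worker1", configMINIMAL_STACK_SIZE, NULL, tskIDLE_PRIORITY + 1, NULL);
  xTaskCreate(vWorkerTask, "worker2", configMINIMAL_STACK_SIZE, NULL, tskIDLE_PRIORITY + 1, NULL);
  vTaskStartScheduler();   /* does not return while the scheduler runs */
  for (;;);
}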

The corollary of this comparison then seems to be that FreeRTOS is good for basic multi-threading features, whereas ChibiOS/RT offers a more holistic approach through its HAL. The presence of the HAL also means that one can theoretically target ChibiOS/RT and recompile the same code for different MCU platforms. For FreeRTOS one would still have to use another framework to use hardware peripherals, whether this would be CMSIS, ST’s HAL or something else, and this decreases portability.

In the next sections we’ll be working through a basic example for each of these two OSes, to gain a deeper understanding of what developing with them is like.

FreeRTOS with CMSIS-RTOS

An HTML page, served from an STM32F746ZG MCU.

For a simple example of how to work with FreeRTOS, the HTTP server example by ST for the Nucleo-F746ZG STM32 development board is a good start. I have also made a self-contained version with all dependencies and a Makefile, for use with the Arm Cortex-M GCC toolchain.

This example project demonstrates how to combine the CMSIS-RTOS API, FreeRTOS and the LwIP networking stack to create a Netconn-based HTTP server which can serve documents and images. The Netconn API of LwIP is a higher-level API than the raw API, which makes it the preferred choice if one has no special needs when it comes to networking.

The entry point of the demo project is found in Core/Src/Main.cpp. Its purpose is mostly to set up the firmware: configure peripherals and clocks, then initialize the first threads. Note that it is not the native FreeRTOS syntax for threads (tasks) being used here, but that of CMSIS-RTOS, e.g.:

/* Define a thread named 'Start' that runs StartThread() at normal
   priority with five times the minimal stack size, then create it. */
osThreadDef(Start, StartThread, osPriorityNormal, 0, configMINIMAL_STACK_SIZE * 5);
osThreadCreate (osThread(Start), NULL);

static void StartThread(void const * argument)
{
    /* .. */ 
}

CMSIS-RTOS is part of the Cortex Microcontroller Software Interface Standard, or CMSIS for short. It is a vendor-independent hardware abstraction layer (HAL) for Arm Cortex-based MCUs. In the case of embedded RTOSes, CMSIS provides the CMSIS-RTOS specification, which allows software to be written for a generic RTOS API and thus made portable across Cortex-M (and Cortex-A). Each supported RTOS then provides a CMSIS-RTOS implementation that maps the two sets of API calls.

In this example we are using the more basic CMSIS-RTOS v1 API with FreeRTOS. For newer MCUs with ARMv8 support as well as multi-core and Cortex-A, the RTOS v2 interface is a better match. The RTOS v2 interface is also supported by FreeRTOS, and the necessary files for this are found under Middlewares/Third_Party/FreeRTOS/Source/CMSIS_RTOS_V2, next to the folder with files for RTOS v1.
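For comparison, a hedged sketch of the same thread creation under the RTOS v2 API — the AppInit() wrapper and the attribute values are illustrative, not taken from the demo:

#include "cmsis_os2.h"

static void StartThread(void *argument)   /* note: v2 drops the const */
{
  (void)argument;
  for (;;) { /* ... */ }
}

static const osThreadAttr_t startAttr = {
  .name       = "Start",
  .priority   = osPriorityNormal,
  .stack_size = 5 * 128                   /* v2 specifies the stack in bytes */
};

void AppInit(void)
{
  /* osThreadNew() replaces the osThreadDef()/osThreadCreate() pair of v1 */
  osThreadNew(StartThread, NULL, &startAttr);
}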

In the code snippet from earlier we saw how a new thread gets created. At the moment we created the Start thread, however, no scheduler was running yet. We start it with a call to osKernelStart(). After this, the code in the StartThread() function is scheduled and executed. Not surprisingly, this function starts the main threads which will form our HTTP server:

static void StartThread(void const * argument)
{ 
  /* Create tcp_ip stack thread */
  tcpip_init(NULL, NULL);
  
  /* Initialize the LwIP stack */
  Netif_Config();
  
  /* Initialize webserver demo */
  http_server_netconn_init();
  
  /* Notify user about the network interface config */
  User_notification(&gnetif);
  
#ifdef USE_DHCP
  /* Start DHCPClient */
  osThreadDef(DHCP, DHCP_thread, osPriorityBelowNormal, 0, configMINIMAL_STACK_SIZE * 2);
  osThreadCreate (osThread(DHCP), &gnetif);
#endif

  for( ;; )
  {
    /* Delete the Init Thread */ 
    osThreadTerminate(NULL);
  }
}

We first call tcpip_init(), which creates the LwIP TCP/IP processing thread (tcpip_thread). Netif_Config() is the network interface configuration function in our code. It calls the LwIP functions netif_add() and netif_set_default() to add our new network interface and set it as the default. With this, we have LwIP fully initialized.
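A hedged sketch of what a Netif_Config() of this kind boils down to, assuming the DHCP case (gnetif and ethernetif_init come with the ST port):

static void Netif_Config(void)
{
  ip4_addr_t ipaddr, netmask, gw;

  /* With DHCP the addresses start out as 0.0.0.0 and are filled in later */
  ip4_addr_set_zero(&ipaddr);
  ip4_addr_set_zero(&netmask);
  ip4_addr_set_zero(&gw);

  /* Register the interface with LwIP; tcpip_input hands incoming packets
     to the tcpip_thread created by tcpip_init() */
  netif_add(&gnetif, &ipaddr, &netmask, &gw, NULL, &ethernetif_init, &tcpip_input);

  /* Make it the default route and bring it up */
  netif_set_default(&gnetif);
  netif_set_up(&gnetif);
}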

The http_server_netconn_init() function is found in httpserver-netconn.c. It creates a new thread called HTTP, which runs the code in http_server_netconn_thread. This sets up the server socket on port 80 and waits for new incoming connections. These are then handled by the http_server_serve() function, which is a simple if/else block for parsing HTTP requests and serving either the static file content (hard-coded in byte arrays), or showing the dynamic information (for a /STM32F7xxTASKS.html request) generated by DynWebPage():

void DynWebPage(struct netconn *conn)
{
  portCHAR PAGE_BODY[512];
  portCHAR pagehits[10] = {0};

  memset(PAGE_BODY, 0, 512);

  /* Update the hit count */
  nPageHits++;
  sprintf(pagehits, "%d", (int)nPageHits);
  strcat(PAGE_BODY, pagehits);
  /* NOTE: the HTML markup in these strings was mangled when the original page
     was converted; the <br> tags are reconstructed from the visible line breaks. */
  strcat((char *)PAGE_BODY, "<br><br><pre>Name          State  Priority  Stack   Num");
  strcat((char *)PAGE_BODY, "<br>---------------------------------------------<br>");

  /* The list of tasks and their status */
  osThreadList((unsigned char *)(PAGE_BODY + strlen(PAGE_BODY)));
  strcat((char *)PAGE_BODY, "<br><br>---------------------------------------------");
  strcat((char *)PAGE_BODY, "<br>B : Blocked, R : Ready, D : Deleted, S : Suspended<br>");

  /* Send the dynamically generated page */
  netconn_write(conn, PAGE_START, strlen((char*)PAGE_START), NETCONN_COPY);
  netconn_write(conn, PAGE_BODY, strlen(PAGE_BODY), NETCONN_COPY);
}

The interesting part about this function is that it gives an insight into the active threads, as obtained from a call to osThreadList(). Although not an official part of the v1 CMSIS-RTOS API, it does provide useful functionality. This does show, however, that although the CMSIS-RTOS HAL is useful, it is imperfect: it may not cover more exotic use cases by default, or may fail to expose APIs from the underlying OS.

A Nucleo-F746ZG development board.

That aside for now, the rest of the StartThread() function holds few surprises: the User_notification() function (found in app_ethernet.c) sets the LEDs on the Nucleo development board to indicate the connection status. If we have enabled DHCP support, a thread is created for this as well, using DHCP_thread() from that same source file. The DHCP thread tries to obtain an IP address using the DHCP functionality in LwIP and set this for the interface which we created earlier.

At this point we can compile the project. Assuming we have obtained the arm-none-eabi GCC toolchain via the Arm download page or via our operating system’s package manager so that the compiler is on the system path, compiling the Makefile-based project can be done with a simple call to make. Flashing to a Nucleo-746ZG development board requires that OpenOCD is installed, after which a simple make flash with the board connected suffices.

Chibi: Perhaps not so small

Developing with ChibiStudio.

As alluded to earlier, ChibiOS is (ironically) a lot larger than FreeRTOS in terms of its feature set. This also becomes apparent when simply getting started with a new ChibiOS project. Whereas FreeRTOS, as we saw earlier, can comfortably be just the RTOS within a HAL like CMSIS-RTOS, ChibiOS has a lot of functionality which is not covered by that API. For this reason, the ChibiOS project has its own (Eclipse-based) IDE in the form of ChibiStudio, which comes with demo projects preinstalled.

On the website Play Embedded, a large number of tutorials and articles on ChibiOS can be found, such as this introduction article, which also covers getting started with ChibiStudio. ChibiOS’s complexity also shows in the configuration files, which include:

  • chconf.h, for configuring kernel options.
  • halconf.h, for configuring the HAL.
  • mcuconf.h, containing information pertaining to the specific MCU that is being targeted.
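For instance, drivers in halconf.h are switched on and off with simple defines; a representative (abbreviated) excerpt:

#define HAL_USE_PAL      TRUE    /* GPIO abstraction */
#define HAL_USE_SERIAL   TRUE    /* needed for a UART serial driver */
#define HAL_USE_I2C      FALSE
#define HAL_USE_SPI      FALSE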

The ‘Blinky’ example project as provided with the ChibiOS download package for the STM32F042K6 MCU (as found on the ST Nucleo-F042K6 board) gives a pretty solid overview of what a ChibiOS project looks like. Of note here is the use of the ChibiOS/HAL module, which allows for the use of the UART2 peripheral, using ChibiOS’s serial driver.
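A condensed sketch in the spirit of that demo — the LED line name and the serial driver instance are board-specific assumptions, not copied from the example:

#include "ch.h"
#include "hal.h"

static THD_WORKING_AREA(waBlinker, 128);
static THD_FUNCTION(Blinker, arg) {
  (void)arg;
  while (true) {
    palToggleLine(LINE_LED_GREEN);   /* board-specific line name */
    chThdSleepMilliseconds(500);
  }
}

int main(void) {
  halInit();                         /* initialize the ChibiOS HAL */
  chSysInit();                       /* initialize the RT kernel */
  sdStart(&SD2, NULL);               /* UART via the serial driver, default config */
  chThdCreateStatic(waBlinker, sizeof(waBlinker), NORMALPRIO, Blinker, NULL);
  while (true) {
    chnWrite(&SD2, (const uint8_t *)"blink\r\n", 7);
    chThdSleepMilliseconds(1000);
  }
}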

Retargeting the code to another MCU should be a matter of updating the configuration files and recompiling, though one gets the impression that this is meant to be done via the IDE rather than by hand. Integration with other IDEs does not appear to be well supported either, from a cursory look. This would likely mean becoming very cozy with the Doxygen-generated documentation and other information out there.

At the same time, ChibiOS does support CMSIS-RTOS, and also offers two different kernels: the RT (real-time) one, and NIL, which is basically just trying to be as small as possible in terms of code size. This trade-off doesn’t appear to affect performance too much if their benchmarks are to be believed, making it an interesting option for a new embedded (RT)OS project.

Wrapping up

In this article we had a look at some of the things which one will encounter when deciding to develop using either FreeRTOS or ChibiOS. While both have their strong and weak points, the main point which one should take away is that they’re two very different beasts when it comes down to it. Both in the features which they provide, and the needs they target.

If one already uses CMSIS, then slotting in FreeRTOS is simple and straightforward, allowing one to use other CMSIS-targeting code out there with few if any changes. ChibiOS, on the other hand, is more its own thing, which isn’t necessarily a negative. Maybe it’s most helpful to look at FreeRTOS as a helpful module one can bolt onto CMSIS and other frameworks to add multi-threading support, whereas ChibiOS is more akin to NuttX and Zephyr: a one-stop solution for all your needs.

Azure Remote Rendering is now generally available

The content below is taken from the original ( Azure Remote Rendering is now generally available), to continue reading please visit the site. Remember to respect the Author & Copyright.

Build immersive mixed reality experiences with Azure Remote Rendering, now generally available. Use it to enable high fidelity 3D visualization of objects and view models with a billion or more polygons without decimation.

World’s First eVTOL Airport Will Land This November

The content below is taken from the original ( World’s First eVTOL Airport Will Land This November), to continue reading please visit the site. Remember to respect the Author & Copyright.

We have to admit that flying cars still sound pretty cool. But if we’re ever going to get this idea off the ground, there’s a truckload of harsh realities that must be faced head-on. The most obvious and pressing issue might seem to be the lack of flying cars, but that’s not really a problem. Air taxis are already in the works from companies like Airbus, Rolls-Royce, and Cadillac, who premiered theirs at CES this year.

Where we’re going, we don’t need roads. But we do need infrastructure to support this growing category of air traffic that includes shipping drones that are already in flight. Say no more, because by November 2021, the first airport built especially for flying cars is slated to be operational in England.

Image via Hyundai

British startup Urban Air Port is building their flagship eVTOL hub smack dab in the center of Coventry, UK, a city once known as Britain’s Detroit due to the dozens of automobile makers who have called it home. They’re calling this grounded flying saucer-looking thing Air One, and they are building it in partnership with Hyundai, thanks to a £1.2 million ($1.65M) grant from the British government. Hyundai is developing its own eVTOL, which it plans to release in 2028.

Starting in November 2021, this temporary, pop-up eVTOL hub will be used to give live demonstrations that show the viability of these electric air vehicles for transporting both passengers and goods on a regular basis, as well as in heightened response to natural disasters. The hubs themselves will be small — 60% smaller than a heliport, which is their closest living cousin. They require no runway, and can be powered completely off-grid if necessary. Urban Air Port expects to be able to stand up one of these facilities in a matter of days, which makes them ideal for getting supplies into disaster-stricken areas of the world.

Even though the overall footprint will be smaller, these hubs will still need parking lots, bus stops, and other support for ground transportation. Fortunately, this is a whole-future endeavor and the hub is designed to be harmonious with other sustainable modes of electric transport. We’re picturing an EV charger in every parking space, all of which are shaded beneath a roof covered with responsive solar panels. Oh, and there’s a really nice bus stop.

If You Build It, They Will Come

On the one hand, it totally makes sense to start building these hubs. Again, you have to start somewhere, and I know I would feel a lot better about getting into an air taxi after a bit of front-row education. Like Urban Air Port founder Ricky Sandhu says in the video below, cars need roads, trains need rails, and planes need airports. And they all need places for parking, embarking, and disembarking. Air taxis and shipping drones need places for people and goods to load in and load out of them. From the looks of it, these hubs are more than just storage and a launch pad; they’re more akin to, well, small urban airports or helipads with amenities like couches and restrooms and maybe a vending machine with masks and sanitizer.

On the other hand, we still have a global pandemic going on that has changed the way we work, shop, and do just about everything. There’s barely anyone using regular airplanes these days. We have to wonder how much use these near-future air taxis would get, what with way fewer people actually going to a job, and no drive-through coffee options in the sky as of yet.

Urban Air Port is planning to build 200 of these hubs across the UK and abroad within the next five years. We’re excited to see where this project goes — how many hubs end up getting built, and where. NASA thinks the Urban Air Mobility market could be worth a lot in the States, but cites the current lack of infrastructure as a major barrier. Don’t tell that to Archer Aviation, a California start-up that plans to launch a fleet of air taxis as early as 2024.

Office 365 Migration IMAP Folder Update Script

The content below is taken from the original ( Office 365 Migration IMAP Folder Update Script), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today I had to manually migrate about 30 emails to Office 365. After the emails were migrated, I was manually updating each of the IMAP folders to get the emails within to show until I found a handy script.

If any of you run into the same issue of users’ emails hiding within their IMAP folders after a migration, this script might prove useful!

https://www.howto-outlook.com/howto/fix-imported-imap-folders.htm

submitted by /u/MountainSubie to r/msp

21 Google Cloud tools, each explained in under 2 minutes

The content below is taken from the original ( 21 Google Cloud tools, each explained in under 2 minutes), to continue reading please visit the site. Remember to respect the Author & Copyright.

Need a quick overview of Google Cloud core technologies? Quickly learn these 21 Google Cloud products—each explained in under two minutes.

1. BigQuery in a minute

Storing and querying massive datasets can be time consuming and expensive without the right infrastructure. This video gives you an overview of BigQuery, Google’s fully-managed data warehouse. Watch to learn how to ingest, store, analyze, and visualize big data with ease.

2. Filestore in a minute

Filestore is a managed file storage service that provides a consistent view of your file system data and steady performance over time. In this video, we give you an overview of Filestore, showing you what it does and how you can use it for your developer projects.

3. Local SSD in a minute

Need a tool that gives you extra storage for your VM instances? This video explains what a Local SSD is and the different use cases for it. Watch to learn if this ephemeral storage option fits best with your developer projects.

4. Persistent Disk in a minute

What are persistent disks? How can they help when working with virtual machines? This video gives you a snackable synopsis of what Persistent Disk is and how you can use it as an affordable, reliable way to store and manage the data for your virtual machines.

5. Cloud Storage in a minute

Managing file storage for applications can be complex, but it doesn’t have to be. In this video, learn how Cloud Storage allows enterprises and developers alike to store and access their data seamlessly without compromising security or hindering scalability.

6. Anthos in a minute

Modernizing your applications while keeping complexity to a minimum is no easy feat. In this video, learn why Anthos is a great platform for providing greater observability, managing configurations, and securing multi and hybrid cloud applications.

7. Google Kubernetes Engine in a minute

In this video, watch and learn how Google Kubernetes Engine (GKE), our managed environment for deploying, managing, and scaling containerized applications using Google infrastructure, can increase developer productivity, simplify platform operations, and provide greater observability.

8. Compute Engine in a minute

How do you migrate existing VM workloads to the cloud? In this video, get a quick overview of Compute Engine and how it can help you seamlessly migrate your workloads to the cloud.

9. Cloud Run in a minute

What is Cloud Run? How does it help you build apps? In this video, get an overview of Cloud Run, a fully managed serverless platform that allows you to create applications seamlessly. Watch to learn how you can use Cloud Run for your developer projects.

10. App Engine in a minute

Learn how this serverless application platform allows you to write your code in any supported language, run custom containers with the framework of your choice, and easily deploy and run your code in the cloud.

11. Cloud Functions in a minute

Get a quick overview of Cloud Functions, our scalable pay-as-you-go functions as a service (FaaS) to run your code with zero server management.

12. Firestore in a minute

Cloud Firestore is a NoSQL document database that lets you easily store, sync, and query data for your mobile and web apps, at global scale. In this video, learn how to use Firestore and discover features that simplify app development without compromising security.

13. Cloud Spanner in a minute

Cloud Spanner is a fully managed relational database with unlimited scale, strong consistency, and up to 99.999% availability. In this video, you’ll learn how Cloud Spanner can help you create time-sensitive, mission critical applications at scale.

14. Cloud SQL in a minute

Cloud SQL is a fully-managed database service that helps you set up, maintain, manage, and administer your relational databases on Google Cloud. In this video you’ll learn how Cloud SQL can help you with time-consuming tasks such as patches, updates, replicas, and backups so you can focus on designing your application.

15. Memorystore in a minute

Memorystore is a fully managed and highly available in-memory service for Google Cloud applications. This tool can automate complex tasks, while providing top-notch security by integrating IAM protocols without increasing latency. Watch to learn what Memorystore is and what it can do to help in your developer projects.

16. Bigtable in a minute

Cloud Bigtable is a fully managed, scalable NoSQL database service for large analytical and operational workloads. In this video, you’ll learn what Bigtable is and how this key-value store supports high read and write throughput, while maintaining low latency.

17. BigQuery ML in a minute 

BigQuery ML lets you create and execute machine learning models in BigQuery by using standard SQL queries. In this video, learn how you can use BigQuery ML for your machine learning projects.

18. Dataflow in a minute

Dataflow is a fully managed streaming analytics service that minimizes latency, processing time, and cost through autoscaling and batch processing. In this video, learn how it can be used to deploy batch and streaming data processing pipelines.

19. Cloud Pub/Sub in a minute

Cloud Pub/Sub is an asynchronous messaging service that decouples services that produce events from services that process events. In this video, you’ll learn how you can use it as messaging-oriented middleware or for event ingestion and delivery in streaming analytics pipelines, while providing consistent performance at scale and high availability.

20. Dataproc in a minute

Dataproc is a managed service that lets you take advantage of open source data tools like Apache Spark, Flink and Presto for batch processing, SQL, streaming, and machine learning. In this video, you’ll learn what Dataproc is and how you can use it to simplify data and analytics processing.

21. Data Fusion in a minute

Cloud Data Fusion is a fully managed, cloud-native, enterprise data integration service for quickly building and managing data pipelines. In this video, you’ll learn how Cloud Data Fusion can help you build smarter data marts, data lakes, and data warehouses.

How to always show Fewer or More Details in File Transfer Dialog Box in Windows 10

The content below is taken from the original ( How to always show Fewer or More Details in File Transfer Dialog Box in Windows 10), to continue reading please visit the site. Remember to respect the Author & Copyright.

By default, when you initiate a file operation, which is basically Copy/Cut/Move/Paste or Delete, […]

This article How to always show Fewer or More Details in File Transfer Dialog Box in Windows 10 first appeared on TheWindowsClub.com.

GB Renewable generation forecast (including Octopus energy agile tariff) display using Inky wHAT and Pi Zero

The content below is taken from the original ( GB Renewable generation forecast (including Octopus energy agile tariff) display using Inky wHAT and Pi Zero), to continue reading please visit the site. Remember to respect the Author & Copyright.

submitted by /u/Andybrace to r/raspberry_pi