Creating Packer images using AWS Systems Manager Automation

The content below is taken from the original ( Creating Packer images using AWS System Manager Automation), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you run AWS EC2 instances in AWS, then you are probably familiar with the concept of pre-baking Amazon Machine Images (AMIs). That is, preloading all needed software and configuration on an EC2 instance, then creating an image of that. The resulting image can then be used to launch new instances with all software and configuration pre-loaded. This process allows the EC2 instance to come online and be available quickly. It not only simplifies deployment of new instances but is especially useful when an instance is part of an Auto Scaling group and is responding to a spike in load. If the instance takes too long to be ready, it defeats the purpose of dynamic scaling.

A popular tool used by customers to pre-bake AMIs is packer.io. Packer is an open source tool for creating identical machine images for multiple platforms from a single source configuration.

Packer is lightweight and many customers run it as part of a CI/CD pipeline. It is very easy to set up its execution with AWS CodeBuild and make it part of your deployment process. However, many customers would like to manage the creation of their AMIs using AWS Systems Manager instead. We recently released the ability to execute custom Python or PowerShell scripts with Systems Manager Automation; details can be found here. With this new functionality, it is easier than ever to integrate Packer with AWS Systems Manager Automation.

In this blog post, we walk through the process of using the new aws:executeScript Automation action as part of a workflow to create custom AMIs using Packer. Before starting, you may want to get familiar with how the feature works. Here are some links for review:

Walkthrough

For our example, we are using an AWS-provided SSM document that automates the process of running Packer. You can find more information about this SSM document in the documentation.

Set up your environment

The first step to successfully building Packer images with SSM is setting up all the requirements. Here is what you need:

  • AWS Account with administrator access so we can set up the different components. After the components are set up, it is possible to set up an IAM user with limited access that can only execute the Packer automation. More information can be found here.
  • IAM Role to execute the automation and also run the packer build. See the section on IAM credentials below.
  • Packer template file (we provide a sample one below for testing)

IAM credentials

To execute automation workflows, we must create an IAM role that can be used by the SSM service to perform the actions on your behalf. We’ve simplified the process of creating this role by providing a managed IAM Policy called AmazonSSMAutomationRole. This policy has the minimum requirements to execute Automation actions.

There is some good information on this topic here.

For Packer, we also must add some additional permissions to be able to launch the temporary instance, tag it, and create the image. The Packer features you are using dictate which additional permissions should be added to the role. For our example in this blog post, we use the steps provided here to create the role, and we also add the following inline policy to the role for Packer functions. This is only a sample; depending on how you are customizing your image, you may need to adjust it.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "iam:GetInstanceProfile"
            ],
            "Resource": [
                "arn:aws:iam::*:instance-profile/*"
            ],
            "Effect": "Allow"
        },
        {
            "Action": [
                "logs:CreateLogStream",
                "logs:DescribeLogGroups"
            ],
            "Resource": [
                "arn:aws:logs:*:*:log-group:*"
            ],
            "Effect": "Allow"
        },
        {
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::packer-sample-bucket"
            ],
            "Effect": "Allow"
        },
        {
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::packer-sample-bucket/*"
            ],
            "Effect": "Allow"
        },
        {
            "Action": [
                "ec2:DescribeInstances",
                "ec2:CreateKeyPair",
                "ec2:DescribeRegions",
                "ec2:DescribeVolumes",
                "ec2:DescribeSubnets",
                "ec2:DeleteKeyPair",
                "ec2:DescribeSecurityGroups"
            ],
            "Resource": [
                "*"
            ],
            "Effect": "Allow"
        }
    ]
}

You can use the following CloudFormation template to automate the creation of the IAM role required for this example:

AWSTemplateFormatVersion: "2010-09-09"
Description: "Template to create sample IAM role to execute packer automation using SSM"
Parameters: 
  PackerTemplateS3BucketLocation: 
    Type: String
    Description: Enter the name of the bucket where the packer templates will be stored. This is used to add permissions to the policy. For example, my-packer-bucket
Resources:
    SSMAutomationPackerRole:
        Type: "AWS::IAM::Role"
        Properties:
            RoleName: "SSMAutomationPackerCF"
            ManagedPolicyArns: [
              'arn:aws:iam::aws:policy/service-role/AmazonSSMAutomationRole'
            ]
            AssumeRolePolicyDocument: 
                Version: "2012-10-17"
                Statement: 
                  - 
                    Effect: "Allow"
                    Action: 
                      - "sts:AssumeRole"
                    Principal: 
                        Service: 
                          - "ec2.amazonaws.com"
                          - "ssm.amazonaws.com"

    SSMAutomationPackerInstanceProfile:
        Type: "AWS::IAM::InstanceProfile"
        Properties:
            InstanceProfileName: "SSMAutomationPackerCF"
            Roles:
              - !Ref SSMAutomationPackerRole

    SSMAutomationPackerInlinePolicy:
        Type: "AWS::IAM::Policy"
        Properties:
            PolicyName: "SSMAutomationPackerInline"
            PolicyDocument: 
                Version: "2012-10-17"
                Statement: 
                  - 
                    Effect: "Allow"
                    Action: 
                      - "iam:GetInstanceProfile"
                    Resource: 
                      - "arn:aws:iam::*:instance-profile/*"
                  - 
                    Effect: "Allow"
                    Action: 
                      - "logs:CreateLogStream"
                      - "logs:DescribeLogGroups"
                    Resource: 
                      - "arn:aws:logs:*:*:log-group:*"
                  - 
                    Effect: "Allow"
                    Action: 
                      - "s3:ListBucket"
                    Resource: 
                      - !Sub 'arn:aws:s3:::${PackerTemplateS3BucketLocation}'
                  - 
                    Effect: "Allow"
                    Action: 
                      - "s3:GetObject"
                    Resource: 
                      - !Sub 'arn:aws:s3:::${PackerTemplateS3BucketLocation}/*'
                  - 
                    Effect: "Allow"
                    Action: 
                      - "ec2:DescribeInstances"
                      - "ec2:CreateKeyPair"
                      - "ec2:DescribeRegions"
                      - "ec2:DescribeVolumes"
                      - "ec2:DescribeSubnets"
                      - "ec2:DeleteKeyPair"
                      - "ec2:DescribeSecurityGroups"
                    Resource: 
                      - "*"
            Roles: 
              - !Ref SSMAutomationPackerRole

    SSMAutomationPackerPassrolePolicy:
        Type: "AWS::IAM::Policy"
        Properties:
            PolicyName: "SSMAutomationPackerPassrole"
            PolicyDocument: 
                Version: "2012-10-17"
                Statement: 
                  - 
                    Sid: "SSMAutomationPackerPassrolePolicy"
                    Effect: "Allow"
                    Action: "iam:PassRole"
                    Resource: !GetAtt SSMAutomationPackerRole.Arn
            Roles: 
              - !Ref SSMAutomationPackerRole
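
As an example, assuming you saved this template as packer-role.yaml and your template bucket is named my-packer-bucket (both placeholder names), the stack could be created with the AWS CLI:

aws cloudformation deploy \
    --template-file packer-role.yaml \
    --stack-name ssm-automation-packer-role \
    --capabilities CAPABILITY_NAMED_IAM \
    --parameter-overrides PackerTemplateS3BucketLocation=my-packer-bucket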

Running a Packer automation workflow

Packer automates all the steps needed to configure an instance. For the purposes of this example, let’s say we need to configure an EC2 instance to be a LAMP (Linux, Apache, MariaDB, and PHP) server. This would require a number of steps and configuration changes. We have a good tutorial on how to do this manually here.

Let’s use Packer to automate that process. If you walk through the tutorial, you will notice that all of its steps can be summarized with the following script. This script installs all the required software, then creates the needed files.

sudo yum update -y 
sudo amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2 
sudo yum install -y httpd mariadb-server 
sudo systemctl start httpd 
sudo systemctl enable httpd 
sudo usermod -a -G apache ec2-user 
sudo chown -R ec2-user:apache /var/www 
sudo chmod 2775 /var/www && find /var/www -type d -exec sudo chmod 2775 {} \;
find /var/www -type f -exec sudo chmod 0664 {} \;
echo "<?php phpinfo(); ?>" > /var/www/html/phpinfo.php

Let’s use packer to perform all those steps and get our LAMP instance ready.

Packer uses a template file to define all the elements of the build process. We won’t go into all the details of how packer works, but you can find good information on that topic here.

Here is the packer template file we use:

{
    "builders": [
      {
        "type": "amazon-ebs",
        "region": "us-east-1",
        "source_ami": "ami-00068cd7555f543d5",
        "instance_type": "m5.large",
        "ssh_username": "ec2-user",
        "ami_name": "packer-testing-ebs-",
        "ssh_timeout": "5m",
        "iam_instance_profile": "SSMAutomationPackerCF",
        "vpc_id": "vpc-4a5b512f",
        "subnet_id": "subnet-6285e349",
        "security_group_id": "sg-7c3a7c1b",
        "associate_public_ip_address": true,
        "run_tags": {
          "Name": "web-server-packer"
        },
        "tags": {
          "Name": "webserver"
        }
      }
    ],
    "provisioners": [
      {
        "type": "shell",
        "inline": [
            "sudo yum update -y",
            "sudo amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2",
            "sudo yum install -y httpd mariadb-server",
            "sudo systemctl start httpd",
            "sudo systemctl enable httpd",
            "sudo usermod -a -G apache ec2-user",
            "sudo chown -R ec2-user:apache /var/www",
            "sudo chmod 2775 /var/www && find /var/www -type d -exec sudo chmod 2775 {} \\;",
            "find /var/www -type f -exec sudo chmod 0664 {} \\;",
            "echo \"<?php phpinfo(); ?>\" > /var/www/html/phpinfo.php"
        ]
      }
    ]
  }
  

Let’s walk through some of the details of this template.

The template is divided into two main sections: builders and provisioners. A builder defines the target platform and its configuration, which Packer uses to build the image. In this case, we will be using amazon-ebs as the builder. A provisioner is the mechanism used to install and configure software on the instance. Packer provides a multitude of provisioners, including popular configuration management tools like Ansible, Puppet, and Chef. For our example, we will be using the shell provisioner, which allows us to execute shell commands on the instance.

In the builders section, you will find the following directives:

  • region: The Region where the packer instance is launched. This should match the Region of the specified VPC and Subnet.
  • source_ami: The AMI we are starting from. In this case, we are using the latest Amazon Linux 2 provided AMI.
  • instance_type: The instance type that is launched by Packer. If you are performing complex installations or compiling software as part of the packer run, it may be a good idea to make this a bigger instance. That allows the process to complete faster and could result in cost savings.
  • ssh_username: The default SSH username defined by the source_ami. In this case, we are using the standard ec2-user for Amazon Linux 2.
  • ami_name: The name of the resulting AMI. The name should be descriptive and follow a standard nomenclature for tracking purposes. In this case, we are leveraging a couple of packer functions to make the AMI name unique. These functions will append the date and time in a clean format at the end of the AMI name.
  • ssh_timeout: How long packer will wait for SSH to be ready on the instance.
  • vpc_id and subnet_id: The VPC and subnet where the instance will launch. Make sure this matches the settings in your account.
  • associate_public_ip_address: Launch the EC2 instance with a public IP address associated. This is important since the instance must communicate with the Systems Manager APIs and will need access to the public internet. You can optionally configure Systems Manager to use an interface VPC endpoint in Amazon Virtual Private Cloud, but you must set this flag to true for Packer to work properly. There is more information on that here and here.

Modify the provided sample file with the correct values for your environment, then save the file to an S3 bucket for later use. Please note this should be the same bucket specified when creating the IAM Role that will be used to execute the automation; see the IAM credentials section of this blog post. Now that we have all the pieces to run the Packer automation, let’s put it all together and walk through an example of how to do it in the console.
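
If you have Packer installed locally, you can optionally sanity-check the template before uploading it. The file and bucket names below are placeholders:

packer validate image.json
aws s3 cp image.json s3://my-packer-bucket/image.json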

Step by step process

  1. Log in to the AWS Management Console with Administrator privileges.
  2. Click on Services, then go to the Systems Manager option.
  3. On the left pane under Actions and Change click on “Automation”
  4. Click on the “Execute Automation” button.
  5. On the Automation Document search field, enter “AWS-RunPacker”. This displays the AWS-RunPacker document in the results; click on it.
  6. Now, from within the detail of the AWS-RunPacker document, click on “Execute Automation”.
  7. Select Simple Execution.
  8. On the input parameters section for the TemplateS3BucketName field, enter the name of the bucket where you saved the packer template file. Ensure the IAM Role has access to the bucket.
  9. On the TemplateFileName field, enter the name of the template file you created. For example, image.json
  10. On the mode drop-down, select Build. Note, you can also at this point select validate or fix. Validate will check if your template file is valid, and fix will find backwards incompatible parts of the template. It will also bring it up to date, so it can be used with the latest version of Packer.
  11. On the “Force” field select True. This forces the build to run even if it finds an image with the same name.
  12. On the “AutomationAssumeRole” field, enter the ARN of the IAM Role that was created in the IAM credentials section of this blog post. This is the IAM Role that is used to execute the automation workflow and build the Packer image.
  13. Click on “Execute”. This starts the workflow and takes you to the Execution Detail screen. Here you can monitor the progress of the execution.
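
The same execution can also be started from the AWS CLI. The bucket name, file name, and role ARN below are placeholders that mirror the console inputs described above:

aws ssm start-automation-execution \
    --document-name "AWS-RunPacker" \
    --parameters '{
        "TemplateS3BucketName": ["my-packer-bucket"],
        "TemplateFileName": ["image.json"],
        "Mode": ["Build"],
        "Force": ["True"],
        "AutomationAssumeRole": ["arn:aws:iam::123456789012:role/SSMAutomationPackerCF"]
    }'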

Once the execution completes successfully, you have a brand-new AMI with all the components of a LAMP server. You can modify and customize this workflow to suit your needs, and simplify your deployments using Systems Manager and Packer.

Please note that there is a limit of 600 seconds for the execution of the aws:executeScript component that we are using to launch Packer. Because of that, if you add many provisioner steps you must ensure that the execution will not exceed this limit. One option to work around this is to use a larger instance type that will perform the steps faster.

Conclusion

In this blog post, we learned how to use the AWS-RunPacker SSM document to create an automation workflow that builds Packer images. Using SSM as part of your image pre-baking process improves your security posture, simplifies the process of building images, and provides auditability and centralized management. For a complete solution to handle image creation pipelines, please reference the EC2 Image Builder service. To learn more, visit this blog.

About the Author

Andres Silva is a Principal Technical Account Manager for AWS Enterprise Support. He has been working with AWS technology for more than 9 years. Andres works with Enterprise customers to design, implement and support complex cloud infrastructures. When he is not building cloud automation he enjoys skateboarding with his 2 kids.

IOT with web interface

The content below is taken from the original ( in /r/ IOT), to continue reading please visit the site. Remember to respect the Author & Copyright.

I need some help. I'm starting up a company that will be logging data from several hundred devices. The data will consist of internal temperature as well as outdoor temperature, barometric pressure, humidity, and audio frequency. The data will update every 1, 5, 15, or 30 min depending on the user's request. My question is: I'm going to need to keep this historical data for about a year, and I want to take this data and display it to the user in a nice chart (line or pie graph). What is the best way to go about doing this?

I have seen some IoT dashboards that allow you to set up a webhook, capture your data from the sensor, and send it to their website, where it displays the content really nicely. But so far I have not found any options to get it from the IoT DB to my website.

Flashing Sonoff Devices With Tasmota Gets Easier

The content below is taken from the original ( Flashing Sonoff Devices With Tasmota Gets Easier), to continue reading please visit the site. Remember to respect the Author & Copyright.

Tasmota is an alternative firmware for ESP boards  that provides a wealth of handy features, and [Mat] has written up a guide to flashing with far greater ease by using Tasmotizer. Among other things, it makes it simple to return your ESP-based devices, like various Sonoff offerings, to factory settings, so hack away!

Tasmotizer is a front end that also makes common tasks, like backing up existing firmware and setting configuration options such as WiFi credentials, effortless. Of course, one can’t really discuss Tasmotizer without bringing up Tasmota, the alternative firmware for a variety of ESP-based devices, so they should be considered together.

Hacks based on Sonoff devices are popular home automation projects, and [Mat] has also written all about what it was like to convert an old-style thermostat into a Nest-like device for about $5 by using Tasmota. A video on using Tasmotizer is embedded below, so give it a watch to get a head start on using it to hack some Sonoff devices.

Password killer FIDO2 comes bounding into Azure Active Directory hybrid environments

The content below is taken from the original ( Password killer FIDO2 comes bounding into Azure Active Directory hybrid environments), to continue reading please visit the site. Remember to respect the Author & Copyright.

A preview of muddy paws all over your on-prem resources, or a passwordless future?

Hybrid environments can now join the preview party for FIDO2 support in Azure Active Directory.…

A Crash Course In Thermodynamics For Electrical Engineers

The content below is taken from the original ( A Crash Course In Thermodynamics For Electrical Engineers), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s a simple fact that, in this universe at least, energy is always conserved. For the typical electronic system, this means that the energy put into the system must eventually leave the system. Typically, much of this energy will leave a system as heat, and managing this properly is key to building devices that don’t melt under load. It can be a daunting subject for the uninitiated, but never fear — Adam Zeloof delivered a talk at Supercon 2019 that’s a perfect crash course for beginners in thermodynamics.

Adam’s talk begins by driving home that central rule, that energy in equals energy out. It’s good to keep in the back of one’s mind at all times when designing circuits to avoid nasty, burning surprises. But it’s only the first lesson in a series of many, which serve to give the budding engineer an intuitive understanding of the principles of heat transfer. The aim of the talk is to avoid getting deep into the heavy underlying math, and instead provide simple tools for doing quick, useful approximations.

Conduction and Convection

Conduction is the area first explored, concerning the transfer of heat between solid materials that are touching. Adam explains how this process is dependent on surface area and how this can be affected by surface condition, and the reasons why we use thermal paste when fitting heatsinks to chips. The concept is likened to that of electrical resistance, and comparisons are drawn between heat transfer equations and Ohm’s law. Thermal resistances can be calculated in much the same way, and obey the same parallel and series rules as their electrical counterparts.

With conduction covered, the talk then moves on to discussion of convection — where heat is passed from a solid material to the surrounding fluid, be it a liquid or a gas. Things get a little wilder here, with the heat transfer coefficient h playing a major role. This coefficient depends on a variety of factors, like the fluid in question and how much it’s moving. For example, free convection in still air may only have a coefficient of 5, whereas forced air cooling with a fan may have a coefficient of 50, drawing away 10 times as much heat. Adam discusses the other factors involved in convection, and how surface area has a major role to play. There’s a great explanation of why heatsinks use fins and extended surfaces to increase the heat transfer rate to the fluid.
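
As a rough worked example with assumed numbers: suppose a chip dissipates 2 W through a conduction path of 5 °C/W into a heatsink whose 0.01 m² surface sees forced air with h = 50 W/m²K. The convective resistance is R = 1/(hA) = 1/(50 × 0.01) = 2 °C/W, so the total temperature rise above ambient is ΔT = P × (R_cond + R_conv) = 2 × (5 + 2) = 14 °C.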

Modelling Thermodynamics

[Adam] demonstrated a heat transfer simulation running on the Hackaday Superconference badge, to much applause.

With the basics out of the way, it’s then time to discuss an example. Given the talk is aimed at an electrical engineering audience, Adam chose to cover the example of a single chip in the middle of a printed circuit board. In three dimensions, the math quickly becomes complex, with many differential equations required to cover conduction and all the various surfaces for convection. Instead, the simulation is simplified down to a quasi-1-dimensional system. Some imperfect assumptions are made to simplify the calculations. While these are spurious and don’t apply in many circumstances, chosen properly, they enable the simple solution of otherwise intractable problems — the magic of engineering! After showing the basic methods involved, Adam shows how such an analysis can be used to guide selection of different cooling methods or heatsink choices, or make other design decisions.

The talk is a great primer for anyone wanting to take a proper engineering approach to solving thermal problems in their designs. And, as a final party piece, Adam closed out the talk with a demonstration of a heat transfer simulation running on the conference badge itself. Thermodynamics can be a dry topic to learn, so it’s great to see a straightforward, intuitive, and engineering-focused approach presented for a general technical audience!

Multi-SSO and Cloudflare Access: Adding LinkedIn and GitHub Teams

The content below is taken from the original ( Multi-SSO and Cloudflare Access: Adding LinkedIn and GitHub Teams), to continue reading please visit the site. Remember to respect the Author & Copyright.

Cloudflare Access secures internal applications without the hassle, slowness or user headache of a corporate VPN. Access brings the experience we all cherish, of being able to access web sites anywhere, any time from any device, to the sometimes dreary world of corporate applications. Teams can integrate the single sign-on (SSO) option, like Okta or AzureAD, that they’ve chosen to use and in doing so make on-premise or self-managed cloud applications feel like SaaS apps.

However, teams consist of more than just the internal employees that share an identity provider. Organizations work with partners, freelancers, and contractors. Extending access to external users becomes a constant chore for IT and security departments and is a source of security problems.

Cloudflare Access removes that friction by simultaneously integrating with multiple identity providers, including popular services like Gmail or GitHub that do not require corporate subscriptions. External users login with these accounts and still benefit from the same ease-of-use available to internal employees. Meanwhile, administrators avoid the burden in legacy deployments that require onboarding and offboarding new accounts for each project.

We are excited to announce two new integrations that make it even easier for organizations to work securely with third parties. Starting today, customers can now add LinkedIn and GitHub Teams as login methods alongside their corporate SSO.

The challenge of sharing identity

If your team has an application that you need to share with partners or contractors, both parties need to agree on a source of identity.

Some teams opt to solve that challenge by onboarding external users to their own identity provider. When contractors join a project, the IT department receives help desk tickets to create new user accounts in the organization directory. Contractors receive instructions on how to sign-up, they spend time creating passwords and learning the new tool, and then use those credentials to login.

This option gives an organization control of identity, but adds overhead in terms of time and cost. The project owner also needs to pay for new SSO seat licenses, even if those seats are temporary. The IT department must spend time onboarding, helping, and then offboarding those user accounts. And the users themselves need to learn a new system and manage yet another password – this one with permission to your internal resources.

Alternatively, other groups decide to “federate” identity. In this flow, an organization will connect their own directory service to their partner’s equivalent service. External users login with their own credentials, but administrators do the work to merge the two services to trust one another.

While this method avoids introducing new passwords, both organizations need to agree to dedicate time to integrate their identity providers – assuming that those providers can integrate. Businesses then need to configure this setup with each contractor or partner group. This model also requires that external users be part of a larger organization, making it unavailable to single users or freelancers.

Both options must also address scoping. If a contractor joins a project, they probably only need access to a handful of applications – not the entire portfolio of internal tools. Administrators need to invest additional time building rules that limit the scope of user permission.

Additionally, teams need to help guide external users to find the applications they need to do their work. This typically ends up becoming a one-off email that the IT staff has to send to each new user.

Multi-SSO with Cloudflare Access

Cloudflare Access replaces corporate VPNs with Cloudflare’s network. Instead of placing internal tools on a private network, teams deploy them in any environment, including hybrid or multi-cloud models, and secure them consistently with Cloudflare’s network.

Administrators build rules to decide who should be able to reach the tools protected by Access. In turn, when users need to connect to those tools, they are prompted to authenticate with their team’s identity provider. Cloudflare Access checks their login against the list of allowed users and, if permitted, allows the request to proceed.

With Multi-SSO, this model works the same way but extends that login flow to other sign-in options. When users visit a protected application, they are presented with the identity provider options your team configures. They select their SSO, authenticate, and are redirected to the resource if they are allowed to reach it.

Cloudflare Access can also help standardize identity across multiple providers. When users login, from any provider, Cloudflare Access generates a signed JSON Web Token that contains that user’s identity. That token can then be used to authorize the user to the application itself. Cloudflare has open sourced an example of using this token for authorization with our Atlassian SSO plugin.

Whether the identity providers use SAML, OIDC, or another protocol for sending identity to Cloudflare, Cloudflare Access generates standardized and consistent JWTs for each user from any provider. The token can then be used as a common source of identity for applications without additional layers of SSO configuration.
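
As a rough illustration of what that looks like in practice (the team name below is a placeholder; the header and certificate endpoint are the ones Cloudflare documents for Access): the token arrives on each request in the Cf-Access-Jwt-Assertion header, and the public signing keys needed to verify it can be fetched from your team domain:

curl https://your-team-name.cloudflareaccess.com/cdn-cgi/access/certs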

Onboard contractors seamlessly

With the Multi-SSO feature in Cloudflare Access, teams can onboard contractors in less than a minute without paying for additional identity provider licenses.

Organizations can integrate LinkedIn, GitHub, or Google accounts like Gmail alongside their own corporate identity provider. As new partners join a project, administrators can add single users or groups of users to their Access policy. Contractors and partners can then login with their own accounts while internal employees continue to use the SSO provider already in place.

With the Access App Launch, administrators can also skip sending custom emails or lists of links to new contractors and replace them with a single URL. When external users login with LinkedIn, GitHub, or any other provider, the Access App Launch will display only the applications they can reach. In a single view, users can find and launch the tools that they need.

The Access App Launch automatically generates this view for each user without any additional configuration from administrators. The list of apps also updates as permissions are added or removed.

Integrate mergers and acquisitions without friction

Integrating a new business after a merger or acquisition is a painful slog. McKinsey estimates that reorganizations like these take 41% longer than planned. IT systems are a frequent, and expensive, reason. According to data from Ernst and Young, IT work represents the third largest one-time integration cost after a merger or acquisition – only beat by real estate and payroll severance.

Cloudflare Access can help cut down on that time. Customers can integrate their existing SSO provider and the provider from the new entity simultaneously, even if both organizations share the same identity provider. For example, users from both groups can continue to login with separate identity services without disruption.

IT departments can then start merging applications or deprecating redundant systems from day one without worrying about breaking the login flow for new users.

Zero downtime SSO migrations

If your organization does not need to share applications with external partners, you can still use Multi-SSO to reduce the friction of migrating between identity providers.

Organizations can integrate both the current and the new provider with Cloudflare Access. As groups within the organization move to the new system, they can select that SSO option in the Cloudflare Access prompt when they connect. Users still on the legacy system can continue to use the provider being replaced until the entire team has completed the cutover.

Regardless of which option users select, Cloudflare Access will continue to capture comprehensive and standard audit logs so that administrators do not lose any visibility into authentication events during the migration.

Getting started

Cloudflare Access’ Multi-SSO feature is available today for more than a dozen different identity providers, including the options for LinkedIn and GitHub Teams announced today. You can follow the instructions here to start securing applications with Cloudflare Access. The first five users are free on all plans, and there is no additional cost to add multiple identity providers.

Now generally available: Managed Service for Microsoft Active Directory (AD)

The content below is taken from the original ( Now generally available: Managed Service for Microsoft Active Directory (AD)), to continue reading please visit the site. Remember to respect the Author & Copyright.

A few months ago, we launched Managed Service for Microsoft Active Directory (AD) in public beta. Since then, our customers have created more than a thousand domains to evaluate the service in their pre-production environments. We’ve used the feedback from these customers to further improve the service and are excited to announce that Managed Service for Microsoft AD is now generally available for everyone and ready for your production workloads.

Simplifying Active Directory management

As more AD-dependent apps and servers move to the cloud, you might face heightened challenges to meet latency and security goals, on top of the typical maintenance challenges of configuring and securing AD Domain Controllers. Managed Service for Microsoft AD can help you manage authentication and authorization for your AD-dependent workloads, automate AD server maintenance and security configuration, and connect your on-premises AD domain to the cloud. The service delivers many benefits, including:

  • Compatibility with AD-dependent apps. The service runs real Microsoft AD Domain Controllers, so you don’t have to worry about application compatibility. You can use standard Active Directory features like Group Policy, and familiar administration tools such as Remote Server Administration Tools (RSAT), to manage the domain. 

  • Virtually maintenance-free. The service is highly available, automatically patched, configured with secure defaults, and protected by appropriate network firewall rules.

  • Seamless multi-region deployment. You can deploy the service in a specific region to enable your apps and VMs in the same or other regions to access the domain over a low-latency Virtual Private Cloud (VPC). As your infrastructure needs grow, you can simply expand the service to additional regions while continuing to use the same managed AD domain.

  • Hybrid identity support. You can connect your on-premises AD domain to Google Cloud or deploy a standalone domain for your cloud-based workloads.
The admin experience.

You can use the service to simplify and automate familiar AD tasks like automatically “domain joining” new Windows VMs by integrating the service with Cloud DNS, hardening Windows VMs by applying Group Policy Objects (GPOs), controlling Remote Desktop Protocol (RDP) access through GPOs, and more. For example, one of our customers, OpenX, has been using the service to reduce their infrastructure management work:

“Google Cloud’s Managed AD service is exactly what we were hoping it would be. It gives us the flexibility to manage our Active Directory without the burden of having to manage the infrastructure,” said Aaron Finney, Infrastructure Architecture, OpenX. “By using the service, we are able to solve for efficiency, reduce costs, and enable our highly-skilled engineers to focus on strategic business objectives instead of tactical systems administration tasks.”

And our partner, itopia, has been leveraging Managed AD to make the lives of their customers easier: “itopia makes it easy to migrate VDI workloads to Google Cloud and deliver multi-session Windows desktops and apps to users on any device. Until now, the customer was responsible for managing and patching AD. With Google Cloud’s Managed AD service, itopia can deploy cloud environments more comprehensively and take away one more piece of the IT burden from enterprise IT staff,” said Jonathan Lieberman, CEO, itopia. “Managed AD gives our customers even more incentive to move workloads to the cloud along with the peace of mind afforded by a Google Cloud managed service.”

Getting started

To learn more about getting started with Managed Service for Microsoft AD now that it’s generally available, check out the quickstart, read the documentation, review pricing, and watch the webinar.

Low code programming with Node-RED comes to GCP

The content below is taken from the original ( Low code programming with Node-RED comes to GCP), to continue reading please visit the site. Remember to respect the Author & Copyright.

Wouldn’t it be great if building a new application were as easy as performing some drag and drop operations within your web browser? This article will demonstrate how we can achieve exactly that for applications hosted on Google Cloud Platform (GCP) with Node-RED, a popular open-source development and execution platform that lets you build a wide range of solutions using a visual programming style, while still leveraging GCP services.

Through Node-RED, you create a program (called a flow) using supplied building blocks (called nodes). Within the browser, Node-RED presents a canvas area alongside a palette of available nodes. You then drag and drop nodes from the palette onto the canvas and link those nodes together by drawing connecting wires. The flow describes the desired logic to be performed by specifying the steps and their execution order, and can then be deployed to the Node-RED execution engine.

One of the key features that has made Node-RED successful is its ability to be easily extended with additional custom nodes. Whenever a new API or technology becomes available, it can be encapsulated as a new Node-RED node and added to the list of available nodes found in the palette. From the palette, it can then be added into a flow for use in exactly the same way that the base supplied nodes are used. These additional nodes can then be published by their authors as contributions to the Node-RED community and made available for use in other  projects. There is a searchable and indexed catalog of contributed Node-RED nodes.

A node hides how it internally operates and exposes a clean consumable interface allowing the new function to be used faster. 

Now, let’s take a look at how to run Node-RED on GCP and use it with GCP services.

Installing Node-RED

You can use the Node Package Manager (npm) to install Node-RED on any environment that has a Node.js runtime. For GCP, this includes Compute Engine, Google Kubernetes Engine (GKE), Cloud Run, and Cloud Shell, as well as other GCP environments. There’s also a publicly available Docker image, which is what we’ll use for this example.

Now, let’s create a Compute Engine instance using the Google Cloud Console and specify the public Node-RED docker image for execution.

Visit the Cloud Console and navigate to Compute Engine. Create a new Compute Engine instance. Check the box labeled “Deploy a container image to this VM instance“.  Enter “nodered/node-red” for the name of the container image:

You can leave all the other settings as their defaults and proceed to completing the VM creation.

Once the VM has started, Node-RED is running. To work with Node-RED, you must connect to it from a browser. Node-RED listens on port 1880. The default VPC network firewall deliberately restricts incoming requests, which means that requests to port 1880 will be denied. The next step is to allow a connection into our network on the Node-RED port. We strongly discourage you from opening up your development Node-RED instance to unrestricted access. Instead, define the firewall rule to only allow ingress from the IP address that your browser presents. You can find your own address by performing a Google search on “my ip address”.
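
If you prefer the command line, the same setup can be sketched with gcloud. The instance name, zone, network tag, and source IP below are placeholders; adjust them for your environment:

gcloud compute instances create-with-container nodered-vm \
    --zone=us-central1-a \
    --container-image=nodered/node-red \
    --tags=node-red

gcloud compute firewall-rules create allow-node-red \
    --network=default \
    --allow=tcp:1880 \
    --source-ranges=203.0.113.10/32 \
    --target-tags=node-red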

Connecting to Node-RED

Now that Node-RED is running on GCP, you can connect to it by pointing your browser at the external public IP address of the VM on port 1880. For example:

http://35.192.185.114:1880

You can now see the Node-RED development environment within your browser:

Working with GCP nodes

At this point, you have Node-RED running on GCP and can start constructing flows by dragging and dropping nodes from the palette onto the canvas and wiring them together. The nodes that come pre-supplied are merely a starter set—there are many more available that you can install and use in future flows. 

At Google, we’ve built a set of GCP nodes to illustrate how to extend Node-RED to interact with GCP functions. To install these nodes, navigate to the Node-RED system menu and select “Manage palette“:

Switch to the Palette tab and then switch to the Install tab within Palette. Search for the node set called “node-red-contrib-google-cloud” and then click install.

Once installed, scroll down through the list of available palette nodes and you’ll find a GCP section containing the currently available GCP building blocks.

Here’s a list of currently available GCP nodes:

  • PubSub in – The flow is triggered by the arrival of a new message associated with a named subscription

  • PubSub out – A new message is published to a named topic

  • GCS read – Reads the content of a Cloud Storage object

  • GCS write – Writes to a new Cloud Storage object

  • Language sentiment – Performs sentiment analysis on a piece of text

  • Vision – Analyzes an image for distinct attributes

  • Log – Writes a log message to Stackdriver Logging

  • Tasks – Initiates a Cloud Tasks instance

  • Monitoring – Writes a new monitoring record to Stackdriver

  • Speech to Text – Converts audio input to a textual data representation

  • Translate – Converts textual data from one language to another

  • DLP – Performs Data Loss Prevention processing on input data

  • BigQuery – Interacts with Google’s BigQuery database

  • FireStore – Interacts with Google’s Firestore database

  • Metadata – Retrieves the metadata of the Compute Engine instance on which Node-RED is running

Going forward, we hope to make additional GCP nodes available. It’s also not hard to create a custom node yourself—check out the public Github repository to see how easy it is to create one.

A sample Node-RED flow

Here is an example flow:

At a high level, this flow listens on incoming REST requests and creates a new Google Cloud Storage object for each request received.

This flow starts with an HTTP input node which causes Node-RED to listen on the /test URL path for an HTTP GET request. When an incoming REST request arrives, the incoming data undergoes some manipulations:

Specifically, two fields are set: one called msg.filename, which is the name of a file to create in Cloud Storage, and the other called msg.payload, which is the content of the new file we are creating. In this example, the query parameters passed in the HTTP request are being logged.

The next node in the flow is a GCP node that performs a Cloud Storage object write that writes/creates a new file. The final node sends a response back concluding the original HTTP request that triggered the flow.
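
To try the flow out, you can send a request to the /test path of your instance. The query parameter name below is just an example, and the IP address is the one used earlier in this walkthrough:

curl "http://35.192.185.114:1880/test?message=hello"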

Securing Node-RED

Node-RED is designed to get you up and running as quickly as possible. To that end, the default environment isn’t configured for security, and we don’t recommend leaving it that way. Fortunately, Node-RED provides security features that can be quickly enabled. These features include requiring authorization to make flow changes and enabling SSL/TLS for encryption of incoming and outgoing data. When initially studying Node-RED, define a firewall rule that only permits ingress from your browser’s IP address.

Visual programming on GCP the Node-RED way

Node-RED has proven itself as a data flow and event processor for many years. Its extremely simple architectural model and low barrier to entry mean that even a novice user can get value from it in a very short period of time. A quick Internet search reveals many tutorials on YouTube, the documentation is mature and polished, and the community is active and vibrant. With the addition of the rich set of GCP nodes that we’ve contributed to the community, you can now incorporate GCP services into Node-RED whether it’s hosted on GCP, on another public cloud, or on-premises.

Sky is New Limit for Dot Com Domain Prices

The content below is taken from the original ( Sky is New Limit for Dot Com Domain Prices), to continue reading please visit the site. Remember to respect the Author & Copyright.

Earlier this week, domain name registrar Namecheap sent out an email to all customers advising them of a secret deal that went down between ICANN and Verisign sometime late last year. It has the potential to change the prices of domain names drastically over time, and thus change the makeup of the Internet as we know it.

Domain names aren’t really owned; they’re rented with an option to renew, and the annual rate that you pay depends both on your provider’s markup and on a wholesale rate that’s the same for all names in that particular domain. This base price is set by ICANN, a non-profit.

Officially, this deal is a proposed Amendment 3 to the contract in place between Verisign and ICANN that governs the “.com” domain. The proposed amendment would let Verisign increase the wholesale rental price of “.com” domain names by 7% per year for the next four years. Then there will be a two-year breather, followed by another four years of 7% annual hikes. And there is no foreseeable end to this cycle. It seems reasonable to assume that the domain name registrars might pass the price gouging on to the consumer, but that really remains to be seen.

The annual wholesale domain name price has been sitting at $7.85 since 2012, and as of this writing, Namecheap is charging $8.88 for a standard “.com” address. If our math is correct, ten years from now, a “.com” domain will cost around $13.50 wholesale and $17.50 retail. This almost-doubling in price will affect both small sites and companies that hold many domain names. And the increase will only get more dramatic with time.
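
For the wholesale figure, the arithmetic works out to eight 7% increases over the ten-year window: 7.85 × 1.07^8 ≈ 13.49, which is where the roughly $13.50 estimate comes from. The retail figure additionally depends on each registrar’s markup.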

So let’s take a quick look at the business of domain names.

The backs of the racks via @tvick on unsplash

They CANN and They Will

The Internet Corporation for Assigned Names and Numbers (ICANN) formed in 1998 with the intent to coordinate, allocate, and assign domain names and IP addresses, assign protocols, and more. ICANN is also responsible for the thirteen root name servers that make up the Internet, and they’re the reason you type words instead of numbers when you want to visit a website. They officially operate as a not-for-profit, public-benefit corporation.

Verisign was founded in 1995 and got their start issuing SSL certificates. They became an internet superpower when they acquired Network Solutions in 2000 and took over the company’s registry functions. As part of this new deal, Verisign will be able to operate as a domain name registrar, stopping just short of being able to sell “.com” real estate themselves, although they could potentially act as a reseller through another company.

As part of the proposed amendment, Verisign will give ICANN $20 million over the next five years, beginning January 2021. Although it isn’t exactly clear how they’ll spend the money, it’s supposed to be earmarked for continued support of things ICANN were already doing, like mitigating threats to DNS security, governing the root name server system, and policing possible name collisions. But people have questioned ICANN’s transparency and accountability — so far, there doesn’t seem to be a system in place to verify that the funds aren’t misappropriated.

ICANN has transparency? Image via ICANN

What’s a Web Address Worth?

If domains are cost-prohibitive, then only the rich can stake a claim in cyberspace, and democracy dies in that regard. Conversely, if land is too cheap, cyber-squatters will snatch up URLs and/or dilute the web with snakeoil sites. Any right answer will need to balance these offsetting effects.

Inflation drives the prices of all other goods up, why not domain names?  But is the rate too high? The average inflation rate in the US runs under 3% per year, and hasn’t seen 7% in ages.

What do you think, Hackaday universe? Is this increase schedule cause for alarm, or is it just business as usual?

We think ICANN could have at least notified registrars sooner, but that may have given consumers too much time to complain. This isn’t the first time that ICANN has ignored public comment in recent memory — last summer when there was talk of removing price caps on “.org” domains, many people commented in favor of keeping prices capped on the other legacy TLDs, and ICANN completely ignored them. A few months later, the .org registry was purchased by a private equity firm, and the details are still being worked out. Is ICANN still working for the public good?

In the tradition of begging forgiveness later, and for all the good it’ll do, ICANN has an open comment period until Friday, February 14th. So go tell ’em how you feel, even if it feels like screaming into the void.

New: PropertySystemView v1.00

The content below is taken from the original ( New: PropertySystemView v1.00), to continue reading please visit the site. Remember to respect the Author & Copyright.

PropertySystemView is a tool that allows you to view and modify the properties of files from a GUI and from the command line, using the property system of the Windows operating system. For example, you can change the ‘Media Created’ timestamp stored in .mp4 files (System.Media.DateEncoded) as well as other metadata stored in media files and office documents, like Title, Comments, Authors, Tags, Date Acquired, Last Saved Date, Content Created Date, Date Imported, Date Taken (EXIF of .jpg files), and more…
PropertySystemView also allows you to set properties of windows. For example, you can set the System.AppUserModel.ID property of a window in order to disable the taskbar grouping of the specified window.

Admin Essentials: Configuring Chrome Browser in your VDI environment

The content below is taken from the original ( Admin Essentials: Configuring Chrome Browser in your VDI environment), to continue reading please visit the site. Remember to respect the Author & Copyright.

As a Chrome Enterprise Customer Engineer, I often get asked by administrators of virtual desktop infrastructure (VDI) environments what our best practices are for backing up user profiles. For example, many ask us how to minimize backup size to speed up user log-in and log-off into the Windows environment and reduce impact on the overall user experience.

Like any browser, Chrome has cache directories. This is where data is temporarily stored for faster future site loading, cookies are saved in order to provide seamless authentication on websites, extensions cache various resources, and more. Chrome stores all of its caches in folders in the user profile directory. 

VDI administrators may prefer to back up the entire Chrome user profile directory, but the more sites a user accesses, the more the size of the cache folder increases, and the number of small files in those folders can become quite large. This can result in an increased user profile folder backup time. For users, this can lead to slower startup time for Chrome. 

Although we’ll cover different scenarios today, Google Sync is still our recommended method for syncing browser profile data between machines. It provides the best experience for both the user and the administrator as users only need to sign in. However, there are some environments where this option isn’t suitable for technical or policy reasons. If you can’t use Google Sync, there are a few approaches that can be used to minimize the backup size.

Moving the cache folders

One option is for administrators to move the cache folders outside of Chrome’s user profile folder. The VDI administrator will need to identify a folder outside of the Chrome user profile directory where the caches will be stored. Caches should still be in the Windows user’s directory, and keeping them in hidden directories can also reduce the risk of the cache being accidentally deleted. 

Examples of such folder shortcuts would be:

  • ${local_app_data}/Chrome Cache

  • ${profile}/Chrome Cache

The user data directory variables can help you specify the best directory for your caches.

Once the folder location has been decided, administrators need to configure the DiskCacheDir policy that relocates the cache folders. This policy can be configured either via Group Policy or registry. Once the policy configuration has been applied onto the machines, Chrome will start storing the cache directories into the newly defined cache folder location. The administrator might have to do a cleanup of older caches from the user profile folder the first time after enabling this policy as the policy does not remove the old caches.
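
As an illustration, the registry equivalent of that policy can be applied with a command along these lines (the cache path is just the example location from above):

reg add "HKLM\Software\Policies\Google\Chrome" /v DiskCacheDir /t REG_SZ /d "${local_app_data}\Chrome Cache" /f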

Then, continue using the standard Chrome user profile directory. This should result in faster startup times for Chrome, as less data will be copied when a user signs-on or signs-off. It’s important to note that this approach will not allow simultaneous sessions from different machines, but it will preserve session data.

Enabling Roaming Profile Support

A second option is to enable the Chrome Roaming Profile Support feature. This will also not allow simultaneous sessions from different machines, and it won’t save a user’s concurrent session data. However, it will enable you to move the Chrome profile into network storage and load it from there. In this scenario, network performance could impact Chrome’s startup time.

To enable Chrome Roaming Profile Support: 

  • Switch on the RoamingProfileSupportEnabled policy.

  • Optional: Use the RoamingProfileLocation policy to specify the location of the roaming profile data, if this is how you’ve configured your environment. The default is ${roaming_app_data}\Google\Chrome\User Data.

  • If you have been using the UserDataDir policy to relocate the regular Chrome profile to a roaming location, make sure to revert this change.
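
As with the cache relocation above, these roaming-profile policies can also be set through the registry. A minimal sketch, with the optional location shown at its default value:

reg add "HKLM\Software\Policies\Google\Chrome" /v RoamingProfileSupportEnabled /t REG_DWORD /d 1 /f
reg add "HKLM\Software\Policies\Google\Chrome" /v RoamingProfileLocation /t REG_SZ /d "${roaming_app_data}\Google\Chrome\User Data" /f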

Advanced controls

While the solutions above will work for most enterprises, there are organizations that want more granular control of the files that are backed up. The approach below allows for more control, but comes at a higher risk, as file names or locations can change at any moment with a Chrome version release. A granular file backup could introduce data corruption, but unlike the other options, it will preserve session data. Here is how to set it up: 

  • Set disk cache to ${local_app_data}\Google\Chrome\User Data with the DiskCacheDir flag.

  • Set user profile to ${roaming_app_data}\Google\Chrome\User Data with the UserDataDir flag.

  • Back up the following files in your VDI configuration:

    • Folder location: AppData\Roaming\Google\Chrome\User Data\.

    • Files: First Run, Last Version, Local State, Safe Browsing Cookies, Safe Browsing Cookies-journal, Bookmarks, Cookies, Current Session, Current Tabs, Extension Cookies, Favicons, History, Last Session, Last Tabs, Login Data, Login Data-journal, Origin Bound Certs, Preferences, Shortcuts, Top Sites, Web Data, Web Data-journal.

Even though this approach preserves session data, it will not enable simultaneous sessions from different machines. 

There you have it—three different approaches IT teams can take to store Chrome caches in VDI environments. Keep in mind that there are a few ways an administrator can push policies onto a machine. For all desktop platforms, Google offers the Chrome Browser Cloud Management (CBCM) console as a one-stop shop for all policy deployments and it allows the admin to set one policy that can be deployed on any desktop OS and Chrome OS. For Windows, the admin can also use GPO or registry settings. For Mac, they can use managed preferences. These templates and more info can be found at chrome.com/enterprise.

If you’d like to learn more about the management options that we make available to IT teams, please visit our Chrome Enterprise web site.

Trying to Create a Script using Microsoft Forms to Create AD and O365 Accounts

The content below is taken from the original ( in /r/ PowerShell), to continue reading please visit the site. Remember to respect the Author & Copyright.

Hey guys,

I am looking into taking the results from an Excel file that Microsoft Forms creates after it is filled out by HR, then somehow taking that data and creating a user in our local AD and also syncing it with O365. The form contains distribution groups, group memberships, email addresses, and first and last names. I am having a little trouble beginning this task as I have never written a script before, and I was looking for some great minds to assist me. Any help is greatly appreciated and I will happily answer any questions!

Petition asking Microsoft to open-source Windows 7 sails past 7,777-signature goal

The content below is taken from the original ( Petition asking Microsoft to open-source Windows 7 sails past 7,777-signature goal), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Free Software Foundation really set the bar high there

Good news everybody! The Free Software Foundation has blown through its self-imposed target of 7,777 signatories in its efforts to persuade Microsoft to make Windows 7 open source.…

Python: What Is It and Why Is It so Popular?

The content below is taken from the original ( Python: What Is It and Why Is It so Popular?), to continue reading please visit the site. Remember to respect the Author & Copyright.

What is Python?

Python is a general-purpose programming language. Look at any survey of popular programming languages from the last few years and Python sits at or near the top of the demand chart every year.

Even people who dislike programming are tempted to change their minds by the simplicity of Python, and many job listings include Python somewhere in the description. Let us see what makes Python so special today and for the coming years. Candidates and companies are upskilling or reskilling in Python irrespective of their qualifications, roles, and experience. Needless to say, Python is very easy to learn.

Cloud Academy offers a Python for Beginners Learning Path that provides an ideal entry point for people wanting to learn how to program with the Python scripting language. The learning path includes courses, exams, hands-on labs, and lab challenges that give you the knowledge and real-world experience you need.

What is a programming language?

Let us spend a few minutes understanding what a programming language is. If you have a programming background, this section will be a quick recap. Computers have powerful built-in components such as CPUs and memory, and a CPU can perform millions of instructions per second. How can we use that computation and storage capability for our own benefit?

For example, we might want to process payroll data for thousands of employees or display hundreds of products on an eCommerce website. A programming language lets us give a computer instructions to perform such activities. You may have heard of Java, Python, C, C++, PHP, COBOL, .NET, and BASIC; these are all examples of high-level programming languages.

Computers understand only electronic signals, and there are just two things a computer knows: the presence of a signal (one) and the absence of a signal (zero). Computers therefore use machine code, made up of zeros and ones, for processing. So how do we give instructions to computers when they don't understand English, Spanish, or French?

Software professionals came up with the idea of high-level languages, which let us give instructions to CPUs to perform some activity. High-level languages are built from simple, plain-English keywords, and each keyword conveys specific instructions to the CPU. We write a set of statements in a high-level language to perform some computational activity.

This set of statements (code) is called a program, and in the software field the activity is known as coding or programming. With that background, let us look at one more basic concept.

Compiler vs interpreter

Software tools are available to convert high-level language code into machine code. We will look at the two kinds of tools that perform this conversion.

A compiler converts your entire code into machine code and stores it as an executable file. On Windows, the popular format is the .exe format; the operating system uses this executable file to launch the program whenever you double-click it. Java, C, and C++ are compiler-based programming languages.

Python is an interpreted high-level language. An interpreter translates one line of code at a time into machine code and executes it before converting the next line of source code. Perl, PHP, and Ruby are other examples of interpreter-based languages. Python interpreters are available for all major operating systems.

Let us see a small Python program. We can call it hello.py; it contains only one line.

print("hello")

This is all the code required to print "hello" on the screen. In many other languages, you would need to write at least three or four lines to do the same work.

Python is globally well supported and adopted by the tech community

Some languages are designed to solve specific sets of problems. For example, Structured Query language (SQL) is meant for working with databases. LISP favored Artificial Intelligence related research and FORTRAN was developed for scientific and engineering applications.

Python is a general-purpose language. You could use Python in different domains such as the data science field, web development, application development, game development, information security field, system administration, image processing, multimedia, IoT, and machine learning.

Python key features

Does Python have the advanced features of a typical high-level language? Yes, it does. Python supports dynamic typing, late binding, garbage collection, and exception handling, and there are more than 200,000 packages covering a wide range of functionality. Python is very stable and has been around for three decades.

Python also supports different programming approaches, including structured programming, object-oriented programming, functional programming, and aspect-oriented programming.
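
As a small illustration (not taken from the article), the following snippet touches a few of the features just mentioned: dynamic typing, exception handling, and a dash of functional style:

def safe_divide(a, b):
    # Exception handling: turn a runtime error into a sentinel value.
    try:
        return a / b
    except ZeroDivisionError:
        return None

value = 10        # dynamic typing: "value" is an int here...
value = "ten"     # ...and a str here, with no declared type

# A touch of functional style: build a list with a comprehension.
halves = [safe_divide(n, 2) for n in range(5)]
print(value, halves)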

Guido van Rossum released Python in 1991; the name comes from the British comedy group Monty Python. There have been two major release lines, Python 2.x and Python 3.x. Python 2.x officially reached end of life on 1 January 2020, and Python 3.8 is the latest version at the time of writing.

Python is free and has strong support from the global Python community. A non-profit organization, the Python Software Foundation, manages and directs resources for Python. Stack Overflow's annual developer survey ranks Python among the most popular and most wanted languages.

[Charts: "Most popular technologies" and "Most wanted languages"]

Source: Stack Overflow’s annual Developer Survey

Why is Python so popular?

Now I am going to share more details on why Python is popular among job seekers.

Software-related services provide employment to millions of people across the globe, and candidates are recruited for many different roles in software development. Below are some of the roles in the software industry where Python skills are important.

Python developer/engineer: As a Python developer, you can work in a variety of jobs, designing and developing front-end and back-end components. You might work on website development using the Django or Flask frameworks. Exposure to databases such as MySQL or MongoDB, along with SQL knowledge, is desirable.

Python automation tester: Software testers can use Selenium with Python and pytest for test automation.

System administrator: In operations, Python is heavily used as a scripting language to automate DevOps and routine day-to-day activities. In the AWS cloud environment, Boto is the Amazon Python SDK, used to create, configure, and manage AWS services such as EC2, IAM, S3, SES, SQS, and KMS. OpenStack is also written in Python.
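
For example, a minimal boto3 sketch, assuming AWS credentials and a region are already configured, might list EC2 instance IDs like this:

import boto3

# Assumes AWS credentials are available in the environment or config files.
ec2 = boto3.client("ec2", region_name="us-east-1")

reservations = ec2.describe_instances()["Reservations"]
instance_ids = [
    instance["InstanceId"]
    for reservation in reservations
    for instance in reservation["Instances"]
]
print(instance_ids)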

Python for managers/business people: Non-technical people can learn beginner-level Python to organize and analyze large amounts of data using the pandas data analysis library. This helps them make meaningful, data-driven decisions.
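
A tiny pandas sketch of that kind of analysis, with a hypothetical file and columns, could look like this:

import pandas as pd

# "sales.csv" and its columns (region, revenue) are placeholders.
df = pd.read_csv("sales.csv")
summary = df.groupby("region")["revenue"].sum().sort_values(ascending=False)
print(summary)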

Researchers: Members of the research community can use the NumPy module for scientific computing. Beyond that, statistical learning can be explored using the scikit-learn library.

Cybersecurity analyst/security consultant: Python can be used to write penetration testing, information gathering, and automation scripts. Those familiar with Kali Linux will know that many of its scripts are written in Python.

Data science engineer: Python is well known for its data science packages; scikit-learn and matplotlib are among the most useful. Python also supports various big data tools. With artificial intelligence widely predicted to shape the future of technology, Python is the go-to language for a career in data science.

Internet of Things (IoT) developer: Python is emerging as a language of choice for IoT developers.

The 2019 IoT developer survey by the Eclipse IoT working group lists Python, C, and Java among the preferred languages in IoT environments.

Source: 2019 IoT developer survey by Eclipse IoT working group

Python is also listed on the PopularitY of Programming Language (PYPL) index as a popular programming language.

[Chart: Python's ranking on the PYPL index]

Source: PopularitY of Programming Language (PYPL) 

If you browse Python openings on any of the job sites, you will find many more roles where Python knowledge is a must.

Python is widely used in different application domains

Python is used in many application domains. Here is a list of a few.

Web and internet development

Python offers many choices for web development:

  • Frameworks such as Django and Pyramid.
  • Micro-frameworks such as Flask and Bottle (see the sketch after this list).
  • Advanced content management systems such as Plone and django CMS.
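
As a quick illustration of how little code a micro-framework needs, here is a minimal, hypothetical Flask application:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask!"

if __name__ == "__main__":
    app.run(port=5000)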

Python's standard library supports many internet protocols (a short example follows this list):

  • HTML, XML, and JSON
  • Email processing.
  • Support for FTP, IMAP, and other internet protocols.
  • Easy-to-use socket interface.
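
A short sketch using only the standard library, fetching and parsing a JSON document from a placeholder URL:

import json
import urllib.request

# The URL stands in for any endpoint that returns JSON.
with urllib.request.urlopen("https://example.com/data.json") as response:
    data = json.load(response)

print(data)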

Scientific and numeric

Python is widely used in scientific and numeric computing.

Education

Python is a superb language for teaching programming, both at the introductory level and in more advanced courses. Schools and colleges have started to offer Python as a beginner-level programming course, so there are many openings in teaching as well.

Desktop GUIs

The Tk GUI library is included with most binary distributions of Python.

Software development (DevOps)

Python is often used as a support language for software developers: for build control and management, for testing, and in many other ways.

Business Applications

Python is also used to build ERP and e-commerce systems.

As per the TIOBE Index for December 2019, Python is among the top three languages to consider for building new systems.

Python's rate of adoption and usage is growing fast across industries, including manufacturing, academia, electronics, IoT, finance, energy, tech, and government.

To conclude, I consider Python the best bet for advancing your career and future. What about you? I will meet you in another post to explain the steps involved in installing Python and writing simple Python code.

References

Applications for Python ( https://www.python.org/about/apps/ )

The post Python: What Is It and Why Is It so Popular? appeared first on Cloud Academy.

Google open-sources the tools needed to make 2FA security keys

The content below is taken from the original ( Google open-sources the tools needed to make 2FA security keys), to continue reading please visit the site. Remember to respect the Author & Copyright.

Security keys are designed to make logging in to devices simpler and more secure, but not everyone has access to them, or the inclination to use them. Until now. Today, Google has launched an open source project that will help hobbyists and hardware…

Admin Essentials: Improving Chrome Browser extension management through permissions

The content below is taken from the original ( Admin Essentials: Improving Chrome Browser extension management through permissions), to continue reading please visit the site. Remember to respect the Author & Copyright.

IT teams often look for best practices for managing extensions so they can avoid exposing company IP, opening security holes, or compromising end-user productivity. Fortunately, there are several options available to admins for extension management in Chrome. I'm going to cover one of them in more detail in this Admin Essentials post.

Several configuration options are available to enterprises that want to manage extensions. Many are familiar with the more traditional route of blacklisting and whitelisting, but a second approach offers more granular control: instead of managing the extensions themselves, you can block or allow them by their behavior or permissions.

What are extension permissions? 

Permissions are the rights an extension needs on a machine or website in order to function as intended. There are device permissions, which grant access to the machine, and site permissions, which grant access to websites; some extensions require both.

Permissions are declared by the extension developer in the manifest file. Here is an example:

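The screenshot from the original post is not reproduced here, but a representative (hypothetical) permissions declaration in an extension's manifest.json looks something like this:

{
  "name": "Example Extension",
  "version": "1.0",
  "manifest_version": 2,
  "permissions": [
    "storage",
    "tabs",
    "https://*.example.com/*"
  ]
}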

Take a look at this list of the various permissions to help you determine what is or isn't acceptable to run on your organization's devices. As a first step toward discovering which extensions are live in your environment, consider Chrome Browser Cloud Management. It can report which extensions are present on your enrolled machines as well as what permissions they are using. Here is an example of that view in Chrome Browser Cloud Management:

[Screenshot: extension and permission report in Chrome Browser Cloud Management]

If you’re a G Suite customer, you already have this functionality in the Device Management section of the Admin console.  

Once you’ve done a discovery exercise to learn which extensions are installed on your end users’ machines, and created a baseline of what permissions you will (or won’t) allow in your environment, you can centrally allow or block extensions by those permissions. With this approach, you don’t have to maintain super long whitelists or blacklists. If you couple this with allowing/blocking site permissions, which allows you to designate specific sites where extensions can or cannot run, you add another layer of protection. This approach of blocking runtime hosts makes it so you can block extensions from running on your most sensitive sites while allowing them to run on any other site. 

For a more in-depth look at managing extensions, check out this guide (authored by yours truly), which covers all of the different ways of managing extensions. Or watch this video of me and my Google Security colleague, Nick Peterson, presenting at Next 2019 on how to get this done. Enjoy, and happy browsing!

Announcing the Cloudflare Access App Launch

The content below is taken from the original ( Announcing the Cloudflare Access App Launch), to continue reading please visit the site. Remember to respect the Author & Copyright.

Every person joining your team has the same question on Day One: how do I find and connect to the applications I need to do my job?

Since launch, Cloudflare Access has helped improve how users connect to those applications. When you protect an application with Access, users never have to connect to a private network and never have to deal with a clunky VPN client. Instead, they reach on-premise apps as if they were SaaS tools. Behind the scenes, Access evaluates and logs every request to those apps for identity, giving administrators more visibility and security than a traditional VPN.

Administrators need about an hour to deploy Access. End user logins take about 20 ms, and that response time is consistent globally. Unlike VPN appliances, Access runs in every data center in Cloudflare’s network in 200 cities around the world. When Access works well, it should be easy for administrators and invisible to the end user.

However, users still need to locate the applications behind Access, and for internally managed applications, traditional dashboards require constant upkeep. As organizations grow, that roster of links keeps expanding. Department leads and IT administrators can create and publish manual lists, but those become a chore to maintain. Teams also need to publish custom versions for contractors or partners that make only certain tools visible.

Starting today, teams can use Cloudflare Access to solve that challenge. We’re excited to announce the first feature in Access built specifically for end users: the Access App Launch portal.

The Access App Launch is a dashboard for all the applications protected by Access. Once enabled, end users can log in and connect to every app behind Access with a single click.

How does it work?

When administrators secure an application with Access, any request to the hostname of that application stops at Cloudflare’s network first. Once there, Cloudflare Access checks the request against the list of users who have permission to reach the application.

To check identity, Access relies on the identity provider that the team already uses. Access integrates with providers like OneLogin, Okta, AzureAD, G Suite and others to determine who a user is. If the user has not logged in yet, Access will prompt them to do so at the identity provider configured.

When the user logs in, they are redirected through a subdomain unique to each Access account. Access assigns that subdomain based on a hostname already active in the account. For example, an account with the hostname “widgetcorp.tech” will be assigned “widgetcorp.cloudflareaccess.com”.

The Access App Launch uses the unique subdomain assigned to each Access account. Now, when users visit that URL directly, Cloudflare Access checks their identity and displays only the applications that the user has permission to reach. When a user clicks on an application, they are redirected to the application behind it. Since they are already authenticated, they do not need to login again.

In the background, the Access App Launch decodes and validates the token stored in the cookie on the account’s subdomain.
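
If you validate such a token yourself in an application behind Access, a hedged sketch with PyJWT looks roughly like the following; the team domain, audience tag, and cookie handling are placeholders for your own configuration, not Cloudflare's internal implementation:

import jwt  # PyJWT 2.x, installed with the "crypto" extra for RS256 support

TEAM_DOMAIN = "https://widgetcorp.cloudflareaccess.com"   # your Access subdomain
POLICY_AUD = "your-application-aud-tag"                   # the application's AUD tag

def validate_access_token(token: str) -> dict:
    # Fetch the public signing keys published for this Access account.
    jwks_client = jwt.PyJWKClient(f"{TEAM_DOMAIN}/cdn-cgi/access/certs")
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    # Raises an exception if the signature, audience, or expiry check fails.
    return jwt.decode(token, signing_key.key, algorithms=["RS256"], audience=POLICY_AUD)

# In practice, "token" comes from the CF_Authorization cookie on the request.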

How is it configured?

The Access App Launch can be configured in the Cloudflare dashboard in three steps. First, navigate to the Access tab in the dashboard. Next, enable the feature in the “App Launch Portal” card. Finally, define who should be able to use the Access App Launch in the modal that appears and click “Save”. Permissions to use the Access App Launch portal do not impact existing Access policies for who can reach protected applications.

Administrators do not need to manually configure each application that appears in the portal. Access App Launch uses the policies already created in the account to generate a page unique to each individual user, automatically.

Defense-in-depth against phishing attacks

Phishing attacks attempt to trick users by masquerading as a legitimate website. In the case of business users, team members think they are visiting an authentic application. Instead, an attacker can present a spoofed version of the application at a URL that looks like the real thing.

Take “example.com” vs “examрle.com” – they look identical, but one uses the Cyrillic “р” and becomes an entirely different hostname. If an attacker can lure a user to visit “examрle.com”, and make the site look like the real thing, that user could accidentally leak credentials or information.

To be successful, the attacker needs to get the victim to visit that fraudulent URL. That frequently happens via email from untrusted senders.

The Access App Launch can help prevent these attacks from targeting internal tools. Teams can instruct users to only navigate to internal applications through the Access App Launch dashboard. When users select a tile in the page, Access will send users to that application using the organization’s SSO.

Cloudflare Gateway can take it one step further. Gateway's DNS resolver filtering can help defend against phishing attacks that use lookalike sites for legitimate applications that do not sit behind Access. To learn more about adding Gateway in conjunction with Access, sign up to join the beta here.

What’s next?

As part of last week’s announcement of Cloudflare for Teams, the Access App Launch is now available to all Access customers today. You can get started with instructions here.

Interested in learning more about Cloudflare for Teams? Read more about the announcement and features here.

Open Laptop Soon to be Open For Business

The content below is taken from the original ( Open Laptop Soon to be Open For Business), to continue reading please visit the site. Remember to respect the Author & Copyright.

How better to work on Open Source projects than to use a Libre computing device? But that's a hard goal to accomplish. If you're using a desktop computer, Libre software is easily achievable, though keeping your entire software stack free of closed source binary blobs might require a little extra work. If you want a laptop, your options are few indeed. Lucky for us, there may soon be another device in the mix, because [Lukas Hartmann] has just about finalized the MNT Reform.

Since we started eagerly watching the Reform a couple of years ago the hardware world has kept turning, and the Reform has improved accordingly. The i.MX6 series CPU is looking a little peaky now that it's approaching end of life, and the device has switched to a considerably more capable, but no less free, i.MX8M paired with 4 GB of DDR4 on a SODIMM-shaped System-On-Module. This particular SOM is notable because the manufacturer freely provides the module schematics, making it easy to upgrade or replace in the future. The screen has been bumped up to a 12.5″ 1080p panel, and steps have been taken to make sure it can be driven without blobs in the graphics pipeline.

If you're worried that the chassis of the laptop may have been left to wither while the goodies inside got all the attention, there's no reason for concern, as both have seen substantial improvement. The keyboard now uses Kailh Choc ultra low profile mechanical switches for great feel in a small package, while the body itself is milled out of aluminum in five pieces (and printable as well, if you want to go that route). All in all, the Reform represents a heroic amount of work and we're extremely impressed with how far the design has come.

Of course, if any of the above piqued your interest, full electrical, mechanical, and software sources (spread across a few repos) are available for your perusal; follow the links in the blog post for pointers. We're thrilled to see how production-ready the Reform is looking and can't wait to hear user reports as units make their way into the wild!

Via [Brad Linder] at Liliputing.

UK begins testing unsupervised autonomous transport pods

The content below is taken from the original ( UK begins testing unsupervised autonomous transport pods), to continue reading please visit the site. Remember to respect the Author & Copyright.

Shoppers at a UK mall have the opportunity to try out autonomous transport pods this week which — in a UK first — operate entirely without supervision. The driverless pods are being tested at the Cribbs Causeway mall in Gloucestershire, and run bet…

Setting up passwordless Linux logins using public/private keys

The content below is taken from the original ( Setting up passwordless Linux logins using public/private keys), to continue reading please visit the site. Remember to respect the Author & Copyright.

Setting up an account on a Linux system that allows you to log in or run commands remotely without a password isn’t all that hard, but there are some tedious details that you need to get right if you want it to work. In this post, we’re going to run through the process and then show a script that can help manage the details.

Once set up, passwordless access is especially useful if you want to run ssh commands within a script, especially one that you might want to schedule to run automatically.

It’s important to note that you do not have to be using the same user account on both systems. In fact, you can use your public key for a number of accounts on a system or for different accounts on multiple systems.
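
As a hedged sketch of the usual manual steps (not the script from the article), here is the process driven from Python via subprocess; "user@host" is a placeholder for the remote account:

import subprocess
from pathlib import Path

key_path = Path.home() / ".ssh" / "id_ed25519"

# 1. Generate a key pair with no passphrase if one does not already exist.
if not key_path.exists():
    subprocess.run(
        ["ssh-keygen", "-t", "ed25519", "-N", "", "-f", str(key_path)],
        check=True,
    )

# 2. Copy the public key into the remote account's authorized_keys file.
subprocess.run(["ssh-copy-id", "-i", f"{key_path}.pub", "user@host"], check=True)

# 3. Confirm a remote command now runs without a password prompt.
subprocess.run(["ssh", "user@host", "uptime"], check=True)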


Introducing Google Cloud’s Secret Manager

The content below is taken from the original ( Introducing Google Cloud’s Secret Manager), to continue reading please visit the site. Remember to respect the Author & Copyright.

Many applications require credentials to connect to a database, API keys to invoke a service, or certificates for authentication. Managing and securing access to these secrets is often complicated by secret sprawl, poor visibility, or lack of integrations.

Secret Manager is a new Google Cloud service that provides a secure and convenient method for storing API keys, passwords, certificates, and other sensitive data. Secret Manager provides a central place and single source of truth to manage, access, and audit secrets across Google Cloud. 

Secret Manager offers many important features:

  • Global names and replication: Secrets are project-global resources. You can choose between automatic and user-managed replication policies, so you control where your secret data is stored.

  • First-class versioning: Secret data is immutable and most operations take place on secret versions. With Secret Manager, you can pin a secret to specific versions like 42 or floating aliases like latest.

  • Principles of least privilege: Only project owners have permissions to access secrets. Other roles must explicitly be granted permissions through Cloud IAM.

  • Audit logging: With Cloud Audit Logging enabled, every interaction with Secret Manager generates an audit entry. You can ingest these logs into anomaly detection systems to spot abnormal access patterns and alert on possible security breaches.  

  • Strong encryption guarantees: Data is encrypted in transit with TLS and at rest with AES-256-bit encryption keys. Support for customer-managed encryption keys (CMEK) is coming soon.

  • VPC Service Controls: Enable context-aware access to Secret Manager from hybrid environments with VPC Service Controls.

The Secret Manager beta is available to all Google Cloud customers today. To get started, check out the Secret Manager Quickstarts. Let’s take a deeper dive into some of Secret Manager’s functionality.

Global names and replication

Early customer feedback identified that regionalization is often a pain point in existing secrets management tools, even though credentials like API keys or certificates rarely differ across cloud regions. For this reason, secret names are global within their project.

While secret names are global, the secret data is regional. Some enterprises want full control over the regions in which their secrets are stored, while others do not have a preference. Secret Manager addresses both of these customer requirements and preferences with replication policies.

  • Automatic replication: The simplest replication policy is to let Google choose the regions where Secret Manager secrets should be replicated.

  • User-managed replication: If given a user-managed replication policy, Secret Manager replicates secret data into all the user-supplied locations. You don’t need to install any additional software or run additional services—Google handles data replication to your specified regions. Customers who want more control over the regions where their secret data is stored should choose this replication strategy.

First-class versioning

Versioning is a core tenet of reliable systems to support gradual rollout, emergency rollback, and auditing. Secret Manager automatically versions secret data using secret versions, and most operations—like access, destroy, disable, and enable—take place on a secret version.

Production deployments should always be pinned to a specific secret version. Updating a secret should be treated in the same way as deploying a new version of the application. Rapid iteration environments like development and staging, on the other hand, can use Secret Manager’s latest alias, which always returns the most recent version of the secret.
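
As a concrete illustration, here is a minimal sketch using the google-cloud-secret-manager Python client to read a pinned version (project, secret, and version IDs are placeholders; early beta releases of the library used a slightly different call signature):

from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()

# Production: pin an explicit version; dev or staging might end the
# name with ".../versions/latest" instead.
name = "projects/my-project/secrets/db-password/versions/42"
response = client.access_secret_version(request={"name": name})
secret_value = response.payload.data.decode("UTF-8")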

Integrations

In addition to the Secret Manager API and client libraries, you can also use the Cloud SDK to create secrets:
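
The command screenshot from the original post is not reproduced here; a representative invocation with current SDK releases looks like the following, where "my-api-key" and the data file are placeholders (during the beta the command group lived under gcloud beta secrets):

gcloud secrets create my-api-key --replication-policy="automatic" --data-file=./api-key.txt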

and to access secret versions:
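
Again as a representative invocation rather than the original screenshot:

gcloud secrets versions access latest --secret="my-api-key"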

Discovering secrets

As mentioned above, Secret Manager can store a variety of secrets. You can use Cloud DLP to help find secrets using infoType detectors for credentials and secrets. The following command will search all files in a source directory and produce a report of possible secrets to migrate to Secret Manager:

If you currently store secrets in a Cloud Storage bucket, you can configure a DLP job to scan your bucket in the Cloud Console. 

Over time, native Secret Manager integrations will become available in other Google Cloud products and services.

What about Berglas?

Berglas is an open source project for managing secrets on Google Cloud. You can continue to use Berglas as-is and, beginning with v0.5.0, you can use it to create and access secrets directly from Secret Manager using the sm:// prefix.

If you want to move your secrets from Berglas into Secret Manager, the berglas migrate command provides a one-time automated migration.

Accelerating security

Security is central to modern software development, and we’re excited to help you make your environment more secure by adding secrets management to our existing Google Cloud security product portfolio. With Secret Manager, you can easily manage, audit, and access secrets like API keys and credentials across Google Cloud. 

To learn more, check out the Secret Manager documentation and Secret Manager pricing pages.

Building a Low-Tech Website for Energy Efficiency

The content below is taken from the original ( Building a Low-Tech Website for Energy Efficiency), to continue reading please visit the site. Remember to respect the Author & Copyright.

In an age of flashy jQuery scripts and bulky JavaScript front-end frameworks, loading a “lite” website is like a breath of fresh air. When most of us think of lightweight sites, though, our mind goes to old-style pure HTML and CSS sites or the intentionally barebones websites of developers and academics. Low-tech Magazine, an intentionally low-tech and solar-powered website, manages to incorporate both modern web aesthetics and low-tech efficiency in one go.

Rather than hosting the site in a data center, even one running on renewable power, they self-host the site and run it on solar power, which causes the site to occasionally go offline. Their model contrasts with the cloud computing model, which allows more energy efficiency on the user side while increasing energy expense at data centers. Each page on the blog declares its size, with an average page weight of 0.77 MB, less than half the average page size of the top 500,000 most popular blogs in June 2018.

Some of the major choices that have limited the size of the website include building a static site as opposed to a dynamic one, "dithering" images, sparing a logo, staying with default typefaces, and eliminating all third-party tracking, advertising services, and cookies. Their GitHub repository details the front-end decisions, including using unicode characters for the site's logo rather than embedding an SVG. While the latter may be scalable and lightweight as a format, it requires distribution to the end user, which can involve a zipped package with eps, ai, png, and jpeg files to ensure the user is able to load the image.

As for the image dithering, the technique lets the website keep its characteristic appearance while greatly reducing image size, at the cost of some image quality. Luckily for Low-tech Magazine, the theme of the magazine allows for black and white images, which suit dithering well. Image sprites are also helpful for minimizing server requests by combining multiple small images into one; storage-wise, the combined image takes up less memory and only loads once.
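
As a rough illustration of the dithering idea (not necessarily the exact tooling Low-tech Magazine uses), Pillow's 1-bit conversion applies Floyd-Steinberg dithering by default; the file names are placeholders:

from PIL import Image

img = Image.open("photo.jpg").convert("L")   # grayscale first
dithered = img.convert("1")                  # 1-bit mode, Floyd-Steinberg dithering by default
dithered.save("photo-dithered.png")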

There are also a few extra features that emphasize the website's infrastructure. The background color indicates the charge level of the solar-powered battery behind the website's server, while other stats about the server's location (time, sky conditions, forecast) help make the site's likely availability in the near future more visible. Who knows: with growing awareness of environmental impact, this may become a new trend in web design.

ZeroPhone is an open-source smartphone based on the Raspberry Pi Zero that can be assembled for $50 in parts. It is Linux-powered, with UI software written in Python, making it easily modifiable, and it doesn't prohibit you from changing the way it works.

The content below is taken from the original ( in /r/ raspberry_pi), to continue reading please visit the site. Remember to respect the Author & Copyright.

http://bit.ly/2T5WSpD

Arduino launches a new modular platform for IoT development

The content below is taken from the original ( Arduino launches a new modular platform for IoT development), to continue reading please visit the site. Remember to respect the Author & Copyright.

Arduino, the open-source hardware platform, today announced the launch of a new low-code platform and modular hardware system for IoT development. The idea here is to give small and medium businesses the tools to develop IoT solutions without having to invest in specialized engineering resources.

The new hardware, dubbed the Arduino Portenta H7, features everything you'd need to get started with building an IoT hardware platform, including a crypto-authentication chip and communications modules for WiFi, Bluetooth Low Energy and LTE, as well as Narrowband IoT. Powered by 32-bit Arm microcontrollers, either the Cortex-M7 or M4, these low-power modules are meant for designing industrial applications, as well as edge processing solutions and robotics applications. It'll run Arm's Mbed OS and support Arduino code, as well as Python and JavaScript applications.

“SMBs with industrial requirements require simplified development through secure development tools, software and hardware to economically realize their IoT use cases,” said Charlene Marini, the VP of strategy for Arm’s IoT Services Group. “The combination of Mbed OS with Cortex-M IP in the new Arduino Portenta Family will enable Arduino’s millions of developers to securely and easily develop and deploy IoT devices from prototypes through to production.”

The new H7 module is now available to beta testers, with general availability slated for February 2020.

DMCA-Locked Tractors Make Decades-Old Machines the New Hotness

The content below is taken from the original ( DMCA-Locked Tractors Make Decades-Old Machines the New Hotness), to continue reading please visit the site. Remember to respect the Author & Copyright.

It's fair to say that the hearts and minds of Hackaday readers lie closer to the technology centres of Shenzhen or Silicon Valley than they do to the soybean fields of Minnesota. The common link is the desire to actually own the hardware we buy. Among those working the soil there has been a surge in demand for, and consequently a huge price rise in, 40-year-old tractors.

Second-hand farm machinery prices have made their way to the pages of Hackaday due to an ongoing battle between farmers and agricultural machinery manufacturers over who has the right to repair and maintain their tractors. The industry giant John Deere in particular uses the DMCA and end-user licensing agreements to keep all maintenance in the hands of their very expensive agents. It's a battle we've reported on before, and it continues to play out across the farmland of America, this time in the secondary market. Older models continue to deliver the freedom for owners to make repairs themselves, and the relative simplicity of the machines tends to make those repairs less costly overall.

Tractors built in the 1970s and 80s continue to be reliable and have the added perk of predating the digital shackles of the modern era. Aged-but-maintainable machinery is now the sweetheart of farm sales. It confirms a trend I’ve heard of anecdotally for a few years now, that relatively new tractors can be worth less than their older DMCA-free stablemates, and it’s something that I hope will also be noticed in the boardrooms. Perhaps this consumer rebellion can succeed against the DMCA where decades of activism and lobbying have evidently failed.

They just don’t build ’em like they used to.


[Image Source: John Deere 2850 by Raf24 CC-BY-SA 3.0]

[Via Hacker News]