FLOSS Project Planets

PyCharm: PyCharm EAP#3 is out!

Planet Python - Thu, 2020-07-02 09:41

PyCharm EAP #3 is out and it’s almost releasing time!! If you are like us you are also looking forward to the end of the month! We have been talking about new features for the last month and today we will take a deeper look into two very exciting ones. For the full list, check our release notes.

Version Control

As we mentioned before, PyCharm 2020.2 will come with full support for GitHub Pull Requests!

What does it mean? It means that you’ll be able to accomplish pretty much all the needed tasks within the entire pull request workflow without leaving your IDE! Assign, manage, and merge pull requests, view the timeline and in-line comments, submit comments and reviews, and accept changes. All from within the PyCharm UI!

Let’s take a deeper look…

New pull request dedicated view

PyCharm now has one dedicated view that shows all the information you need to analyze one or more pull requests. You can simply click on any listed PR and access its information including messages, branches, author, assignee, changed files, commits, timeline, comments, and more.

View the results of pre-commit checks in the Timeline

At the bottom of the timeline, you'll find a panel showing the results of your pre-commit checks as they appear, helping you review your pull requests and fix issues.

Start, request and submit reviews from within PyCharm

Reviews are a very important step in this flow, and in the new UI you have everything you need to perform tasks in every stage of your reviewing process. Add/remove comments, use the dedicated window to check differences between files, resolve issues, and do a lot more without leaving PyCharm.

Merge your pull requests from within PyCharm

Up to and including PyCharm 2020.1, merging a pull request into master was not straightforward: it was possible with some workarounds, but the process was clumsy. That changes in 2020.2. Now you can easily merge your PR, as well as rebase & merge or squash & merge.

We are excited about the new PR flow, and we will bring more information about what else is supported in the future. For now, let’s talk about another very nice new feature that we are very proud of.

Debug failed tests

Talking about coding without talking about testing is not a good idea. Even though a lot of Python developers don't write tests regularly, we believe testing should be an important part of every professional developer's workflow.

When tests are passing it's all happiness, but what happens when they fail? Well, for those of you who write tests and run them under the debugger, we have very nice news! PyCharm can now automatically stop on an exception breakpoint in your test without you needing to set it explicitly beforehand.

This means that when a test fails while you are running it under the debugger, PyCharm will notice, stop the execution, and show you exactly where the problem is happening, giving you a shorter feedback loop for debugging failed tests. Check it out:
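To get a feel for that feedback loop, here is a minimal, hypothetical pytest example (the file name, function, and bug are made up purely for illustration). Running it under PyCharm's debugger, the failing assertion is where you would expect execution to pause, without setting any breakpoint yourself:

# test_cart.py -- a deliberately broken example to try the feature with
def add_item(cart, item):
    # Bug for demonstration purposes: the item is never actually added.
    return cart


def test_add_item():
    cart = add_item([], 'book')
    # When this assertion fails under the debugger, PyCharm should pause
    # right here and let you inspect 'cart' in place.
    assert cart == ['book']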

Try it now!

Download this EAP from our website. Alternatively, you can use the JetBrains Toolbox App to stay up to date throughout the entire EAP. If you’re on Ubuntu 16.04 or later, you can use snap to get PyCharm EAP and stay up to date. You can find the installation instructions on our website.

Categories: FLOSS Project Planets

Srijan Technologies: What’s New in Drupal 9 and Why Do You Need To Upgrade

Planet Drupal - Thu, 2020-07-02 09:15

Drupal 9 was launched on June 3, 2020. Given this, enterprises will need to upgrade to it sooner or later to get complete functionality and retain the ability to receive security updates within the bi-yearly release cycles.

Categories: FLOSS Project Planets

Philippe Normand: Web-augmented graphics overlay broadcasting with WPE and GStreamer

Planet Python - Thu, 2020-07-02 09:00

Graphics overlays are everywhere nowadays in the live video broadcasting industry. In this post I introduce a new demo relying on GStreamer and WPEWebKit to deliver low-latency web-augmented video broadcasts.

Readers of this blog might remember a few posts about WPEWebKit and a GStreamer element we at Igalia worked on …

Categories: FLOSS Project Planets

OpenSense Labs: Reasons Why Drupal Is The Best Fit For Your E-Commerce Website

Planet Drupal - Thu, 2020-07-02 08:08


Evolving technologies and marketing strategies have changed the way shopping is experienced. With time, the charm and challenges of eCommerce have increased. How do you plan to overcome these challenges?

As an online brand, you have your challenges when eyeing expansion and opportunities. To achieve the right numbers it is important to engage with customers and sell quality products, all through the right platform.

Talking about the right platform, you can always trust Drupal. Drupal is a content management system with hundreds of modules and themes ready to drive your business online. Drupal adds the magic that your website needs.

The State Of Digital Commerce

Drupal provides amazing features for your eCommerce website, but before jumping to that, let’s take a glance at some stats and understand where the eCommerce industry is heading.

According to Statista, online sales reached $2.5 trillion in the global eCommerce market at the end of 2019, representing a 14% share of the global retail market. The same data predicts that by the end of 2020, global eCommerce sales will reach $4.2 trillion and that share will rise to 16%.

Source: Statista

The way people shop online has changed, and keeping up with trends is important for growth in the 2020 retail landscape. The future looks bright for eCommerce.

A few trends stand out:

  • Personalization is key if you want to earn the trust of your customers and give them an experience that makes them come back to your website again.
  • Contactless payment has become a lasting shopping trend. People prefer paying online instead of cash on delivery, so providing diverse payment options is important to keep your customers’ experience a happy one.
  • Subscriptions are an ongoing trend that has helped brands win a lot of long-term customers.
  • Chatbots have been a great help in enhancing the user experience. Experts have predicted that 80% of businesses will be using chatbots by the end of 2020.
  • Voice search has become popular over time; 26.1% of consumers made a purchase on a smart speaker in 2019.

To leverage all these ongoing trends, and drive sales of your product online, you need a robust and future-ready eCommerce website and Drupal is ready to help!

Why Using Drupal Brings You A Lot Of Benefits

One of the most comprehensive open-source CMSes available, Drupal is a perfect fit for eCommerce businesses. It gives you a flexible way of modeling your content, along with integrated marketing, payment, and fulfillment tools, which helps bring in a bigger audience. All of Drupal's features are accessible to merchants of every size.

There are so many brands out there using Drupal for their online business. Here are a few of them:

Honda Brazil

The website of Honda Brazil, built using Drupal, gives the users an engaging experience with easily accessible information.


Timex

With the help of Drupal, Timex, a famous American Watchmaker, is able to provide its customers a seamless, engaging, and consistent online experience.


Lush

Lush, with its website powered by Drupal, has seen dramatic spikes in both online traffic and sales.


Puma

Puma, one of the leading sports brands, has its website built on top of Drupal.


Why do such great brands choose Drupal for their online business? Let’s look at the reasons that show why Drupal is the best fit for your eCommerce website:

Commerce Kickstart

Commerce Kickstart is a Drupal distribution that offers the quickest way to get up and running with eCommerce features. If you are launching an online store, it is a great resource for getting from zero to a production-ready environment.

Commerce Kickstart is made for modern PHP lovers and is available only for Drupal 7. The categories in this distribution include shipping and payment providers, data migration, search tools, product catalogs, etc.

Drupal Commerce

Drupal Commerce is a dedicated solution for your eCommerce needs. It is essentially a set of modules for Drupal that enables a host of eCommerce features. Being a framework itself, Drupal Commerce focuses on the solutions that can be built with it. In simpler words, Drupal Commerce brings basic functions like orders, product details, cart items, and payment options to your website.

There are many features of Drupal Commerce that are further extended with the help of modules. Here are a few of them:

  • Modules like Commerce Stock and Commerce Inventory make inventory management easy. 
  • Commerce Shipping is a Drupal Commerce contributed module that, by making use of the customer profile, handles cases where the shipping address differs from the billing address.
Essential modules for an e-commerce site

There are plenty of Drupal modules that can be added to your eCommerce site and will help you in building intuitive and powerful websites. Here are some of the modules provided by Drupal for eCommerce:

  • Commerce Shipping takes care of the shipping rate calculation system for Drupal Commerce. It is used with the combination of other shipping method modules like Commerce Flat Rate, Commerce UPS, etc.
  • The Currency module handles currency conversion and currency information for your website, and takes care of displaying product prices.
  • Commerce Stripe makes sure that the customers can pay securely without having to leave your website.
Essential themes for an e-commerce site

The first thing that attracts a user when they visit your website is its appearance. Drupal provides amazing themes for eCommerce websites that come in handy.

  • eStore is Bootstrap based and easy to install and is designed in a way that it solves any eCommerce website’s needs.
  • Flexi Cart is a global theme that makes sure that your products sell fast and easily online.
  • Belgrade is a Drupal Commerce template specially designed to create business websites.
  • SShop is a Drupal 8 theme that ships with built-in support for Drupal Commerce.
Content-Driven Commerce

Content marketing is one of the most popular approaches and reliably delivers strong SEO results. A good story behind your brand will definitely drive sales for you. If the content on your website is engaging, users will keep coming back to it.

The stories can be anything that relates to your product. For example, if you are selling lipsticks, you can write an article that says which shade is the perfect one for your different colored outfits.

It is really important to decide the kind of content you want to post on your website. Your content can include blog posts, ebooks, guides, tips, hacks, etc.

Drupal covers the need for content-driven experiences. Whatever the case may be, content is at the core of Drupal, which offers mobile editing, in-place authoring, easy content authoring, content revisioning and workflows, and modules for multimedia content.

Headless Commerce

Headless commerce, which acts as a great catalyst for scaling up content-driven commerce, gives you immense flexibility to create a great shopping experience for users, and it is future-focused and stays relevant. In this architecture, a JavaScript front end communicates with the Drupal backend via a REST API; in other words, decoupled Drupal separates the presentation layer from the eCommerce backend.

Headless Drupal commerce comes with a lot of benefits including high speed, interactive features, and freedom in front-end changes. These features provide a great shopping experience to the customers online by providing a content-rich experience.

Read our article on the implementation of Decoupled Drupal Commerce and React Native to learn more about the benefits of a headless commerce approach.
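As a rough, back-of-the-envelope illustration of that separation, here is a minimal sketch of a decoupled client fetching product content from Drupal over HTTP. Everything in it is an assumption for illustration only: it presumes a Drupal site at https://example.com with the core JSON:API module enabled and a hypothetical product content type, and in a real headless build the same request would typically be issued from a JavaScript front end.

import requests

# Hypothetical decoupled client: the front end only talks to Drupal over HTTP.
DRUPAL_BASE = 'https://example.com'


def fetch_product_titles():
    # JSON:API exposes content entities at /jsonapi/{entity_type}/{bundle}.
    response = requests.get(
        '{}/jsonapi/node/product'.format(DRUPAL_BASE),
        headers={'Accept': 'application/vnd.api+json'},
    )
    response.raise_for_status()
    # Each resource object carries its fields under 'attributes'.
    return [item['attributes']['title'] for item in response.json()['data']]


if __name__ == '__main__':
    for title in fetch_product_titles():
        print(title)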

Performance

It is important to take into account the speed of your website. A site that loads within five seconds is reported to see around 70% longer average sessions, while a slow website will undermine your efforts and investments: 79% of shoppers who run into slow-loading pages say they don't return to those websites. These bounces have a direct effect on revenue generation.

To maintain a top-notch web performance, Drupal comes packed with plenty of offerings. Some of them include:

  • Blazy module helps the pages load faster and saves data usage if the user is not using the whole page.
  • The CDN module helps integrate a Content Delivery Network with your website, reducing page load time by delivering web page components faster.
  • In case your server hardware is reaching its limits, Drupal gives you the option of upgrading the server hardware as a fast way of scaling.
Mobile Ready

If your website runs smoothly on mobile devices, it will be able to run better on other devices too! Creating user scenarios will help you understand what kind of content the user will appreciate on their mobile. This approach will help you design the important elements required for your website.

Mobile compatibility has become an irreplaceable feature for any eCommerce site. In today’s world, everything needs to be mobile-ready. Drupal’s websites not only wow the clients by their looks but also by their mobile responsive design. Drupal websites are easily accessible on mobile and tablets.

Multilingual

The world is on the internet, and with so many people using similar platforms and so many brands expanding globally, multilingual websites are the sine qua non! 

China has the highest number of internet users, a massive 772 million. And although most people on the internet prefer English as their language, another 10 languages account for 90% of the top 10 million websites.

Source: Internet World Stats

Drupal is the best choice for your multilingual website. It provides numerous languages to choose from and four core modules specially designed for language and translation support. This has shown great results, including higher conversions, improved SEO, unrivaled translation workflows, and a wider audience. Drupal can also detect a user's preferred language from the IP address, session, browser settings, and so on.

Personalization

Every eCommerce brand wants to make sure that the content created by them leaves a mark on the users’ minds. And it has become a necessity today because there is a lot of competition out there. Hence, personalized content makes the user experience better and helps create trust between you and the customer.

According to an Adobe report on personalization, 92% of the B2B marketers say that personalization is important.

This is the marketing opportunity that no eCommerce business should miss out on. Tapping the different demographics and varied audiences not only improves your market reach but your bottom line as well. 

Following are examples of modules that can aid your web personalization efforts:

  • The Smart Content module provides real-time personalization for anonymous users. It allows site administrators to display different content to anonymous users based on browser conditions.
  • The Acquia Lift Connector module helps organizations deliver personalized content and experiences across all platforms and devices by merging content and customer data into one tool.
SEO

E-commerce websites carry huge amounts of data. While that might be desirable for a consumer, for a marketer it increases the burden of implementing SEO on every page and indexing every product.

Drupal has various modules that help in improving the SEO of your eCommerce website. Some of them are:

  • Pathauto is an SEO module that ensures that the URL of your website is search engine friendly. It converts complex URLs to simpler ones.
  • Metatag module is a multilingual module and controls all the metatags on all the web pages.
  • XML Sitemap module provides you the resilience to exclude or include a few pages on your Sitemap.
Security

With the increase in hacking and security breaches, basic do-it-yourself security measures are no longer sufficient. Security breaches affect your brand image, your market share, and your stock price. According to one report, more than $3.5 billion was lost to cybercrime in 2019.

Drupal has a dedicated team that regularly works on the security side of it. It is frequently tested for issues and bugs. Drupal also provides various security modules for your eCommerce website. Some of them are:

  • The Password policy module provides the password policies that help users to create a strong password. The password entered by the user is not accepted until it meets the constraints set by this module. 
  • The Security Kit module provides various security-hardening options. This helps reduce the risks posed by different web application attacks.
  • Two-factor authentication module is a second step for your security check, where a set of codes is defined for a user to be able to sign in. 

  • The Session Limit module restricts the number of simultaneous sessions per user. For example, if you open a page from your mobile device while also being signed in on your PC or laptop, you can be forced to close one of the sessions.

To Sum Up

The substantial development in the concept of ‘eCommerce’ has kept the online brands on their toes. And this is where Drupal provides its unmatched services for your eCommerce platform. 

Be it building your eCommerce website or migrating to Drupal, we at OpenSense Labs will help you do your job smoothly until you get a desirable finish.

Feel free to contact us at hello@opensenselabs.com to drive sales on your website!

Categories: FLOSS Project Planets

Agaric Collective: Drupal migrations reference: List of configuration options in YAML definition files

Planet Drupal - Thu, 2020-07-02 08:00

In today’s article we are going to provide a reference of all configuration options that can be set in migration definition files. Additional configuration options available for migrations defined as configuration will also be listed. Finally, we present the configuration options for migrations groups.

General configuration keys

The following keys can be set for any Drupal migration.

id key

A required string value. It serves as the identifier for the migration. The value should be a machine name, that is, lowercase letters with underscores instead of spaces. The value is used for creating the mapping and message tables. For example, if the id is ud_migrations, the Migrate API will create the tables migrate_map_ud_migrations and migrate_message_ud_migrations.

label key

A string value. The human-readable label for the migration. The value is used in different interfaces to refer to the migration.

audit key

A boolean value. Defaults to FALSE. It indicates whether the migration is auditable. When set to TRUE, a warning is displayed if entities might be overridden when the migration is executed. For example, when doing an upgrade from a previous version of Drupal, nodes created in the new site before running the automatic upgrade process would be overridden and a warning is logged. The Migrate API checks if the highest destination ID is greater than the highest source ID.

migration_tags key

An array value. It can be set to an optional list of strings representing the tags associated with the migration. They are used by the plugin manager for filtering. For example, you can import or rollback all migrations with the Content tag using the following Drush commands provided by the Migrate Tools module:

$ drush migrate:import --tag='Content'
$ drush migrate:rollback --tag='Content'

source key

A nested array value. This represents the configuration of the source plugin. At a minimum, it contains an id key which indicates which source plugin to use for the migration. Possible values include embedded_data for hardcoded data; csv for CSV files; url for JSON feeds, XML files, and Google Sheets; spreadsheet for Microsoft Excel and LibreOffice Calc files; and many more. Each plugin is configured differently. Refer to our list of configuration options for source plugins to find out what is available for each of them. Additionally, in this section you can define source constants that can later be used in the process pipeline.

process key

A nested array value. This represents the configuration of how source data will be processed and transformed to match the expected destination structure. This section contains a list of entity properties (e.g. nid for a node) and fields (e.g. field_image in the default article content type). Refer to our list of properties for content entities including Commerce related entities to find out which properties can be set depending on your destination (e.g. nodes, users, taxonomy terms, files and images, paragraphs, etc.). For field mappings, you use the machine name of the field as configured in the entity bundle. Some fields have complex structures so you migrate data into specific subfields. Refer to our list of subfields per field type to determine which options are available. When migrating multivalue fields, you might need to set deltas as well. Additionally, you can have pseudofields to store temporary values within the process pipeline.

For each entity property, field, or pseudofield, you can use one or more process plugins to manipulate the data. Many of them are provided by Drupal core while others become available when contributed modules are installed on the site like Migrate Plus and Migrate Process Extra. Throughout the 31 days of migrations series, we provided examples of how many process plugins are used. Most of the work for migrations will be devoted to configuring the right mappings in the process section. Make sure to check our debugging tips in case some values are not migrated properly.

destination key

A nested array value. This represents the configuration of the destination plugin. At a minimum, it contains an id key which indicates which destination plugin to use for the migration. Possible values include entity:node for nodes, entity:user for users, entity:taxonomy_term for taxonomy terms, entity:file for files and images, entity_reference_revisions:paragraph for paragraphs, and many more. Each plugin is configured differently. Refer to our list of configuration options for destination plugins to find out what is available for each of them.

This is an example migration from the ud_migrations_csv_source module used in the article on CSV sources.

id: udm_csv_source_paragraph
label: 'UD dependee paragraph migration for CSV source example'
migration_tags:
  - UD CSV Source
  - UD Example
source:
  plugin: csv
  path: modules/custom/ud_migrations/ud_migrations_csv_source/sources/udm_book_paragraph.csv
  ids: [book_id]
  header_offset: null
  fields:
    - name: book_id
    - name: book_title
    - name: 'Book author'
process:
  field_ud_book_paragraph_title: book_title
  field_ud_book_paragraph_author: 'Book author'
destination:
  plugin: 'entity_reference_revisions:paragraph'
  default_bundle: ud_book_paragraph

migration_dependencies key

A nested array value. The value is used by the Migrate API to make sure the listed migrations are executed in advance of the current one. For example, a node migration might require users to be imported first so you can specify who is the author of the node. Also, it is possible to list optional migrations so that they are only executed if they are present. The following example from the d7_node.yml migration shows how this key can be configured:

migration_dependencies:
  required:
    - d7_user
    - d7_node_type
  optional:
    - d7_field_instance
    - d7_comment_field_instance

To configure the migration dependencies you specify required and optional subkeys whose values are an array of migration IDs. If no dependencies are needed, you can omit this key. Alternatively, you can set either required or optional dependencies without having to specify both keys. As of Drupal 8.8 an InvalidPluginDefinitionException will be thrown if the migration_dependencies key is incorrectly formatted.

class key

A string value. If set, it should point to the class used as the migration plugin. The MigrationPluginManager sets this key to \Drupal\migrate\Plugin\Migration by default. Whatever class is specified here should implement MigrationInterface. This configuration key rarely needs to be set as the default value can be used most of the time. In Drupal core there are a few cases where a different class is used as the migration plugin.

deriver key

A string value. If set, it should point to the class used as a plugin deriver for this migration. This is an advanced topic that will be covered in a future entry. In short, it is a mechanism in which new migration plugins can be created dynamically from a base template. For example, the d7_node.yml migration uses the D7NodeDeriver to create one node migration per content type during a Drupal upgrade operation. In this case, the configuration key is set to Drupal\node\Plugin\migrate\D7NodeDeriver. There are many other derivers used by the Migrate API, including D7TaxonomyTermDeriver, EntityReferenceTranslationDeriver, D6NodeDeriver, and D6TermNodeDeriver.

field_plugin_method key

A string value. This key must be set only in migrations that use Drupal\migrate_drupal\Plugin\migrate\FieldMigration as the plugin class. They take care of importing fields from previous versions of Drupal. The following is a list of possible values:

  • alterFieldMigration as set by d7_field.yml.
  • alterFieldFormatterMigration as set by d7_field_formatter_settings.yml.
  • alterFieldInstanceMigration as set by d7_field_instance.yml.
  • alterFieldWidgetMigration as set by d7_field_instance_widget_settings.yml.

There are Drupal 6 counterparts for these migrations. Note that the field_plugin_method key is a replacement for the deprecated cck_plugin_method key.

provider key

An array value. If set, it should contain a list of module machine names that must be enabled for this migration to work. Refer to the d7_entity_reference_translation.yml and d6_entity_reference_translation.yml migrations for examples of possible values. This key rarely needs to be set. Usually the same module providing the migration definition file is the only one needed for the migration to work.

Deriver specific configuration keys

It is possible that some derivers require extra configuration keys to be set. For example, the EntityReferenceTranslationDeriver requires target_types to be set. Refer to the d7_entity_reference_translation.yml and d6_entity_reference_translation.yml migrations for examples of possible values. These migrations are also interesting because the source, process, and destination keys are not configured in the YAML definition files; they are set dynamically by the deriver.

Migration configuration entity keys

The following keys should be used only if the migration is created as a configuration entity using the Migrate Plus module. Only the migration_group key is specific to migrations as configuration entities. All other keys apply for any configuration entity in Drupal. Refer to the ConfigEntityBase abstract class for more details on how they are used.

migration_group key

A string value. If set, it should correspond to the id key of a migration group configuration entity. This allows inheriting configuration values from the group. For example, the database connection for the source configuration. Refer to this article for more information on sharing configuration using migration groups. They can be used to import or rollback all migrations within a group using the following Drush commands provided by the Migrate Tools module:

$ drush migrate:import --group='udm_config_group_json_source'
$ drush migrate:rollback --group='udm_config_group_json_source'

uuid key

A string value. The value should be a UUID v4. If not set, the configuration management system will create a UUID on the fly and assign it to the migration entity. Refer to this article for more details on setting UUIDs for migrations defined as configuration entities.

langcode key

A string value. The language code of the entity's default language. English is assumed by default. For example: en.

status key

A boolean value. The enabled/disabled status of the configuration entity. For example: true.

dependencies key

A nested array value. Configuration entities can declare dependencies on modules, themes, content entities, and other configuration entities. These dependencies can be recalculated on save operations or enforced. Refer to the ConfigDependencyManager class’ documentation for details on how to configure this key. One practical use of this key is to automatically remove the migration (configuration entity) when the module that defined it is uninstalled. To accomplish this, you need to set an enforced module dependency on the same module that provides the migration. This is explained in the article on defining Drupal migrations as configuration entities. For reference, below is a code snippet from that article showing how to configure this key:

uuid: b744190e-3a48-45c7-97a4-093099ba0547
id: udm_config_json_source_node_local
label: 'UD migrations configuration example'
dependencies:
  enforced:
    module:
      - ud_migrations_config_json_source

Migration group configuration entity keys

Migration groups are also configuration entities. That means they can have uuid, langcode, status, and dependencies keys as explained before. Additionally, the following keys can be set for migration groups:

id key

A required string value. It serves as the identifier for the migration group. The value should be a machine name.

label key

A string value. The human-readable label for the migration group.

description key

A string value. More information about the group.

source_type key

A string value. Short description of the type of source. For example: "Drupal 7" or "JSON source".

module key

A string value. The machine name of a dependent module. This key rarely needs to be set. A configuration entity is always dependent on its provider, the module defining the migration group.

shared_configuration key

A nested array value. Any configuration key for a migration can be set under this key. Those values will be inherited by any migration associated with the current group. Refer to this article for more information on sharing configuration using migration groups. The following is an example from the ud_migrations_config_group_json_source module from the article on executing migrations from the Drupal interface.

uuid: 78925705-a799-4749-99c9-a1725fb54def
id: udm_config_group_json_source
label: 'UD Config Group (JSON source)'
description: 'A container for migrations about individuals and their favorite books. Learn more at https://understanddrupal.com/migrations.'
source_type: 'JSON resource'
shared_configuration:
  dependencies:
    enforced:
      module:
        - ud_migrations_config_group_json_source
  migration_tags:
    - UD Config Group (JSON Source)
    - UD Example
  source:
    plugin: url
    data_fetcher_plugin: file
    data_parser_plugin: json
    urls:
      - modules/custom/ud_migrations/ud_migrations_config_group_json_source/sources/udm_data.json

What did you learn in today’s article? Did you know there were so many configuration options for migration definition files? Were you aware that some keys apply only when migrations are defined as configuration entities? Have you used migrations groups to share configuration across migrations? Share your answers in the comments. Also, I would be grateful if you shared this blog post with friends and colleagues.

Read more and discuss at agaric.coop.

Categories: FLOSS Project Planets

Russell Coker: Desklab Portable USB-C Monitor

Planet Debian - Thu, 2020-07-02 06:42

I just got a 15.6″ 4K resolution Desklab portable touchscreen monitor [1]. It takes power via USB-C and video input via USB-C or mini HDMI, has touch screen input, and has speakers built in for USB or HDMI sound.

PC Use

I bought a mini-DisplayPort to HDMI adapter and for my first test ran it from my laptop; it was seen as a 1920*1080 DisplayPort monitor. The adaptor is specified as supporting 4K, so I don’t know why I didn’t get 4K to work; my laptop has done 4K with other monitors.

The next thing I plan to get is a VGA to HDMI converter so I can use this on servers, it can be a real pain getting a monitor and power cable to a rack mounted server and this portable monitor can be powered by one of the USB ports in the server. A quick search indicates that such devices start at about $12US.

The Desklab monitor has no markings to indicate what resolution it supports, no part number, and no serial number. The only documentation I could find about how to recognise the difference between the FullHD and 4K versions is that the FullHD version supposedly draws 2A and the 4K version draws 4A. I connected my USB Ammeter and it reported that between 0.6 and 1.0A were drawn. If they meant to say 2W and 4W instead of 2A and 4A (I’ve seen worse errors in manuals) then the current drawn would indicate the 4K version. Otherwise the stated current requirements don’t come close to matching what I’ve measured.
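As a rough sanity check on that guess: assuming a nominal 5V USB supply, 4W would correspond to about 0.8A and 2W to about 0.4A, so the measured 0.6 to 1.0A range lines up with the 4W figure rather than the 2W one. The actual bus voltage negotiated over USB-C may be higher, so treat this as a back-of-the-envelope estimate only.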

Power

The promise of USB-C was power from anywhere to anywhere. I think that such power can theoretically be done with USB 3 and maybe USB 2, but asymmetric cables make it more challenging.

I can power my Desklab monitor from a USB battery, from my Thinkpad’s USB port (even when the Thinkpad isn’t on mains power), and from my phone (although the phone battery runs down fast as expected). When I have a mains powered USB charger (for a laptop and rated at 60W) connected to one USB-C port and my phone on the other, the phone can be charged while giving a video signal to the display. This is how it’s supposed to work, but in my experience it’s rare to have new technology live up to its potential at the start!

One thing to note is that it doesn’t have a battery. I had imagined that it would have a battery (in spite of there being nothing on their web site to imply this) because I just couldn’t think of a touch screen device not having a battery. It would be nice if there was a version of this device with a big battery built in that could avoid needing separate cables for power and signal.

Phone Use

The first thing to note is that the Desklab monitor won’t work with all phones: whether a phone will offer the option of an external display depends on its configuration, and some phones may support an external display but not touchscreen. The Huawei Mate devices are specifically listed in the printed documentation as being supported for touchscreen as well as display. Surprisingly the Desklab web site has no mention of this unless you download the PDF of the manual; they really should have a list of confirmed supported devices and a forum for users to report on how it works.

My phone is a Huawei Mate 10 Pro so I guess I got lucky here. My phone has a “desktop mode” that can be enabled when I connect it to a USB-C device (not sure what criteria it uses to determine if the device is suitable). The desktop mode has something like a regular desktop layout and you can move windows around etc. There is also the option of having a copy of the phone’s screen, but it displays the image of the phone screen vertically in the middle of the landscape layout monitor which is ridiculous.

When desktop mode is enabled it’s independent of the phone interface so I had to find the icons for the programs I wanted to run in an unsorted list with no search usable (the search interface of the app list brings up the keyboard which obscures the list of matching apps). The keyboard takes up more than half the screen and there doesn’t seem to be a way to make it smaller. I’d like to try a portrait layout which would make the keyboard take something like 25% of the screen but that’s not supported.

It’s quite easy to type on a keyboard that’s slightly larger than a regular PC keyboard (a 15″ display with no numeric keypad or cursor control keys). The hackers keyboard app might work well with this as it has cursor control keys. The GUI has an option for full screen mode for an app which is really annoying to get out of (you have to use a drop down from the top of the screen), full screen doesn’t make sense for a display this large. Overall the GUI is a bit clunky, imagine Windows 3.1 with a start button and task bar. One interesting thing to note is that the desktop and phone GUIs can be run separately, so you can type on the Desklab (or any similar device) and look things up on the phone. Multiple monitors never really interested me for desktop PCs because switching between windows is fast and easy and it’s easy to resize windows to fit several on the desktop. Resizing windows on the Huawei GUI doesn’t seem easy (although I might be missing some things) and the keyboard takes up enough of the screen that having multiple windows open while typing isn’t viable.

I wrote the first draft of this post on my phone using the Desklab display. It’s not nearly as easy as writing on a laptop but much easier than writing on the phone screen.

Currently Desklab is offering 2 models for sale, 4K resolution for $399US and FullHD for $299US. I got the 4K version which is very expensive at the moment when converted to Australian dollars. There are significantly cheaper USB-C monitors available (such as this ASUS one from Kogan for $369AU), but I don’t think they have touch screens and therefore can’t be used with a phone unless you enable the phone screen as touch pad mode and have a mouse cursor on screen. I don’t know if all Android devices support that, it could be that a large part of the desktop experience I get is specific to Huawei devices.

One annoying feature is that if I use the phone power button to turn the screen off, it shuts down the connection to the Desklab display, but the phone screen will turn off if I leave it alone for the screen timeout (which I have set to 10 minutes).

Caveats

When I ordered this I wanted the biggest screen possible. But now that I have it the fact that it doesn’t fit in the pocket of my Scott e Vest jacket [2] will limit what I can do with it. Maybe I’ll be buying a 13″ monitor in the near future, I expect that Desklab will do well and start selling them in a wide range of sizes. A 15.6″ portable device is inconvenient even if it is in the laptop format, a thin portable screen is inconvenient in many ways.

Netflix doesn’t display video on the Desklab screen, I suspect that Netflix is doing this deliberately as some misguided attempt at stopping piracy. It is really good for watching video as it has the speakers in good locations for stereo sound, it’s a pity that Netflix is difficult.

The functionality on phones from companies other than Huawei is unknown. It is likely to work on most Android phones, but if a particular phone is important to you then you want to Google for how it worked for others.

Categories: FLOSS Project Planets

Electric Citizen: Introducing DrupalCon Global

Planet Drupal - Thu, 2020-07-02 05:07

DrupalCon North America is an annual tradition, where thousands of people come together in a great American city for a week-long conference of learning, networking and socializing.

Things are different this year, for obvious reasons. But DrupalCon lives on in DrupalCon Global! This is the first-ever virtual edition of DrupalCon. Running from July 14-17, 2020, this online-only conference is open to anyone and everyone, worldwide. If you haven’t done so, consider registering today!

Categories: FLOSS Project Planets

EuroPython: EuroPython 2020: Our keynotes

Planet Python - Thu, 2020-07-02 04:33

We’re happy to announce our keynote lineup for EuroPython 2020.

Guido van Rossum - Q&A

In this session, you’ll get a chance to get your questions answered by Guido van Rossum, our retired BDFL.

In order to submit a question, please use the following Google form: Guido van Rossum Q&A: Question Submission.

Siddha Ganju - 30 Golden Rules of Deep Learning Performance

“Watching paint dry is faster than training my deep learning model.”
“If only I had ten more GPUs, I could train my model in time.”
“I want to run my model on a cheap smartphone, but it’s probably too heavy and slow.”

If this sounds like you, then you might like this talk.

Exploring the landscape of training and inference, we cover a myriad of tricks that step-by-step improve the efficiency of most deep learning pipelines, reduce wasted hardware cycles, and make them cost-effective. We identify and fix inefficiencies across different parts of the pipeline, including data preparation, reading and augmentation, training, and inference.

With a data-driven approach and easy-to-replicate TensorFlow examples, finely tune the knobs of your deep learning pipeline to get the best out of your hardware. And with the money you save, demand a raise!

Naomi Ceder - Staying for the Community: Building Community in the face of Covid-19

Python communities around the world, large and small are facing loss - from the loss of in person meetups and conferences to the loss of employment and even the potential loss of health and life. As communities we are all confronting uncertainty and unanswered questions. In this talk I would like to reflect on some of those questions. What are communities doing now to preserve a sense of community in the face of this crisis? What might we do and what options will we have for coming events? How can we build and foster community and still keep everyone safe? What challenges might we all face in the future? What sources of support can we find? What are our sources of optimism and hope?

Alejandro Saucedo - Meditations on First Deployment: A Practical Guide to Responsible Development

As the impact of software increasingly reaches farther and wider, our professional responsibility as developers becomes more critical to society. The production systems we design, build and maintain often bring inherent adversities with complex technical, societal and even ethical challenges. The skillsets required to tackle these challenges require us to go beyond the algorithms, and require cross-functional collaboration that often goes beyond a single developer. In this talk we introduce intuitive and practical insights from a few of the core ethics themes in software including Privacy, Equity, Trust and Transparency. We cover their importance, the growing societal challenges, and how organisations such as The Institute for Ethical AI, The Linux Foundation, the Association for Computer Machinery, NumFocus, the IEEE and the Python Software Foundation are contributing to these critical themes through standards, policy advise and open source software initiatives. We finally will wrap up the talk with practical steps that any individual can take to get involved and contribute to some of these great open initiatives, and contribute to these critical ongoing discussions.


EuroPython 2020 is waiting for you

We’ve compiled a full program for the event.

Conference tickets are available on our registration page. We hope to see lots of you at the conference from July 23-26. Rest assured that we’ll make this a great event again — even within the limitations of running the conference online.

Enjoy,

EuroPython 2020 Team
https://ep2020.europython.eu/
https://www.europython-society.org/

Categories: FLOSS Project Planets

gnucobol @ Savannah: GnuCOBOL 3.1rc-1 on alpha.gnu.org

GNU Planet! - Thu, 2020-07-02 03:13

While this version is a release candidate (with an expected full release within 3 months), it is the most stable and complete free COBOL compiler ever available.

Source kits can be found at https://alpha.gnu.org/gnu/gnucobol, the first pre-built binaries are already available and the OS package managers are invited to update their packages.

Compared to the last stable release, 2.2, we have such a huge change list that it is too much to note here for the RC (it will be included with the final release, and even there the NEWS entry, which you can check now at https://sourceforge.net/p/open-cobol/code/HEAD/tree/tags/gnucobol-3.1-rc1/NEWS, is compacted).

RC1 received extensive testing over the last months and is backward compatible with GnuCOBOL 2.2 - you are invited to upgrade GnuCOBOL now, or start your own tests with the release candidate so you are ready to update when the final release arrives.

Categories: FLOSS Project Planets

Evgeni Golov: Automatically renaming the default git branch to "devel"

Planet Debian - Thu, 2020-07-02 03:12

It seems GitHub is planning to rename the default branch for newly created repositories from "master" to "main". It's incredible how much positive PR you can get with a one-line configuration change, while still working together with the ICE.

However, this post is not about bashing GitHub.

Changing the default branch for newly created repositories is good. And you also should do that for the ones you create with git init locally. But what about all the repositories out there? GitHub surely won't force-rename those branches, but we can!

Ian will do this as he touches the individual repositories, but I tend to forget things unless I do them immediately…

Oh, so this is another "automate everything with an API" post? Yes, yes it is!

And yes, I am going to use GitHub here, but something similar should be implementable on any git hosting platform that has an API.

Of course, if you have SSH access to the repositories, you can also just edit HEAD in a for loop in bash, but that would be boring ;-)

I'm going with devel btw, as I'm already used to develop in the Foreman project and devel in Ansible.

acquire credentials

My GitHub account is 2FA enabled, so I can't just use my username and password in a basic HTTP API client. So the first step is to acquire a personal access token that can be used instead. Of course I could also have implemented OAuth2 in my lousy script, but ain't nobody have time for that.

The token will require the "repo" permission to be able to change repositories.

And we'll need some boilerplate code (I'm using Python3 and requests, but anything else will work too):

#!/usr/bin/env python3

import requests

BASE='https://api.github.com'
USER='evgeni'
TOKEN='abcdef'

headers = {'User-Agent': '@{}'.format(USER)}
auth = (USER, TOKEN)

session = requests.Session()
session.auth = auth
session.headers.update(headers)
session.verify = True

This will store our username, token, and create a requests.Session so that we don't have to pass the same data all the time.

get a list of repositories to change

I want to change all my own repos that are not archived, not forks, and actually have the default branch set to master, YMMV.

As we're authenticated, we can just list the repositories of the currently authenticated user, and limit them to "owner" only.

GitHub uses pagination for their API, so we'll have to loop until we get to the end of the repository list.

repos_to_change = []

url = '{}/user/repos?type=owner'.format(BASE)
while url:
    r = session.get(url)
    if r.ok:
        repos = r.json()
        for repo in repos:
            if not repo['archived'] and not repo['fork'] and repo['default_branch'] == 'master':
                repos_to_change.append(repo['name'])

        if 'next' in r.links:
            url = r.links['next']['url']
        else:
            url = None
    else:
        url = None

create a new devel branch and mark it as default

Now that we know which repos to change, we need to fetch the SHA of the current master, create a new devel branch pointing at the same commit and then set that new branch as the default branch.

for repo in repos_to_change:
    master_data = session.get('{}/repos/evgeni/{}/git/ref/heads/master'.format(BASE, repo)).json()

    data = {'ref': 'refs/heads/devel', 'sha': master_data['object']['sha']}
    session.post('{}/repos/{}/{}/git/refs'.format(BASE, USER, repo), json=data)

    default_branch_data = {'default_branch': 'devel'}
    session.patch('{}/repos/{}/{}'.format(BASE, USER, repo), json=default_branch_data)

    session.delete('{}/repos/{}/{}/git/refs/heads/{}'.format(BASE, USER, repo, 'master'))

I've also opted in to actually delete the old master, as I think that's the safest way to let the users know that it's gone. Letting it rot in the repository would mean people can still pull and won't notice that there are no changes anymore as the default branch moved to devel.

So…

announcement

I've updated all my (those in the evgeni namespace) non-archived repositories to have devel instead of master as the default branch.

Have fun updating!

code

#!/usr/bin/env python3

import requests

BASE='https://api.github.com'
USER='evgeni'
TOKEN='abcd'

headers = {'User-Agent': '@{}'.format(USER)}
auth = (USER, TOKEN)

session = requests.Session()
session.auth = auth
session.headers.update(headers)
session.verify = True

repos_to_change = []

url = '{}/user/repos?type=owner'.format(BASE)
while url:
    r = session.get(url)
    if r.ok:
        repos = r.json()
        for repo in repos:
            if not repo['archived'] and not repo['fork'] and repo['default_branch'] == 'master':
                repos_to_change.append(repo['name'])

        if 'next' in r.links:
            url = r.links['next']['url']
        else:
            url = None
    else:
        url = None

for repo in repos_to_change:
    master_data = session.get('{}/repos/evgeni/{}/git/ref/heads/master'.format(BASE, repo)).json()

    data = {'ref': 'refs/heads/devel', 'sha': master_data['object']['sha']}
    session.post('{}/repos/{}/{}/git/refs'.format(BASE, USER, repo), json=data)

    default_branch_data = {'default_branch': 'devel'}
    session.patch('{}/repos/{}/{}'.format(BASE, USER, repo), json=default_branch_data)

    session.delete('{}/repos/{}/{}/git/refs/heads/{}'.format(BASE, USER, repo, 'master'))
Categories: FLOSS Project Planets

Kristof De Jaeger: Indigenous for iOS, IndieWeb and ActivityPub for Drupal

Planet Drupal - Thu, 2020-07-02 03:08

I'm glad to announce that I've been awarded a grant as part of the European Next Generation Internet initiative (NGI) by the Dutch NLnet Foundation to work on my (currently) favorite projects: Indigenous and IndieWeb [1]. I didn't count on being selected when I submitted my proposal when looking at the other entries, but I guess I made a good case. I'll be spending a lot of time the following months working on them, so you can expect some exciting releases. The status of all projects and work done within this grant will be tracked here.

Indigenous for iOS

The app was originally started by Edward Hinkle and was the main trigger for me to build the Android equivalent. The project is currently unmaintained and lacks many features which are available in the Android version. Thanks to the grant, I can now revive the project so iOS users will be able to enjoy IndieWeb with a more richer and mature application.

Edward was so kind to transfer the existing repository over to me so all issues are preserved. I'll be creating projects and milestones so everyone can track progress. At some point, I will start rolling out releases in a beta program, so watch this space or announcements on Twitter to know when you can sign up for testing.

Multiple user support for the Drupal IndieWeb module

One of the last major missing pieces for the module is support for multiple users. All features currently work great for one account and the Micropub server supports multiple authors posting to the same domain. However, it's far from perfect, and especially the built-in Microsub server is not compatible at all for more than one user.

Work started in a separate branch a couple of months ago, but progress is slow as dragons are everywhere and I only work on this when I have some free time. With this grant, I'll be able to focus 2 weeks in a row to rewrite the critical pieces, not to mention all the tests.

I haven't decided yet whether I'm going to write an upgrade path, but I will keep on supporting both branches as I'm using the module on my site which only has one user, so no need to worry in case you are using the module already.

Kickstarting ActivityPub module for Drupal

It's been on my mind for so long, but I will finally be able to work extensively on the Drupal ActivityPub module. My work will happen on Drupal.org instead of the existing repository on GitHub, which will be used for a more extended version somewhere in the future. The 1.0.x branch on d.o will contain the lite version.

Open Web

Besides these 3 major goals, I'll focus as well on the interoperability of both app clients (Android and iOS) with more software, e.g. Mastodon and Pixelfed. I'm brainstorming to figure out the best approach to contribute and how to integrate them with both clients, more details will be released in future blog posts and notes.

All those projects have a place in my personal vision on the Open Web, so I feel incredibly lucky to be able to work on them almost full time, hoping to convince more people to jump onboard ultimately. It would be great if we could get something into Drupal Core one day, or at least make some more noise around it. If you have questions, feedback or just want to have a chat, I'm (still, yes I know) on IRC on irc.freenode.net (indieweb or drupal channels). Ping swentel and I'll be all ears.

Footnotes

1. to be fair, Solfidola might come close to become my new favorite, but it's not related with IndieWeb at all :)

Categories: FLOSS Project Planets

DrupalCon News: Plenary Speakers Announced For Drupalcon Global Open Source Digital Experience Conference

Planet Drupal - Thu, 2020-07-02 00:00

Mitchell Baker and Dries Buytaert join a diverse lineup of speakers at annual conference 
encouraging attendees to “Be Human, Think Digital”
https://drupal.org

Categories: FLOSS Project Planets

Full Stack Python: How to Report Errors in Flask Web Apps with Sentry

Planet Python - Thu, 2020-07-02 00:00

Flask web applications are highly customizable by developers thanks to the framework's extension-based architecture, but that flexibility can sometimes lead to more errors when you run the application due to rough edges between the libraries.

Reporting errors is crucial to running a well-functioning Flask web application, so this tutorial will guide you through adding a free, basic Sentry configuration to a fresh Flask project.

Tutorial Requirements

Ensure you have Python 3 installed, because Python 2 reached its end-of-life at the beginning of 2020 and is no longer supported. Preferably, you should have Python 3.7 or greater installed in your development environment. This tutorial will also use the Flask web framework and the Sentry SDK for Python, which we will install in a moment.

All code in this blog post is available open source under the MIT license on GitHub under the report-errors-flask-web-apps-sentry directory of the blog-code-examples repository. Use the source code as you desire for your own projects.

Development environment set up

Change into the directory where you keep your Python virtual environments. Create a new virtualenv for this project and activate it using the following commands:

python -m venv sentryflask
source sentryflask/bin/activate

Install the Flask and Sentry-SDK code libraries into the new virtual environment with the following command (quoting the version specifiers so the shell does not interpret the > character):

pip install "flask>=1.1.2" "sentry-sdk[flask]==0.15.1"

Note that we installed the Flask integration as part of the Sentry SDK, which is why the dependency is sentry-sdk[flask] rather than just sentry-sdk.

Now that we have all of our dependencies installed we can code up a little application to show how the error reporting works.

Creating the application

We have everything we need to start building our application. Create a new directory for your project. I've called mine report-errors-flask-web-apps-sentry in the examples repository but you can use a shorter name if you prefer. Open a new file named app.py and write the following code in it.

# app.py
from flask import Flask, escape, request

app = Flask(__name__)


@app.route('/divide/<int:numerator>/by/<int:denominator>/')
def hello(numerator, denominator):
    answer = numerator / denominator
    return f'{numerator} can be divided by {denominator} {answer} times.'

The above code is a short Flask application that allows input via the URL for two integer values: a numerator and a denominator.

Save the file and run it using the flask run command:

env FLASK_APP=app.py flask run

If you see the following output on the command line that means the development server is working properly:

 * Serving Flask app "app.py"
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)

Test it by going to http://localhost:5000/divide/50/by/10/ and you will see the result of the division in your web browser: 50 can be divided by 10 5.0 times.
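
If you prefer to exercise the route without a browser, Flask's built-in test client can call it directly. Here is a quick optional sketch, assuming the file above is saved as app.py in the current directory:

# test_divide.py -- optional check using Flask's test client.
# It imports the app object defined in app.py above.
from app import app

with app.test_client() as client:
    response = client.get('/divide/50/by/10/')
    print(response.data)  # b'50 can be divided by 10 5.0 times.'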

With our base application working, we can now add error reporting for the situations that do not work as expected.

Adding Sentry to our app

It's time to add Sentry with the Flask integration into the mix, so that we can easily see when the route errors out due to bad input.

Sentry can either be self-hosted or used as a cloud service through Sentry.io. In this tutorial we will use the cloud hosted version because it's faster than setting up your own server as well as free for smaller projects.

Go to Sentry.io's homepage.

Sign into your account or sign up for a new free account. You will be at the main account dashboard after logging in or completing the Sentry sign up process.

There are no errors logged on our account dashboard yet, which is as expected because we have not yet connected our account to our Python application.

You'll want to create a new Sentry Project just for this application so click "Projects" in the left sidebar to go to the Projects page.

On the Projects page, click the "Create Project" button in the top right corner of the page.

You can either choose "Flask" or select "Python". I usually just choose "Python" if I do not yet know what framework I'll be using to build my application. Next, give your new Project a name and then press the "Create Project" button. Our new project is ready to integrate with our Python code.

We need the unique identifier for our account and project to authorize our Python code to send errors to this Sentry instance. The easiest way to get what we need is to go to the Python+Flask documentation page and read how to configure the SDK.

Copy the string parameter for the init method and set it as an environment variable rather than having it exposed in your project's code.

export SENTRY_DSN='https://yourkeygoeshere.ingest.sentry.io/project-number'

Make sure to replace "yourkeygoeshere" with your own unique identifier and "project-number" with the ID that matches the project you just created.

Check that the SENTRY_DSN is set properly in your shell using the echo command:

echo $SENTRY_DSN
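
If you also want the application itself to refuse to start when the variable is missing, you could add a small guard near the top of app.py. This is an optional extra, not part of the tutorial's code; it only makes the failure explicit instead of silent:

# Optional guard: sentry_sdk.init() does not complain about a missing DSN,
# it just sends nothing, so fail loudly if SENTRY_DSN is unset.
import os

if not os.getenv('SENTRY_DSN'):
    raise RuntimeError('SENTRY_DSN is not set; export it before starting the app.')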

Update app.py with the following highlighted lines of code.

# app.py
~~import os
~~import sentry_sdk
from flask import Flask, escape, request
~~from sentry_sdk.integrations.flask import FlaskIntegration

~~sentry_sdk.init(
~~    dsn=os.getenv('SENTRY_DSN'),
~~    integrations=[FlaskIntegration()]
~~)

app = Flask(__name__)


@app.route('/divide/<int:numerator>/by/<int:denominator>/')
def hello(numerator, denominator):
    answer = numerator / denominator
    return f'{numerator} can be divided by {denominator} {answer} times.'

The above new lines of code initialize the Sentry client and allow it to properly send any errors that occur over to the right Sentry service.
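
If you want to confirm the wiring before triggering a real error, the SDK can send a hand-crafted event. This is a quick optional sketch; the message text is just an example, and the line can be removed once you have seen the event arrive in the dashboard:

# Optional: send a test event right after sentry_sdk.init() to verify the DSN works.
sentry_sdk.capture_message('Sentry is wired up for this Flask app')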

Testing the Sentry Integration

The Sentry dashboard shows that the service is still waiting for events.

Let's make an error happen to see if we've properly connected the Flask integration with our application.

Try to divide by zero, by going to http://localhost:5000/divide/50/by/0/ in your web browser. You should get an "Internal Server Error".

Back over in the Sentry dashboard, the error appears in the list.

We can drill into the error by clicking on it and get a ton more information, not just about our application but also about the client that visited the site. This is handy if you have an issue in a specific browser or other type of client when building an API.

With that in place, you can now build out the rest of your Flask application knowing that all of the exceptions will be tracked in Sentry.
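
As you build it out, you can also attach your own context to events so the dashboard shows more than the stack trace. The sketch below is one hedged example of how the view above could be extended; configure_scope and set_tag are part of the Sentry SDK, while the tag names are only illustrative:

# A variant of the view above (it would replace hello(), not sit alongside it),
# tagging each event with the inputs so failures are easy to group in Sentry.
import sentry_sdk

@app.route('/divide/<int:numerator>/by/<int:denominator>/')
def hello(numerator, denominator):
    with sentry_sdk.configure_scope() as scope:
        scope.set_tag('numerator', str(numerator))
        scope.set_tag('denominator', str(denominator))
    answer = numerator / denominator
    return f'{numerator} can be divided by {denominator} {answer} times.'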

What's next?

We just finished building a Flask app to show how quickly the hosted version of Sentry can be added to applications so you do not lose track of your error messages.

Next, you can try one of these tutorials to add other useful features to your new application:

You can also determine what to code next in your Python project by reading the Full Stack Python table of contents page.

Questions? Contact me via Twitter @fullstackpython or @mattmakai. I am also on GitHub with the username mattmakai.

If you see an issue or error in this tutorial, please fork the source repository on GitHub and submit a pull request with the fix.

Categories: FLOSS Project Planets

Russell Coker: Isolating PHP Web Sites

Planet Debian - Wed, 2020-07-01 22:12

If you have multiple PHP web sites on a server in a default configuration, they will all be able to read each other’s files. If those sites store data or database passwords in configuration files, then there are significant problems if they aren’t all trusted. Even if the sites are all trusted (IE the same person configures them all), if there is a security problem in one site it’s ideal to prevent that being used to immediately attack all sites.

mpm_itk

The first thing I tried was mpm_itk [1]. This is a version of the traditional “prefork” module for Apache that has one process for each HTTP connection. When it’s installed you just put the directive “AssignUserID USER GROUP” in your VirtualHost section and that virtual host runs as the user:group in question. It will work with any Apache module that works with mpm_prefork. In my experiment with mpm_itk I first tried running with a different UID for each site, but that conflicted with the pagespeed module [2]. The pagespeed module optimises HTML and CSS files to improve performance and it has a directory tree where it stores cached versions of some of the files. It doesn’t like working with copies of itself under different UIDs writing to that tree. This isn’t a real problem; setting up the different PHP files with database passwords to be read by the desired group is easy enough. So I just ran each site with a different GID but used the same UID for all of them.

The first problem with mpm_itk is that the mpm_prefork code it’s based on is the slowest MPM available, and it is also incompatible with HTTP/2. A minor issue of mpm_itk is that it makes Apache take ages to stop or restart; I don’t know why and can’t be certain it’s not a configuration error on my part. As an aside, here is a site for testing your server’s support for HTTP/2 [3]. To enable HTTP/2 you have to be running mpm_event and enable the “http2” module. Then for every virtual host that is to support it (generally all https virtual hosts) put the line “Protocols h2 h2c http/1.1” in the virtual host configuration.

A good feature of mpm_itk is that it has everything for the site running under the same UID, all Apache modules and Apache itself. So there’s no issue of one thing getting access to a file and another not getting access.

After a trial I decided not to keep using mpm_itk because I want HTTP/2 support.

php-fpm Pools

The Apache PHP module depends on mpm_prefork so it also has the issues of not working with HTTP/2 and of causing the web server to be slow. The solution is php-fpm, a separate server for running PHP code that uses the fastcgi protocol to talk to Apache. Here’s a link to the upstream documentation for php-fpm [4]. In Debian this is in the php7.3-fpm package.

In Debian the directory /etc/php/7.3/fpm/pool.d has the configuration for “pools”. Below is an example of a configuration file for a pool:

# cat /etc/php/7.3/fpm/pool.d/example.com.conf
[example.com]
user = example.com
group = example.com
listen = /run/php/php7.3-example.com.sock
listen.owner = www-data
listen.group = www-data
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3

Here is the upstream documentation for fpm configuration [5].

Then for the Apache configuration for the site in question you could have something like the following:

ProxyPassMatch "^/(.*\.php(/.*)?)$" "unix:/run/php/php7.3-example.com.sock|fcgi://localhost/usr/share/wordpress/"

The “|fcgi://localhost” part is just part of the way of specifying a Unix domain socket. From the Apache Wiki it appears that the method for configuring TCP connections is more obvious [6]. I chose Unix domain sockets because they allow putting the domain name in the socket address. Matching web site domains to port numbers is likely to be error prone, while matching based on domain names is easier to check and also easier to put in Apache configuration macros.

There was some additional hassle with getting Apache to read the files created by PHP processes (the options include running PHP scripts with the www-data group, having SETGID directories for storing files, and having world-readable files). But this got things basically working.

Nginx

My Google searches for running multiple PHP sites under different UIDs didn’t turn up any good hits. It was only after I found the DigitalOcean page on doing this with Nginx [7] that I knew what to search for to find the way of doing it in Apache.

Related posts:

  1. Google and Certbot (Letsencrypt) Like most people I use Certbot AKA Letsencrypt to create...
  2. Passwords Used by Daemons There’s a lot of advice about how to create and...
  3. review of Australian car web sites It seems that Toyota isn’t alone in having non-functional web...
Categories: FLOSS Project Planets

Parabola GNU/Linux-libre: ath9k wifi devices may not work with linux-libre 5.7.6

GNU Planet! - Wed, 2020-07-01 21:49

if you have a USB wifi device which uses the ath9k or ath9k_htc kernel module, you should postpone upgrading to any of the 5.7.6 kernels; otherwise the device may not work when you next reboot - PCI devices do not seem to be affected by this bug

watch this bug report for further details

linux-libre and linux-libre-headers 5.7.6 have been pulled from the repos and replaced with 5.7.2; but other kernels remain at 5.7.6 - if you have already upgraded to one of the 5.7.6 kernels, and your wifi does not work, you will need to revert to the previous kernel:

# pacman -Syuu linux-libre

or boot a parabola LiveISO, mount your / partition, and install it with pacstrap:

# mount /dev/sdXN /mnt
# pacstrap /mnt linux-libre
Categories: FLOSS Project Planets

Matt Layman: The Home Stretch - Building SaaS #63

Planet Python - Wed, 2020-07-01 20:00
In this episode, we return to the homeschool application that I’m building. I’m in the final stretch of changes that need to happen to make the product minimally viable. We worked on a template, wrote some model methods, and did a bunch of automated testing. We started by adding students to the context of the students index page. With the students in the context, we updated the index page to display the list of students.
Categories: FLOSS Project Planets

Agaric Collective: Free Drupal 9 webinars on site building, migrations, and upgrades

Planet Drupal - Wed, 2020-07-01 15:00

On Tuesday, July 7, Agaric will host 3 free online webinars about Drupal 9. We invite the community to join us to learn more about the latest version of our favorite CMS. We will leave time at the end of each presentation for questions from the audience. All webinars will be presented online via Zoom. Fill out the form at the end of the post to reserve your seat. We look forward to seeing you.

Getting started Drupal 9

Time: 10:00 AM - 11:00 AM Eastern Time (EDT)

This webinar will cover basic site building concepts. You will learn what a node is and how nodes differ from content types. We will explain why fields are so useful for structuring your site's content and the benefits of doing this. We will cover how to use Views to create listings of content. Layout Builder, blocks, taxonomies, and the user permissions system will also be explained.

Introduction to Drupal 9 migrations

Time: 11:30 AM - 12:30 PM Eastern Time (EDT)

This webinar will present an overview of the Drupal migrations system. You will learn how the Migrate API works and what assumptions it makes. We will explain the syntax used to write migrations and how the different source, process, and destination plugins work. Recommended migration workflows and debugging tips will also be presented. No previous experience with the Migrate API nor PHP is required to attend.

Drupal 9 upgrades: how and when to move your Drupal 7 sites?

Time: 1:00 PM - 2:00 PM Eastern Time (EDT)

This webinar will present different tools and workflows for upgrading your Drupal 7 site to Drupal 9. We will run through the things to consider when planning an upgrade, including how to handle site architecture changes, modules that do not have Drupal 9 counterparts, and what to do when there is no automated upgrade path.

Agaric is also offering full-day trainings on these topics later this month. Dates, prices, and registration options are available at https://agaric.coop/training

Read more and discuss at agaric.coop.

Categories: FLOSS Project Planets

Python Software Foundation: Announcing the PSF Project Funding Working Group

Planet Python - Wed, 2020-07-01 14:21

For the past 3 years, the PSF has been working on grant funded projects to improve our internal systems and platforms. This work has been done with the Packaging Working Group, and focused on our packaging ecosystem of PyPI and pip. We have been able to show that applying directed funding to open source projects has the ability to dramatically increase the speed of development, and move our community forward in a much more sustained way than relying solely on volunteer effort.
Along with the external grant funding of PSF projects, we have also committed PSF funds in the past to improve the development of community projects. This shows that the experience of directed funding is applicable to our community projects, as well as our own. An example here is the BeeWare project that was given funding via our Education Grants last year:

https://twitter.com/PyBeeWare/status/1273227908136316931
Another wonderful example has been a number of scientific Python projects that have raised large amounts of grant funding, mostly through NumFocus. They have been a large inspiration for our focus on grant funding as an important source of revenue for open source projects. The scientific open source community has been immeasurably improved by this funding, and we hope to expand this opportunity to the entire Python community.

Helping the community get funding
The PSF has created the Project Funding Working Group to help our community seek similar funding for their own projects. We hope to expand the amount of money going into the Python community as a whole, by providing resources and advice to projects who are interested in seeking funding from external sources.
Our charter starts with our intended purpose:
This Working Group researches, and advises Python community volunteers on applying for external grants and similar funding to advance the mission of the PSF, which includes, but is not limited to, things such as advancing the Python core, Python-related infrastructure, key Python projects, and Python education and awareness.

You can read the entire charter for more information about the vision for the group that we intend to build over the medium and long term.

Resources

In the short term, the first resource that we have put together is a list of potential funders that are applicable to our community. It's on GitHub, and we welcome contributions to the list if you know of additional sources of funding.

The other initial resource we are able to provide is advice, so if you have any questions about funding, you can email us at project-funding-wg@python.org, and we will do our best to help. We can advise you on picking tasks to propose, making a budget, writing a proposal, and more.

We are excited about the possibilities for the Python community when we see more funding being applied to our mission. There is a lot of amazing open source software out there being built by volunteers, and we hope that giving them additional resources will create even more impact for our mission of advancing the Python community.

-- Eric Holscher, co-chair, Project Funding Working Group
Categories: FLOSS Project Planets

GNU Guix: Securing updates

GNU Planet! - Wed, 2020-07-01 13:40

Software deployment tools like Guix are in a key position when it comes to securing the “software supply chain”—taking source code fresh from repositories and providing users with ready-to-use binaries. We have been paying attention to several aspects of this problem in Guix: authentication of pre-built binaries, reproducible builds, bootstrapping, and security updates.

A couple of weeks ago, we addressed the elephant in the room: authentication of Guix code itself by guix pull, the tool that updates Guix and its package collection. This article looks at what we set out to address, how we achieved it, and how it compares to existing work in this area.

What updates should be protected against

The problem of securing distro updates is often viewed through the lens of binary distributions such as Debian, where the main asset to be protected are binaries themselves. The functional deployment model that Guix and Nix implement is very different: conceptually, Guix is a source distribution, like Gentoo if you will.

Pre-built binaries are of course available and very useful, but they’re optional; we call them substitutes because they’re just that: substitutes for local builds. When you do choose to accept substitutes, they must be signed by one of the keys you authorized (this has been the case since version 0.6 in 2014).

Guix consists of source code for the tools as well as all the package definitions—the distro. When users run guix pull, what happens behind the scene is equivalent to git clone or git pull. There are many ways this can go wrong. An attacker can trick the user into pulling code from an alternate repository that contains malicious code or definitions for backdoored packages. This is made more difficult by the fact that code is fetched over HTTPS from Savannah by default. If Savannah is compromised (as happened in 2010), an attacker can push code to the Guix repository, which everyone would pull. The change might even go unnoticed and remain in the repository forever. An attacker with access to Savannah can also reset the main branch to an earlier revision, leading users to install outdated software with known vulnerabilities—a downgrade attack. These are the kind of attacks we want to protect against.

Authenticating Git checkouts

If we take a step back, the problem we’re trying to solve is not specific to Guix and to software deployment tools: it’s about authenticating Git checkouts. By that, we mean that when guix pull obtains code from Git, it should be able to tell that all the commits it fetched were pushed by authorized developers of the project. We’re really looking at individual commits, not tags, because users can choose to pull arbitrary points in the commit history of Guix and third-party channels.

Checkout authentication requires cryptographically signed commits. By signing a commit, a Guix developer asserts that they are the one who made the commit; they may be its author, or they may be the person who applied somebody else’s changes after review. It also requires a notion of authorization: we don’t simply want commits to have a valid signature, we want them to be signed by an authorized key. The set of authorized keys changes over time as people join and leave the project.

To implement that, we came up with the following mechanism and rule:

  1. The repository contains a .guix-authorizations file that lists the OpenPGP key fingerprints of authorized committers.
  2. A commit is considered authentic if and only if it is signed by one of the keys listed in the .guix-authorizations file of each of its parents. This is the authorization invariant.

(Remember that Git commits form a directed acyclic graph (DAG) where each commit can have zero or more parents; merge commits have two parent commits, for instance. Do not miss Git for Computer Scientists for a pedagogical overview!)

Let’s take an example to illustrate. In the figure below, each box is a commit, and each arrow is a parent relationship:

This figure shows two lines of development: the orange line may be the main development branch, while the purple line may correspond to a feature branch that was eventually merged in commit F. F is a merge commit, so it has two parents: D and E.

Labels next to boxes show who’s in .guix-authorizations: for commit A, only Alice is an authorized committer, and for all the other commits, both Bob and Alice are authorized committers. For each commit, we see that the authorization invariant holds; for example:

  • commit B was made by Alice, who was the only authorized committer in its parent, commit A;
  • commit C was made by Bob, who was among the authorized committers as of commit B;
  • commit F was made by Alice, who was among the authorized committers of both parents, commits D and E.

The authorization invariant has the nice property that it’s simple to state, and it’s simple to check and enforce. This is what guix pull implements. If your current Guix, as returned by guix describe, is at commit A and you want to pull to commit F, guix pull traverses all these commits and checks the authorization invariant.
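
To make the rule concrete, here is a minimal Python sketch of that traversal. The helpers parents(), signer_fingerprint(), and authorized_keys() are hypothetical stand-ins for the Git and OpenPGP plumbing; the real implementation lives in Guix's Guile code:

# Hypothetical helpers: parents(c) -> parent commits, signer_fingerprint(c) -> key
# that signed c, authorized_keys(c) -> fingerprints in c's .guix-authorizations.

def authentic(commit, introduction, cache):
    """Check the authorization invariant for `commit` and everything it builds on.

    `introduction` is the channel introduction commit, assumed authentic;
    `cache` is the set of commits authenticated during previous runs."""
    if commit == introduction or commit in cache:
        return True
    for parent in parents(commit):
        if not authentic(parent, introduction, cache):
            return False
        # The commit must be signed by a key listed in *each* parent's
        # .guix-authorizations file.
        if signer_fingerprint(commit) not in authorized_keys(parent):
            return False
    cache.add(commit)
    return True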

Once a commit has been authenticated, all the commits in its transitive closure are known to be already authenticated. guix pull keeps a local cache of the commits it has previously authenticated, which allows it to traverse only new commits. For instance, if you’re at commit F and later update to a descendant of F, authentication starts at F.

Since .guix-authorizations is a regular file under version control, granting or revoking commit authorization does not require special support. In the example above, commit B is an authorized commit by Alice that adds Bob’s key to .guix-authorizations. Revocation is similar: any authorized committer can remove entries from .guix-authorizations. Key rotation can be handled similarly: a committer can remove their former key and add their new key in a single commit, signed by the former key.

The authorization invariant satisfies our needs for Guix. It has one downside: it prevents pull-request-style workflows. Indeed, merging the branch of a contributor not listed in .guix-authorizations would break the authorization invariant. It’s a good tradeoff for Guix because our workflow relies on patches carved into stone tablets (patch tracker), but it’s not suitable for every project out there.

Bootstrapping

The attentive reader may have noticed that something’s missing from the explanation above: what do we do about commit A in the example above? In other words, which commit do we pick as the first one where we can start verifying the authorization invariant?

We solve this bootstrapping issue by defining channel introductions. Previously, one would identify a channel simply by its URL. Now, when introducing a channel to users, one needs to provide an additional piece of information: the first commit where the authorization invariant holds, and the fingerprint of the OpenPGP key used to sign that commit (it’s not strictly necessary but provides an additional check). Consider this commit graph:

On this figure, B is the introduction commit. Its ancestors, such as A, are considered authentic. To authenticate C, D, E, and F, we check the authorization invariant.

As always when it comes to establishing trust, distributing channel introductions is very sensitive. The introduction of the official guix channel is built into Guix. Users obtain it when they install Guix the first time; hopefully they verify the signature on the Guix tarball or ISO image, as noted in the installation instructions, which reduces chances of getting the “wrong” Guix, but it is still very much trust-on-first-use (TOFU).

For signed third-party channels, users have to provide the channel’s introduction in their channels.scm file, like so:

(channel
  (name 'my-channel)
  (url "https://example.org/my-channel.git")
  (introduction
    (make-channel-introduction
      "6f0d8cc0d88abb59c324b2990bfee2876016bb86"
      (openpgp-fingerprint
        "CABB A931 C0FF EEC6 900D 0CFB 090B 1199 3D9A EBB5"))))

The guix describe command now prints the introduction if there’s one. That way, one can share their channel configuration, including introductions, without having to be an expert.

Channel introductions also solve another problem: forks. Respecting the authorization invariant “forever” would effectively prevent “unauthorized” forks—forks made by someone who’s not in .guix-authorizations. Someone publishing a fork simply needs to emit a new introduction for their fork, pointing to a different starting commit.

Last, channel introductions give a point of reference: if an attacker manipulates branch heads on Savannah to have them point to unrelated commits (such as commits on an orphan branch that do not share any history with the “official” branches), authentication will necessarily fail as it stumbles upon the first unauthorized commit made by the attacker. In the figure above, the red branch with commits G and H cannot be authenticated because it starts from A, which lacks .guix-authorizations and thus fails the authorization invariant.

That’s all for authentication! I’m glad you read this far. At this point you can take a break or continue with the next section on how guix pull prevents downgrade attacks.

Downgrade attacks

An important threat for software deployment tools is downgrade or roll-back attacks. The attack consists in tricking users into installing older, known-vulnerable software packages, which in turn may offer new ways to break into their system. This is not strictly related to the authentication issue we’ve been discussing, except that it’s another important issue in this area that we took the opportunity to address.

Guix saves provenance info for itself: guix describe prints that information, essentially the Git commits of the channels used during guix pull:

$ guix describe
Generation 149  Jun 17 2020 20:00:14    (current)
  guix 8b1f7c0
    repository URL: https://git.savannah.gnu.org/git/guix.git
    branch: master
    commit: 8b1f7c03d239ca703b56f2a6e5f228c79bc1857e

Thus, guix pull, once it has retrieved the latest commit of the selected branch, can verify that it is doing a fast-forward update in Git parlance—just like git pull does, but compared to the previously-deployed Guix. A fast-forward update is when the new commit is a descendant of the current commit. Going back to the figure above, going from commit A to commit F is a fast-forward update, but going from F to A or from D to E is not.

Not doing a fast-forward update would mean that the user is deploying an older version of the Guix currently used, or deploying an unrelated version from another branch. In both cases, the user is at risk of ending up installing older, vulnerable software.

By default guix pull now errors out on non-fast-forward updates, thereby protecting from roll-backs. Users who understand the risks can override that by passing --allow-downgrades.
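
The fast-forward rule itself is easy to express: the previously deployed commit must be an ancestor of the commit being pulled. Here is a rough Python sketch, again using a hypothetical parents() helper rather than the actual Guile implementation:

# Hypothetical parents(c) returns the parent commits of c, as in the earlier sketch.

def fast_forward(current, new):
    """True if `new` is a descendant of (or equal to) the currently deployed commit."""
    seen, stack = set(), [new]
    while stack:
        commit = stack.pop()
        if commit == current:
            return True
        if commit not in seen:
            seen.add(commit)
            stack.extend(parents(commit))
    return False

# guix pull would abort when fast_forward(previous, fetched_head) is False,
# unless the user passes --allow-downgrades.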

Authentication and roll-back prevention allow users to safely refer to mirrors of the Git repository. If git.savannah.gnu.org is down, one can still update by fetching from a mirror, for instance with:

guix pull --url=https://github.com/guix-mirror/guix

If the repository at this URL is behind what the user already deployed, or if it’s not a genuine mirror, guix pull will abort. In other cases, it will proceed.

Unfortunately, there is no way to answer the general question “is X the latest commit of branch B?”. Rollback detection prevents just that, rollbacks, but there’s no mechanism in place to tell whether a given mirror is stale. To mitigate that, channel authors can specify, in the repository, the channel’s primary URL. This piece of information lives in the .guix-channel file, in the repository, so it’s authenticated. guix pull uses it to print a warning when the user pulls from a mirror:

$ guix pull --url=https://github.com/guix-mirror/guix
Updating channel 'guix' from Git repository at 'https://github.com/guix-mirror/guix'...
Authenticating channel 'guix', commits 9edb3f6 to 3e51f9e (44 new commits)...
guix pull: warning: pulled channel 'guix' from a mirror of https://git.savannah.gnu.org/git/guix.git, which might be stale
Building from this channel:
  guix      https://github.com/guix-mirror/guix    3e51f9e
…

So far we talked about mechanics in a rather abstract way. That might satisfy the graph theorist or the Git geek in you, but if you’re up for a quick tour of the implementation, the next section is for you!

A long process

We’re kinda celebrating these days, but the initial bug report was opened… in 2016. One of the reasons was that we were hoping the general problem was solved already and we’d “just” have to adapt what others had done. As for the actual design: you would think it can be implemented in ten lines of shell script invoking gpgv and git. Perhaps that’s a possibility, but the resulting performance would be problematic—keep in mind that users may routinely have to authenticate hundreds of commits. So we took a long road, but the end result is worth it. Let’s recap.

Back in April 2016, committers started signing commits, with a server-side hook prohibiting unsigned commits. In July 2016, we had proof-of-concept libgit2 bindings with the primitives needed to verify signatures on commits, passing them to gpgv; later Guile-Git was born, providing good coverage of the libgit2 interface. Then there was a two-year hiatus during which no code was produced in that area.

Everything went faster starting from December 2019. Progress was incremental and may have been hard to follow, even for die-hard Guix hackers, so here are the major milestones:

Whether you’re a channel author or a user, the feature is now fully documented in the manual, and we’d love to get your feedback!

SHA-1

We can’t really discuss Git commit signing without mentioning SHA-1. The venerable cryptographic hash function is approaching end of life, as evidenced by recent breakthroughs. Signing a Git commit boils down to signing a SHA-1 hash, because all objects in the Git store are identified by their SHA-1 hash.

Git now relies on a collision attack detection library to mitigate practical attacks. Furthermore, the Git project is planning a hash function transition to address the problem.

Some projects such as Bitcoin Core choose to not rely on SHA-1 at all. Instead, for the commits they sign, they include in the commit log the SHA512 hash of the tree, which the verification scripts check.

Computing a tree hash for each commit in Guix would probably be prohibitively costly. For now, for lack of a better solution, we rely on Git’s collision attack detection and look forward to a hash function transition.

As for SHA-1 in an OpenPGP context: our authentication code rejects SHA-1 OpenPGP signatures, as recommended.

Related work

A lot of work has gone into securing the software supply chain, often in the context of binary distros, sometimes in a more general context; more recent work also looks into Git authentication and related issues. This section attempts to summarize how Guix relates to similar work that we’re aware of in these two areas. More detailed discussions can be found in the issue tracker.

The Update Framework (TUF) is a reference for secure update systems, with a well-structured spec and a number of implementations. TUF is a great source of inspiration to think about this problem space. Many of its goals are shared by Guix. Not all the attacks it aims to protect against (Section 1.5.2 of the spec) are addressed by what’s presented in this post: indefinite freeze attacks, where updates never become available, are not addressed per se (though easily observable), and slow retrieval attacks aren’t addressed either. The notion of role is also something currently missing from the Guix authentication model, where any authorized committer can touch any files, though the model and .guix-authorizations format leave room for such an extension.

However, both in its goals and system descriptions, TUF is biased towards systems that distribute binaries as plain files with associated meta-data. That creates a fundamental impedance mismatch. As an example, attacks such as fast-forward attacks or mix-and-match attacks don’t apply in the context of Guix; likewise, the repository depicted in Section 3 of the spec has little in common with a Git repository.

Developers of OPAM, the OCaml package manager, adapted TUF for use with their Git-based package repository, later updated to write Conex, a separate tool to authenticate OPAM repositories. OPAM is interesting because, like Guix, it’s a source distro and its package repository is a Git repository containing “build recipes”. To date, it appears that opam update itself does not authenticate repositories though; it’s up to users or developers to run Conex.

Another very insightful piece of work is the 2016 paper On omitting commits and committing omissions. The paper focuses on the impact of malicious modifications to Git repository meta-data. An attacker with access to the repository can modify, for instance, branch references, to cause a rollback attack or a “teleport” attack, causing users to pull an older commit or an unrelated commit. As written above, guix pull would detect such attacks. However, guix pull would fail to detect cases where metadata modification does not yield a rollback or teleport, yet gives users a different view than the intended one—for instance, a user is directed to an authentic but different branch rather than the intended one. The “secure push” operation and the associated reference state log (RSL) the authors propose would be an improvement.

Wrap-up and outlook

Guix now has a mechanism that allows it to authenticate updates. If you’ve run guix pull recently, perhaps you’ve noticed additional output and a progress bar as new commits are being authenticated. Apart from that, the switch has been completely transparent. The authentication mechanism is built around the commit graph of Git; in fact, it’s a mechanism to authenticate Git checkouts and in that sense it is not tied to Guix and its application domain. It is available not only for the main guix channel, but also for third-party channels.

To bootstrap trust, we added the notion of channel introductions. These are now visible in the user interface, in particular in the output of guix describe and in the configuration file of guix pull and guix time-machine. While channel configuration remains a few lines of code that users typically paste, this extra bit of configuration might be intimidating. It certainly gives an incentive to provide a command-line interface to manage the user’s list of channels: guix channel add, etc.

The solution here is built around the assumption that Guix is fundamentally a source-based distribution, and is thus completely orthogonal to the public key infrastructure (PKI) Guix uses for the signature of substitutes. Yet, the substitute PKI could probably benefit from the fact that we now have a secure update mechanism for the Guix source code: since guix pull can securely retrieve a new substitute signing key, perhaps it could somehow handle substitute signing key revocation and delegation automatically? Related to that, channels could perhaps advertise a substitute URL and its signing key, possibly allowing users to register those when they first pull from the channel. All this requires more thought, but it looks like there are new opportunities here.

Until then, if you’re a user or a channel author, we’d love to hear from you! We’ve already gotten feedback that these new mechanisms broke someone’s workflow; hopefully it didn’t break yours, but either way your input is important in improving the system. If you’re into security and think this design is terrible or awesome, please do provide feedback.

It’s a long article describing a long ride on a path we discovered as we went, and it felt like an important milestone to share!

Acknowledgments

Thanks to everyone who provided feedback, ideas, or carried out code review during this long process, notably (in no particular order): Christopher Lemmer Webber, Leo Famulari, David Thompson, Mike Gerwitz, Ricardo Wurmus, Werner Koch, Justus Winter, Vagrant Cascadian, Maxim Cournoyer, Simon Tournier, John Soo, and Jakub Kądziołka. Thanks also to janneke, Ricardo, Marius, and Simon for reviewing an earlier draft of this post.

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the kernel Linux, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, and AArch64 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

Categories: FLOSS Project Planets

Drudesk: Upgrade your website from Drupal 7 to 8/9 with the Drupal Module Upgrader

Planet Drupal - Wed, 2020-07-01 12:41

The idea to upgrade a website from Drupal 7 to Drupal 8/9 is getting new perspectives.

Now that Drupal 9 is there as an official release, it’s clear like never before that Drupal 7 is getting outdated. It’s time to upgrade Drupal 7 sites so they can give their owners a much better value. The Drudesk support team knows how to achieve this through upgrades and updates, as well as speed optimization, bug fixes, redesign, and so on.

Categories: FLOSS Project Planets

Pages