FLOSS Project Planets

Hook 42: Dad Jokes, Development, and Drupal in Denver

Planet Drupal - Thu, 2019-08-15 14:47
By Aimee Degnan
Categories: FLOSS Project Planets

Python Insider: Inspect PyPI event logs to audit your account's and project's security

Planet Python - Thu, 2019-08-15 13:45
To help you check for security problems, PyPI is adding an advanced audit log of user actions that goes beyond the existing journal. This will, for instance, allow publishers to track all actions taken by third-party services on their behalf.

This beta feature is live now on PyPI and on Test PyPI.

Background:
We're further increasing the security of the Python Package Index with another new beta feature: an audit log of sensitive actions that affect users and projects. This is thanks to a grant from the Open Technology Fund, coordinated by the Packaging Working Group of the Python Software Foundation.

Details:
Project security history display, listing events (such as "file removed from release version 1.0.1") with user, date/time, and IP address for each event. We're adding a display so you can look at things that have happened in your user account or project, and check for signs that someone has stolen your credentials.

In your account settings, you can view a log of sensitive actions from the last two weeks that are relevant to your user account. And if you are an Owner of at least one project on PyPI, you can go to that project's Manage Project page to view a log of sensitive actions (performed by any user) relevant to that project. (PyPI site administrators are able to view the full audit log for all users and all projects.)

Please help us test this, and report issues.

User security history display, listing events (such as "API token added") with additional details (such as token scope), date/time, and IP address for each event.

In beta:
We're still refining this feature and may fail to log, or to properly display, events in the audit log. Also, sensitive event logging and display only started on 16 August 2019, so you won't see sensitive events from before that date. (Read more technical details about the implementation in the GitHub issue.)

Next:
We're continuing to refine all our beta features, while working on accessibility improvements and starting to work on localization on PyPI. Follow our progress reports in more detail on Discourse.
Categories: FLOSS Project Planets

Community Working Group posts: Announcing Drupal Event Code of Conduct Training

Planet Drupal - Thu, 2019-08-15 13:25

The Drupal Community Working Group is happy to announce that we've teamed up with Otter Tech to offer live, monthly, online Code of Conduct enforcement training for Drupal Event organizers and volunteers through the end of 2019. 

The training is designed to provide "first responder" skills to Drupal community members who take reports of potential Code of Conduct issues at Drupal events, including meetups, camps, conventions, and other gatherings. The workshops will be attended by Code of Conduct enforcement teams from other open source events, which will allow cross-pollination of knowledge with the Drupal community.

Each monthly online workshop covers the same material; community members need to attend only one workshop of their choice to complete the training. We strongly encourage all Drupal event organizers to consider sponsoring one or two persons' attendance at this workshop.

The monthly online workshops will be presented by Sage Sharp, Otter Tech's CEO and a diversity and inclusion leader in the open source community. According to the official description, the workshop will include:

  • Practice taking a report of a potential Code of Conduct violation (an incident report)
  • Practice following up with the reported person
  • Instructor modeling on how to take a report and follow up on a report
  • One practice scenario for a report given at an event
  • One practice scenario for a report given in an online community
  • Discussion on bias, microaggressions, personal conflicts, and false reporting
  • Frameworks for evaluating a response to a report
  • 40 minutes total of Q&A time

In addition, we have received a Drupal Community Cultivation Grant to help defray the cost of the workshop for those who need assistance. The standard cost of the workshop is $350; Otter Tech has worked with us so that we can offer it for $300. To register for the workshop, first let us know that you're interested by completing this sign-up form - everyone who completes the form will receive a coupon code for $50 off the regular price of the workshop.

For those that require additional assistance, we have a limited number of $100 subsidies available, bringing the workshop price down to $200. Subsidies will be provided based on reported need as well as our goal to make this training opportunity available to all corners of our community. To apply for the subsidy, complete the relevant section on the sign-up form. The deadline for applying for the subsidy is end-of-business on Friday, September 6, 2019 - those selected for the subsidy will be notified after this date (in time for the September 9, 2019 workshop).

The workshops will be held on:

  • September 9 (Monday) at 3 pm to 7 pm U.S. Pacific Time / 8 am to 12 pm Australia Eastern Time
  • October 23 (Wednesday) at 5 am to 9 am U.S. Pacific Time / 2 pm to 6 pm Central European Time
  • November 14 (Thursday) at 6 pm to 10 pm U.S. Pacific Time / 1 pm to 5 pm Australia Eastern Time
  • December 4 (Wednesday) at 9 am to 1 pm U.S. Pacific Time / 6 pm to 10 pm Central European Time

Those who successfully complete the training will (at their discretion) be listed on Drupal.org (in the Drupal Community Working Group section) as a means to show that they have completed the training. We feel that, moving forward, the Drupal community has the opportunity to have professionally trained Code of Conduct contacts at the vast majority of our events, once again leading the way in the open source community.

We are fully aware that presenting the workshops in English limits who will be able to attend. We are more than interested in finding additional professional Code of Conduct workshops in other languages. Please contact us if you can assist.

Categories: FLOSS Project Planets

InternetDevels: Great examples of high tech company websites built on Drupal

Planet Drupal - Thu, 2019-08-15 11:42

Drupal keeps moving into the future, adopting more and more innovative trends. No wonder high-tech engineering leaders trust Drupal and build their sites with it.

Drupal in high-tech: innovative companies + innovative CMS

They have found each other! Thinking about Drupal’s innovative spirit, plenty of capabilities come to mind, so here are at least a few:

Read more
Categories: FLOSS Project Planets

KTouch in KDE Apps 19.08.0

Planet KDE - Thu, 2019-08-15 10:34

KTouch, an application to learn and practice touch typing, has received a considerable update with today's release of KDE Apps 19.08.0. It includes a complete redesign by me of the home screen, which is where you select the lesson to train on.

New home screen of KTouch

There is now a new sidebar offering all the courses KTouch ships with, covering a total of 34 different keyboard layouts. Previous versions presented only the courses matching the current keyboard layout. Now it is much more obvious how to train on keyboard layouts other than the current one.

Other improvements in this release include:

  • Tab focus now works as expected throughout the application and allows training without ever touching the mouse.
  • Access to training statistics for individual lessons has been added to the home screen.
  • KTouch now supports rendering on HiDPI screens.

KTouch 19.08.0 is available on the Snap Store and is coming to your Linux distribution.

Categories: FLOSS Project Planets

Eelke Blok: Fighting content spam on Drupal with Akismet

Planet Drupal - Thu, 2019-08-15 10:15

Once, the Drupal community had Mollom, and everything was good. It was a web service that would let you use an API to scan comments and other user-submitted content and it would let your site know whether it thought it was spam, or not, so it could safely publish the content. Or not. It was created by our very own Dries Buytaert and obviously had a Drupal module. It was the service of choice for Drupal sites struggling with comment spam. Unfortunately, Mollom no longer exists. But there is an alternative, from the WordPress world: Akismet.
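
For a sense of the mechanics, here is a hedged sketch of the kind of HTTP call such services expose, using Akismet's comment-check endpoint (the API key and sample values are placeholders; check the Akismet API documentation for the full parameter list). The response body is simply true for spam or false for ham:

$ curl -d "blog=https://example.com" \
       -d "user_ip=192.0.2.1" \
       -d "comment_content=Buy cheap watches" \
       https://YOUR_API_KEY.rest.akismet.com/1.1/comment-check
true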

Categories: FLOSS Project Planets

Julian Andres Klode: APT Patterns

Planet Debian - Thu, 2019-08-15 09:55

If you have ever used aptitude a bit more extensively on the command line, you’ll probably have come across its patterns. This week I spent some time implementing (some) patterns for apt, so you do not need aptitude for that, and I want to let you in on the details of merge request !74.

so, what are patterns?

Patterns allow you to specify complex search queries to select the packages you want to install/show. For example, the pattern ?garbage can be used to find all packages that have been automatically installed but are no longer depended upon by manually installed packages. Or the pattern ?automatic allows you to find all automatically installed packages.

You can combine patterns into more complex ones; for example, ?and(?automatic,?obsolete) matches all automatically installed packages that do not exist any longer in a repository.
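
To make this concrete, here is what running such a query could look like (the output line is illustrative; quoting the pattern keeps the shell from interpreting ? and parentheses):

$ apt list '?and(?automatic,?obsolete)'
Listing...
libexample1/now 1.2-3 amd64 [installed,local]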

There are also explicit targets, so you can perform queries like ?for x: ?depends(?recommends(x)): Find all packages x that depend on another package that recommends x. I do not fully comprehend those yet - I did not manage to create a pattern that matches all manually installed packages that a meta-package depends upon. I am not sure it is possible.

reducing pattern syntax

aptitude’s syntax for patterns is quite context-sensitive. If you have a pattern ?foo(?bar) it can have two possible meanings:

  1. If ?foo takes arguments (like ?depends does), then ?bar is the argument.
  2. Otherwise, ?foo(?bar) is equivalent to ?foo?bar which is short for ?and(?foo,?bar)

I find that very confusing. So, when looking at implementing patterns in APT, I went for a different approach: I first parse the pattern into a generic parse tree, without knowing anything about the semantics, and then I convert the parse tree into an APT::CacheFilter::Matcher, an object that can match against packages.

This is useful, because the syntactic structure of the pattern can be seen, without having to know which patterns have arguments and which do not - basically, for the parser ?foo and ?foo() are the same thing. That said, the second pass knows whether a pattern accepts arguments or not and insists on you adding them if required and not having them if it does not accept any, to prevent you from confusing yourself.

aptitude also supports shortcuts. For example, you could write ~c instead of ?config-files, or ~m for ?automatic; then combine them like ~m~c instead of using ?and. I have not implemented these short patterns for now, focusing instead on getting the basic functionality working.

So in our example ?foo(?bar) above, we can immediately dismiss parsing that as ?foo?bar:

  1. we do not support concatenation instead of ?and.
  2. we automatically parse ( as the argument list, no matter whether ?foo supports arguments or not

(Screenshot: apt not understanding invalid patterns.)

Supported syntax

At the moment, APT supports two kinds of patterns: basic logic ones like ?and, and patterns that apply to an entire package as opposed to a specific version. This was done as a starting point for the merge; patterns for versions will come in the next round.

We also do not have any support for explicit search targets such as ?for x: ... yet - as explained, I do not yet fully understand them, and hence do not want to commit on them.

The full list of the first round of patterns is below, helpfully converted from the apt-patterns(7) docbook to markdown by pandoc.

logic patterns

These patterns provide the basic means to combine other patterns into more complex expressions, as well as ?true and ?false patterns.

?and(PATTERN, PATTERN, ...)

Selects objects where all specified patterns match.

?false

Selects nothing.

?not(PATTERN)

Selects objects where PATTERN does not match.

?or(PATTERN, PATTERN, ...)

Selects objects where at least one of the specified patterns match.

?true

Selects all objects.

package patterns

These patterns select specific packages.

?architecture(WILDCARD)

Selects packages matching the specified architecture, which may contain wildcards using any.

?automatic

Selects packages that were installed automatically.

?broken

Selects packages that have broken dependencies.

?config-files

Selects packages that are not fully installed, but have only residual configuration files left.

?essential

Selects packages that have Essential: yes set in their control file.

?exact-name(NAME)

Selects packages with the exact specified name.

?garbage

Selects packages that can be removed automatically.

?installed

Selects packages that are currently installed.

?name(REGEX)

Selects packages where the name matches the given regular expression.

?obsolete

Selects packages that no longer exist in repositories.

?upgradable

Selects packages that can be upgraded (have a newer candidate).

?virtual

Selects all virtual packages; that is, packages without a version. These exist when they are referenced somewhere in the archive, for example because something depends on that name.

examples
apt remove ?garbage

Remove all packages that are automatically installed and no longer needed - same as apt autoremove

apt purge ?config-files

Purge all packages that only have configuration files left

oddities

Some things are not yet where I want them:

  • ?architecture does not support all, native, or same
  • ?installed should match only the installed version of the package, not the entire package (that is what aptitude does, and it’s a bit surprising that ?installed implies a version and ?upgradable does not)

the future

Of course, I do want to add support for the missing version patterns and explicit search patterns. I might even add support for some of the short patterns, but no promises. Some of those explicit search patterns might have slightly different syntax, e.g. ?for(x, y) instead of ?for x: y in order to make the language more uniform and easier to parse.

Another thing I want to do ASAP is to disable fallback to regular expressions when specifying package names on the command-line: apt install g++ should always look for a package called g++, and not for any package containing g (g++ being a valid regex) when there is no g++ package. I think continuing to allow regular expressions if they start with ^ or end with $ is fine - that prevents any overlap with package names, and would avoid breaking most stuff.

There also is the fallback to fnmatch(): currently, if apt cannot find a package with the specified name using the exact name or the regex, it falls back to interpreting the argument as a glob(7) pattern. For example, apt install apt* would fall back to installing every package starting with apt if there is no package matching that as a regular expression. We can actually keep those in place, as the glob(7) syntax does not overlap with valid package names.

Maybe I should allow using [] instead of () so larger patterns become more readable, and/or some support for comments.

There are also plans for AppStream based patterns. This would allow you to use apt install ?provides-mimetype(text/xml) or apt install ?provides-lib(libfoo.so.2). It’s not entirely clear how to package this though, we probably don’t want to have libapt-pkg depend directly on libappstream.

feedback

Talk to me on IRC, comment on the Mastodon thread, or send me an email if there’s anything you think I’m missing or should be looking at.

Categories: FLOSS Project Planets

Continuum Analytics Blog: 4 Machine Learning Use Cases in the Automotive Sector

Planet Python - Thu, 2019-08-15 09:54

From parts suppliers to vehicle manufacturers, service providers to rental car companies, the automotive and related mobility industries stand to gain significantly from implementing machine learning at scale. We see the big automakers investing in…

The post 4 Machine Learning Use Cases in the Automotive Sector appeared first on Anaconda.

Categories: FLOSS Project Planets

Kushal Das: git checkout to previous branch

Planet Python - Thu, 2019-08-15 07:26

We regularly move between git branches while working on projects. I always used to type the full branch name, say, to go back to the develop branch and then come back to the feature branch. That is a lot of typing (for the branch names etc.). I found out that we can use -, just as we use cd - to go back to the previous directory we were in.

git checkout -
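
For example (branch names here are hypothetical):

$ git checkout develop
Switched to branch 'develop'
$ git checkout feature-x
Switched to branch 'feature-x'
$ git checkout -
Switched to branch 'develop'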

Here is a small video for demonstration.

I hope this will be useful for some people.

Categories: FLOSS Project Planets

OSTraining: How to Group Entity Fields in Tabs in Drupal 8

Planet Drupal - Thu, 2019-08-15 01:46

Extensive nodes (or other types of entities) with many text fields, such as biographies, often remain unread because of the huge (and discouraging) amount of text.

The Drupal 8 "Field Group" module allows you to group fields and to present them in containers like vertical or horizontal tabs, accordions or just plain wrappers. It lets you group fields in the frontend of your site, and in the backend as well.

Keep reading to learn how to use this module!

Categories: FLOSS Project Planets

Malthe Borch: Using built-in transparent compression on MacOS

Planet Python - Wed, 2019-08-14 17:41

Ever since DriveSpace on MS-DOS (or really, Stacker), we've had transparent file compression with varying degrees of automation. While DriveSpace compression on MS-DOS was a fully automated affair, the built-in transparent compression in newer filesystems such as ZFS, Btrfs, APFS (and even HFS+) is engaged manually on a per-file or per-folder basis.

But no one's using it!

On my system, compressing /Applications saved 18GB (38.7%).

MacOS doesn't actually come with a utility to do this even though the core functionality is included, so you'll need to install an open source tool in order to use it.

$ brew install afsctool

To compress a file or folder, use the -c flag like so:

$ afsctool -c /Applications

(You might need to use root for some application and/or system files).
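
To gauge the savings, one rough check is to compare on-disk usage before and after, since du reports actual allocated space (the sizes shown here are illustrative):

$ du -sh /Applications   # before
 46G    /Applications
$ afsctool -c /Applications
$ du -sh /Applications   # after
 28G    /Applications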

Categories: FLOSS Project Planets

Continuum Analytics Blog: Accessing Remote Data with a Generalized File System

Planet Python - Wed, 2019-08-14 15:31

Originally created for the needs of Dask, we have spun out a general file system implementation and specification, to provide all users with simple access to many local, cluster, and remote storage media. Dask and Intake…

The post Accessing Remote Data with a Generalized File System appeared first on Anaconda.

Categories: FLOSS Project Planets

Agaric Collective: Migrating addresses into Drupal

Planet Drupal - Wed, 2019-08-14 14:55

Today we will learn how to migrate addresses into Drupal. We are going to use the field provided by the Address module, which depends on the third-party library commerceguys/addressing. When migrating addresses you need to be careful with the data that Drupal expects. The address components can change per country, and the way those components are stored also varies per country. These and other important considerations will be explained. Let’s get started.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD address, whose machine name is ud_migrations_address. The migration to execute is udm_address. Notice that this migration writes to a content type called UD Address with one field: field_ud_address. This content type and field will be created when the module is installed. They will also be removed when the module is uninstalled. The demo module itself depends on the following modules: address and migrate.

Note: Configuration placed in a module’s config/install directory will be copied to Drupal’s active configuration. And if those files have a dependencies/enforced/module key, the configuration will be removed when the listed modules are uninstalled. That is how the content type and fields are automatically created and deleted.
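
For example, a configuration file shipped in config/install could include a section like this (the module name matches this example; the key structure is the standard Drupal convention):

dependencies:
  enforced:
    module:
      - ud_migrations_address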

The recommended way to install the Address module is using composer: composer require drupal/address. This will grab the Drupal module and the commerceguys/addressing library that it depends on. If your Drupal site is not composer-based, an alternative is to use the Ludwig module. Read this article if you want to learn more about this option. In the example, it is assumed that the module and its dependency were obtained via composer. Also, keep an eye on the Composer Support in Core Initiative as they make progress.

Source and destination sections

The example will migrate three addresses from the following countries: Nicaragua, Germany, and the United States of America (USA). This makes it possible to show how different countries expect different address data. As usual, for any migration you need to understand the source. The following code snippet shows how the source and destination sections are configured:

source:
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      first_name: 'Michele'
      last_name: 'Metts'
      company: 'Agaric LLC'
      city: 'Boston'
      state: 'MA'
      zip: '02111'
      country: 'US'
    - unique_id: 2
      first_name: 'Stefan'
      last_name: 'Freudenberg'
      company: 'Agaric GmbH'
      city: 'Hamburg'
      state: ''
      zip: '21073'
      country: 'DE'
    - unique_id: 3
      first_name: 'Benjamin'
      last_name: 'Melançon'
      company: 'Agaric SA'
      city: 'Managua'
      state: 'Managua'
      zip: ''
      country: 'NI'
  ids:
    unique_id:
      type: integer
destination:
  plugin: 'entity:node'
  default_bundle: ud_address

Note that not every address component is set for all addresses. For example, the Nicaraguan address does not contain a ZIP code, and the German address does not contain a state. Also, the Nicaraguan state is fully spelled out: Managua. By contrast, the USA state is a two-letter abbreviation: MA for Massachusetts. One more thing that might not be apparent is that the USA ZIP code belongs to the state of Massachusetts. All of this is important because the module validates addresses. The destination is the custom ud_address content type created by the module.

Available subfields

The Address field has 13 subfields available. They can be found in the schema() method of the AddressItem class. Fields are not required to have a one-to-one mapping between their schema and the form widgets used for entering content. This is particularly true for addresses, because input elements, labels, and validations change dynamically based on the selected country. The following is a reference list of all subfields for addresses:

  1. langcode for language code.
  2. country_code for country.
  3. administrative_area for administrative area (e.g., state or province).
  4. locality for locality (e.g. city).
  5. dependent_locality for dependent locality (e.g. neighbourhood).
  6. postal_code for postal or ZIP code.
  7. sorting_code for sorting code.
  8. address_line1 for address line 1.
  9. address_line2 for address line 2.
  10. organization for company.
  11. given_name for first name.
  12. additional_name for middle name.
  13. family_name for last name.

Properly describing an address is not trivial. For example, there are discussions to add a third address line component. Check this issue if you need this functionality or would like to participate in the discussion.

Address subfield mappings

In the example, only 9 out of the 13 subfields will be mapped. The following code snippet shows how to do the processing of the address field:

field_ud_address/given_name: first_name
field_ud_address/family_name: last_name
field_ud_address/organization: company
field_ud_address/address_line1:
  plugin: default_value
  default_value: 'It is a secret ;)'
field_ud_address/address_line2:
  plugin: default_value
  default_value: 'Do not tell anyone :)'
field_ud_address/locality: city
field_ud_address/administrative_area: state
field_ud_address/postal_code: zip
field_ud_address/country_code: country

The mapping is relatively simple: you specify a value for each subfield. The tricky part is knowing the name of the subfield and the value to store in it. The format of an address component can change between countries. The easiest way to see which components are expected for each country is to create a node for a content type that has an address field. With this example, you can go to /node/add/ud_address and try it yourself. For simplicity’s sake, let’s consider only 3 countries:

  • For USA, city, state, and ZIP code are all required. And for state, you have a specific list from which you need to select.
  • For Germany, the company is moved above first and last name. The ZIP code label changes to Postal code and it is required. The city is also required. It is not possible to set a state.
  • For Nicaragua, the Postal code is optional. The State label changes to Department. It is required and offers a predefined list to choose from. The city is also required.

Pay very close attention. The available subfields will depend on the country. Also, the form labels change per country or language settings. They do not necessarily match the subfield names. Moreover, the values that you see on the screen might not match what is stored in the database. For example, a Nicaraguan address will store the full department name like Managua. On the other hand, a USA address will only store a two-letter code for the state like MA for Massachusetts.

Something else that is not apparent even from the user interface is data validation. For example, let’s say that you have a USA address and select Massachusetts as the state. Entering the ZIP code 55111 will produce the following error: Zip code field is not in the right format. At first glance, the format seems correct: a five-digit code. The real problem is that the Address module validates whether that ZIP code is valid for the selected state. It is not valid for Massachusetts: 55111 is a ZIP code for the state of Minnesota, which makes the validation fail. Unfortunately, the error message does not indicate that. Nine-digit ZIP codes are accepted as long as they belong to the selected state.

Finding expected values

Values for the same subfield can vary per country. How can you find out which value to use? There are a few ways, but they all require varying levels of technical knowledge or access to resources:

  • You can inspect the source code of the address field widget. When the country and state components are rendered as select input fields (dropdowns), you can have a look at the value attribute for the option that you want to select. This will contain the two-letter code for countries, the two-letter abbreviations for USA states, and the fully spelled string for Nicaraguan departments.
  • You can use the Devel module. Create a node containing an address. Then use the devel tab of the node to inspect how the values are stored. It is not recommended to have the devel module in a production site. In fact, do not deploy the code even if the module is not enabled. This approach should only be used in a local development environment. Make sure no module or configuration is committed to the repo nor deployed.
  • You can inspect the database. Look for the records in a table named node__field_[field_machine_name], if migrating nodes. First create some example nodes via the user interface and then query the table. You will see how Drupal stores the values in the database.
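
As a sketch of that last approach for this example's field (drush's sql:query command and the node__field_[field_machine_name] column convention are standard, but verify the exact names against your own database):

$ drush sql:query "SELECT entity_id, field_ud_address_country_code,
    field_ud_address_administrative_area, field_ud_address_postal_code
    FROM node__field_ud_address"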

If you know a better way, please share it in the comments.

The commerceguys addressing library

With version 8 came many changes in the way Drupal is developed. Now there is an intentional effort to integrate with the greater PHP ecosystem. This involves using existing libraries and frameworks, like Symfony, but also making code written for Drupal available as external libraries that other projects can use. commerceguys/addressing is one example of a library that was made available this way, and the Address module makes use of it.

Explaining how the library works or where it fetches its data is beyond the scope of this article; refer to the library documentation for more details. We are only going to point out some things that are relevant for the migration. For example, the ZIP code validation happens in the validatePostalCode() method of the AddressFormatConstraintValidator class. There is no need to know this for a migration project, but the key thing to remember is that the migration can be affected by third-party libraries outside of Drupal core or contributed modules. Another example is the value for the state subfield: the Address module expects a subdivision as listed in one of the files in the resources/subdivision directory.

Does the validation really affect the migration? We have already mentioned that the Migrate API bypasses Form API validations, and that is true for address fields as well. You can migrate a USA address with state Florida and ZIP code 55111. Both are invalid, because you need to use the two-letter state code FL and a valid ZIP code within the state. Nevertheless, the migration will not fail in this case. In fact, if you visit the migrated node you will see that Drupal happily shows the address with the data that you entered. The problems arrive when you need to use the address. If you try to edit the node, you will see that the state is not preselected. And if you try to save the node after selecting Florida, you will get the validation error for the ZIP code.

These validation issues can be hard to track down because the migration itself will not throw any errors. The recommendation is to migrate a sample combination of countries and address components. Then, manually check whether editing a node shows the migrated data for all the subfields, and check that the address passes Form API validations upon saving. This manual testing can save you a lot of time and money down the road. After all, if you have an ecommerce site, you do not want to be shipping your products to wrong or invalid addresses. ;-)

Technical note: The commerceguys/addressing library actually follows ISO standards. Particularly, ISO 3166 for country and state codes. It also uses CLDR and Google's address data. The dataset is stored as part of the library’s code in JSON format.

Migrating countries and zone fields

The Address module offers two more field types: Country and Zone. Both have only one subfield, value, which is selected by default. For country, you store the two-letter country code. For zone, you store a serialized version of a Zone object.
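
For instance, a minimal mapping sketch for a hypothetical Country field named field_ud_country, reusing the country source column from the earlier example:

field_ud_country/value: country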

What did you learn in today’s blog post? Have you migrated addresses before? Did you know the full list of subcomponents available? Did you know that data expectations change per country? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Read more and discuss at agaric.coop.

Categories: FLOSS Project Planets

TechBeamers Python: Append Vs. Extend in Python List

Planet Python - Wed, 2019-08-14 14:51

In this tutorial, you’ll explore the difference between the append and extend methods of the Python list. Both methods manipulate lists in their own specific way. The append method adds a single item, or a group of items passed as one sequence, as a single element at the tail of a list. The extend method, on the other hand, appends each of the input elements individually to the end of the original list. If the distinction still seems a bit confusing after these descriptions, don’t worry: we’ll explain each of these methods with examples and show the difference between them.
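
The contrast is easiest to see at the interpreter (plain Python list semantics, nothing project-specific assumed):

>>> a = [1, 2]
>>> a.append([3, 4])  # the argument becomes one nested element
>>> a
[1, 2, [3, 4]]
>>> b = [1, 2]
>>> b.extend([3, 4])  # each item is appended individually
>>> b
[1, 2, 3, 4]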

The post Append Vs. Extend in Python List appeared first on Learn Programming and Software Testing.

Categories: FLOSS Project Planets

Real Python: An Effective Python Environment: Making Yourself at Home

Planet Python - Wed, 2019-08-14 10:00

When you’re first learning a new programming language, a lot of your time and effort go into understanding the syntax, code style, and built-in tooling. This is just as true for Python as it is for any other language. Once you gain enough familiarity to be comfortable with the ins and outs of Python, you can start to invest time into building a Python environment that will foster your productivity.

Your shell is more than a prebuilt program provided to you as-is. It’s a framework on which you can build an ecosystem. This ecosystem will come to fit your needs so that you can spend less time fiddling and more time thinking about the next big project you’re working on.

Although no two developers have the same setup, there are a number of choices everyone faces when cultivating their Python environment. It’s important to understand each of these decisions and the options available to you!

By the end of this article, you’ll be able to answer questions like:

  • What shell should I use? What terminal should I use?
  • What version(s) of Python can I use?
  • How do I manage dependencies for different projects?
  • How can I make my tools do some of the work for me?

Once you’ve answered these questions for yourself, you can embark on the journey of creating a Python environment to call your very own. Let’s get started!

Shells

When you use a command-line interface (CLI), you execute commands and see their output. A shell is a program that provides this (usually text-based) interface to you. Shells often provide their own programming language that you can use to manipulate files, install software, and so on.

There are more unique shells than could be reasonably listed here, so you’ll see a few prominent ones. Others differ in syntax or enhanced features, but they generally provide the same core functionality.

Unix Shells

Unix is a family of operating systems first developed in the early days of computing. Unix’s popularity has lasted through today, heavily inspiring Linux and macOS. The first shells were developed for use with Unix and Unix-like operating systems.

Bourne Shell (sh)

The Bourne shell—developed by Stephen Bourne for Bell Labs in 1979—was one of the first to incorporate the idea of environment variables, conditionals, and loops. It has provided a strong basis for many other shells in use today and is still available on most systems at /bin/sh.

Bourne-Again Shell (bash)

Built on the success of the original Bourne shell, bash introduced improved user-interaction features. With bash, you get Tab completion, history, and wildcard searching for commands and paths. The bash programming language provides more data types, like arrays.

Z Shell (zsh)

zsh combines many of the best features from other shells along with a few of its own tricks into one experience. zsh offers autocorrection of misspelled commands, shorthand for manipulating multiple files, and advanced options for customizing your command prompt.

zsh also provides a framework for deep customization. The Oh My Zsh project supplies a rich set of themes and plugins, and is often used hand in hand with zsh.

macOS will ship with zsh as its default shell starting with Catalina, speaking to the shell’s popularity. Consider acquainting yourself with zsh now so that you’ll be comfortable with it going forward.

Xonsh

If you’re feeling particularly adventurous, you can give Xonsh a try. Xonsh is a shell that combines some features of other Unix-like shells with the power of Python syntax. You can use the language you already know to accomplish tasks on your filesystem and so on.

Although Xonsh is powerful, it lacks the compatibility other shells tend to share. You might not be able to run many existing shell scripts in Xonsh as a result. If you find that you like Xonsh, but compatibility is a concern, then you can use Xonsh as a supplement to your activities in a more widely used shell.

Windows Shells

Similarly to Unix-like operating systems, Windows also offers a number of options when it comes to shells. The shells offered in Windows vary in features and syntax, so you may need to try several to find one you like best.

CMD (cmd.exe)

CMD (short for “command”) is the default CLI shell for Windows. It’s the successor to COMMAND.COM, the shell built for DOS (disk operating system).

Because DOS and Unix evolved independently, the commands and syntax in CMD are markedly different from shells built for Unix-like systems. However, CMD still provides the same core functionality for browsing and manipulating files, running commands, and viewing output.

PowerShell

PowerShell was released in 2006 and also ships with Windows. It provides Unix-like aliases for most commands, so if you’re coming to Windows from macOS or Linux or have to use both, then PowerShell might be great for you.

PowerShell is vastly more powerful than CMD. With PowerShell you can:

  • Pipe the output of one command to the input of another
  • Automate tasks through the exposed Windows management features
  • Use a scripting language to accomplish complex tasks
Windows Subsystem for Linux

Microsoft has released the Windows Subsystem for Linux (WSL) for running Linux directly on Windows. If you install WSL, then you can use zsh, bash, or any other Unix-like shell. If you want strong compatibility across your Windows and macOS or Linux environments, then be sure to give WSL a try. You may also consider dual-booting Linux and Windows as an alternative.

See this comparison of command shells for exhaustive coverage.

Terminal Emulators

Early developers used terminals to interact with a central mainframe computer. These were devices with a keyboard and a screen or printer that would display computed output.

Today, computers are portable and don’t require separate devices to interact with them, but the terminology still remains. Whereas a shell provides the prompt and interpreter you use to interface with text-based CLI tools, a terminal emulator (often shortened to terminal) is the graphical application you run to access the shell.

Almost any terminal you encounter should support the same basic features:

  • Text colors for syntax highlighting in your code or distinguishing meaningful text in command output
  • Scrolling for viewing an earlier command or its output
  • Copy/paste for transferring text in or out of the shell from other programs
  • Tabs for running multiple programs at once or separating your work into different sessions
macOS Terminals

The terminal options available for macOS are all full-featured, differing mostly in aesthetics and specific integrations with other tools.

Terminal

If you’re using a Mac, then you may have used the built-in Terminal app before. Terminal supports all the usual functionality, and you can also customize the color scheme and a few hotkeys. It’s a nice enough tool if you don’t need many bells and whistles. You can find the Terminal app in Applications → Utilities → Terminal on macOS.

iTerm2

I’ve been a long-time user of iTerm2. It takes the developer experience on Mac a step further, offering a much wider palette of customization and productivity options that enable you to:

  • Integrate with the shell to jump quickly to previously entered commands
  • Create custom search term highlighting in the output from commands
  • Open URLs and files displayed in the terminal with Cmd+click

A Python API ships with the latest versions of iTerm2, so you can even improve your Python chops by developing more intricate customizations!

iTerm2 is popular enough to enjoy first-class integration with several other tools, and has a healthy community building plugins and so on. It’s a good choice because of its more frequent release cycle compared to Terminal, which only updates as often as macOS does.

Hyper

A relative newcomer, Hyper is a terminal built on Electron, a framework for building desktop applications using web technologies. Electron apps are heavily customizable because they’re “just JavaScript” under the hood. You can create any functionality that you can write the JavaScript for.

On the other hand, JavaScript is a high-level programming language and won’t always perform as well as low-level languages like Objective-C or Swift. Be mindful of the plugins you install or create!

Windows Terminals

As with the shell options, Windows terminal options vary widely in utility. Some are tightly bound to a particular shell as well.

Command Prompt

Command Prompt is the graphical application you can use to work with CMD in Windows. Like CMD, it’s a bare-bones tool for getting a few small things done. Although Command Prompt and CMD provide fewer features than other alternatives, you can be confident that they’ll be available on nearly every Windows installation and in a consistent place.

Cygwin

Cygwin is a third-party suite of tools for Windows that provides a Unix-like wrapper. This was my preferred setup when I was on Windows, but you may consider adopting the Windows Subsystem for Linux instead as it receives more traction and polish.

Windows Terminal

Microsoft recently released an open source terminal for Windows 10 called Windows Terminal. It lets you work in CMD, PowerShell, and even the Windows Subsystem for Linux. If you need to do a fair amount of shell work in Windows, then Windows Terminal is probably your best bet! Windows Terminal is still in late beta, so it doesn’t ship with Windows yet. Check the documentation for instructions on getting access.

Python Version Management

With your choice of terminal and shell made, you can focus your attention on your Python environment specifically.

Something you’ll eventually run into is the need to run multiple versions of Python. Projects you use may only run on certain versions, or you may be interested in creating a project that supports multiple Python versions. You can configure your Python environment to accommodate these needs.

macOS and most Unix operating systems come with a version of Python installed by default. This is often called the system Python. The system Python works just fine, but it’s usually out of date. As of this writing, macOS High Sierra still ships with Python 2.7.10 as the system Python.

Note: You’ll almost certainly want to install the latest version of Python at a minimum, so you’ll have at least two versions of Python already.

It’s important that you leave the system Python as the default, because many parts of the system rely on the default Python being a specific version. This is one of many great reasons to customize your Python environment!

How do you navigate this? Tooling is here to help.

pyenv

pyenv is a mature tool for installing and managing multiple Python versions. I recommend installing it with Homebrew. After you’ve got pyenv installed, you can install multiple versions of Python into your Python environment with a few short commands:

$ pyenv versions
* system

$ python --version
Python 2.7.10

$ pyenv install 3.7.3  # This may take some time

$ pyenv versions
* system
  3.7.3

You can manage which Python you’d like to use in your current session, globally, or on a per-project basis as well. pyenv will make the python command point to whichever Python you specify. Note that none of these overrides the default system Python for other applications, so you’re safe to use them however they work best for you within your Python environment:

$ pyenv global 3.7.3
$ pyenv versions
  system
* 3.7.3 (set by /Users/dhillard/.pyenv/version)

$ pyenv local 3.7.3
$ pyenv versions
  system
* 3.7.3 (set by /Users/dhillard/myproj/.python-version)

$ pyenv shell 3.7.3
$ pyenv versions
  system
* 3.7.3 (set by PYENV_VERSION environment variable)

$ python --version
Python 3.7.3

Because I use a specific version of Python for work, the latest version of Python for personal projects, and multiple versions for testing open source projects, pyenv has proven to be a fairly smooth way for me to manage all these different versions within my own Python environment. See Managing Multiple Python Versions with pyenv for a detailed overview of the tool.

conda

If you’re in the data science community, you might already be using Anaconda (or Miniconda). Anaconda is a sort of one-stop shop for data science software that supports more than just Python.

If you don’t need the data science packages or all the things that come pre-packaged with Anaconda, pyenv might be a better lightweight solution for you. Managing Python versions is pretty similar in each, though. You can install Python versions similarly to pyenv, using the conda command:

$ conda install python=3.7.3

You’ll see a verbose list of all the dependent software conda will install, and it will ask you to confirm.

conda doesn’t have a way to set the “default” Python version or even a good way to see which versions of Python you’ve installed. Rather, it hinges on the concept of “environments,” which you can read more about in the following sections.

Virtual Environments

Now you know how to manage multiple Python versions. Often, you’ll be working on multiple projects that need the same Python version.

Because each project has its own set of dependencies, it’s a good practice to avoid mixing them. If all the dependencies are installed together in a single Python environment, then it will be difficult to discern where each one came from. In the worst cases, two different projects may depend on two different versions of a package, but with Python you can only have one version of a package installed at one time. What a mess!

Enter virtual environments. You can think of a virtual environment as a carbon copy of a base version of Python. If you’ve installed Python 3.7.3, for example, then you can create many virtual environments based off of it. When you install a package in a virtual environment, you do it in isolation from other Python environments you may have. Each virtual environment has its own copy of the python executable.

Tip: Most virtual environment tooling provides a way to update your shell’s command prompt to show the current active virtual environment. Make sure to do this if you frequently switch between projects so you’re sure you’re working inside the correct virtual environment.

venv

venv ships with Python versions 3.3+. You can create virtual environments just by passing it a path at which to store the environment’s python, installed packages, and so on:

$ python -m venv ~/.virtualenvs/my-env

You activate a virtual environment by sourcing its activate script:

$ source ~/.virtualenvs/my-env/bin/activate
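
Once activated, python resolves to the environment's own copy; a quick way to confirm (the path shown is illustrative):

(my-env)$ which python
/Users/dhillard/.virtualenvs/my-env/bin/python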

You exit the virtual environment using the deactivate command, which is made available when you activate the virtual environment:

(my-env)$ deactivate

venv is built on the wonderful work and successes of the independent virtualenv project. virtualenv still provides a few interesting features of its own, but venv is nice because it provides the utility of virtual environments without requiring you to install additional software. You can probably get pretty far with it if you’re working mostly in a single Python version in your Python environment.

If you’re already managing multiple Python versions (or plan to), then it could make sense to integrate with that tooling to simplify the process of making new virtual environments with specific versions of Python. The pyenv and conda ecosystems both provide ways to specify the Python version to use when you create new virtual environments, covered in the following sections.

pyenv-virtualenv

If you’re using pyenv, then pyenv-virtualenv enhances pyenv with a subcommand for managing virtual environments:

// Create virtual environment
$ pyenv virtualenv 3.7.3 my-env

// Activate virtual environment
$ pyenv activate my-env

// Exit virtual environment
(my-env)$ pyenv deactivate

I switch contexts between a large handful of projects on a day-to-day basis. As a result, I have at least a dozen distinct virtual environments to manage in my Python environment. What’s really nice about pyenv-virtualenv is that you can configure a virtual environment using the pyenv local command and have pyenv-virtualenv auto-activate the right environments as you switch to different directories:

$ pyenv virtualenv 3.7.3 proj1
$ pyenv virtualenv 3.7.3 proj2
$ cd /Users/dhillard/proj1
$ pyenv local proj1
(proj1)$ cd ../proj2
$ pyenv local proj2
(proj2)$ pyenv versions
  system
  3.7.3
  3.7.3/envs/proj1
  3.7.3/envs/proj2
  proj1
* proj2 (set by /Users/dhillard/proj2/.python-version)

pyenv and pyenv-virtualenv have provided a particularly fluid workflow in my Python environment.

conda

You saw earlier that conda treats environments, rather than Python versions, as the main method of working. conda has built-in support for managing virtual environments:

// Create virtual environment
$ conda create --name my-env python=3.7.3

// Activate virtual environment
$ conda activate my-env

// Exit virtual environment
(my-env)$ conda deactivate

conda will install the specified version of Python if it isn’t already installed, so you don’t have to run conda install python=3.7.3 first.

pipenv

pipenv is a relatively new tool that seeks to combine package management (more on this in a moment) with virtual environment management. It mostly abstracts the virtual environment management from you, which can be great as long as things go smoothly:

$ cd /Users/dhillard/myproj

// Create virtual environment
$ pipenv install
Creating a virtualenv for this project…
Pipfile: /Users/dhillard/myproj/Pipfile
Using /path/to/pipenv/python3.7 (3.7.3) to create virtualenv…
✔ Successfully created virtual environment!
Virtualenv location: /Users/dhillard/.local/share/virtualenvs/myproj-nAbMEAt0
Creating a Pipfile for this project…
Pipfile.lock not found, creating…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
Updated Pipfile.lock (a65489)!
Installing dependencies from Pipfile.lock (a65489)…
🐍   ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0/0 — 00:00:00
To activate this project's virtualenv, run pipenv shell.
Alternatively, run a command inside the virtualenv with pipenv run.

// Activate virtual environment (uses a subshell)
$ pipenv shell
Launching subshell in virtual environment…
 . /Users/dhillard/.local/share/virtualenvs/test-nAbMEAt0/bin/activate

// Exit virtual environment (by exiting subshell)
(myproj-nAbMEAt0)$ exit

pipenv does all the heavy lifting of creating a virtual environment and activating it for you. If you look carefully, you can see that it also creates a file called Pipfile. After you first run pipenv install, this file contains just a few things:

[[source]] name = "pypi" url = "https://pypi.org/simple" verify_ssl = true [dev-packages] [packages] [requires] python_version = "3.7"

In particular, note that it shows python_version = "3.7". By default, pipenv creates a virtual Python environment using the same Python version it was installed under. If you want to use a different Python version, then you can create the Pipfile yourself before running pipenv install and specify the version you want. If you have pyenv installed, then pipenv will use it to install the specified Python version if necessary.
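
For example, a Pipfile that pins a different version could end with a section like this (the version shown is illustrative):

[requires]
python_version = "3.6"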

Abstracting virtual environment management is a noble goal of pipenv, but it does get hung up with hard-to-read errors occasionally. Give it a try, but don’t worry if you feel confused or overwhelmed by it. The tool, documentation, and community will grow and improve around it as it matures.

To get an in-depth introduction to virtual environments, be sure to read Python Virtual Environments: A Primer.

Package Management

For many of the projects you work on, you’ll probably need some number of third-party packages. Those packages may have their own dependencies in turn. In the early days of Python, using packages involved manually downloading files and pointing Python at them. Today, we’re fortunate to have a variety of package management tools available to us.

Most package managers work in tandem with virtual environments, isolating the packages you install in one Python environment from another. Using the two together is where you really start to see the power of the tools available to you.

pip

pip (pip installs packages) has been the de facto standard for package management in Python for several years. It was heavily inspired by an earlier tool called easy_install. Python incorporated pip into the standard distribution starting in version 3.4. pip automates the process of downloading packages and making Python aware of them.

If you have multiple virtual environments, then you can see that they’re isolated by installing a few packages in one:

$ pyenv virtualenv 3.7.3 proj1
$ pyenv activate proj1
(proj1)$ pip list
Package    Version
---------- ---------
pip        19.1.1
setuptools 40.8.0

(proj1)$ python -m pip install requests
Collecting requests
  Downloading .../requests-2.22.0-py2.py3-none-any.whl (57kB)
    100% |████████████████████████████████| 61kB 2.2MB/s
Collecting chardet<3.1.0,>=3.0.2 (from requests)
  Downloading .../chardet-3.0.4-py2.py3-none-any.whl (133kB)
    100% |████████████████████████████████| 143kB 1.7MB/s
Collecting certifi>=2017.4.17 (from requests)
  Downloading .../certifi-2019.6.16-py2.py3-none-any.whl (157kB)
    100% |████████████████████████████████| 163kB 6.0MB/s
Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 (from requests)
  Downloading .../urllib3-1.25.3-py2.py3-none-any.whl (150kB)
    100% |████████████████████████████████| 153kB 1.7MB/s
Collecting idna<2.9,>=2.5 (from requests)
  Downloading .../idna-2.8-py2.py3-none-any.whl (58kB)
    100% |████████████████████████████████| 61kB 26.6MB/s
Installing collected packages: chardet, certifi, urllib3, idna, requests
Successfully installed packages

$ pip list
Package    Version
---------- ---------
certifi    2019.6.16
chardet    3.0.4
idna       2.8
pip        19.1.1
requests   2.22.0
setuptools 40.8.0

pip installed requests, along with several packages it depends on. pip list shows you all the currently installed packages and their versions.

Warning: You can uninstall packages using pip uninstall requests, for example, but this will only uninstall requests—not any of its dependencies.

A common way to specify project dependencies for pip is with a requirements.txt file. Each line in the file specifies a package name and, optionally, the version to install:

scipy==1.3.0
requests==2.22.0

You can then run python -m pip install -r requirements.txt to install all of the specified dependencies at once. For more on pip, see What is Pip? A Guide for New Pythonistas.

pipenv

pipenv has most of the same basic operations as pip but thinks about packages a bit differently. Remember the Pipfile that pipenv creates? When you install a package, pipenv adds that package to Pipfile and also adds more detailed information to a new lock file called Pipfile.lock. Lock files act as a snapshot of the precise set of packages installed, including direct dependencies as well as their sub-dependencies.

You can see pipenv sorting out the package management when you install a package:

$ pipenv install requests
Installing requests…
Adding requests to Pipfile's [packages]…
✔ Installation Succeeded
Pipfile.lock (444a6d) out of date, updating to (a65489)…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
✔ Success!
Updated Pipfile.lock (444a6d)!
Installing dependencies from Pipfile.lock (444a6d)…
🐍   ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 5/5 — 00:00:00

pipenv will use this lock file, if present, to install the same set of packages. You can ensure that you always have the same set of working dependencies in any Python environment you create using this approach.
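
For example, recreating the locked environment on another machine is a single command. Here's a sketch using pipenv's sync subcommand, which installs exactly what Pipfile.lock specifies:

$ pipenv sync
Installing dependencies from Pipfile.lock (444a6d)…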

pipenv also distinguishes between development dependencies and production (regular) dependencies. You may need some tools during development, such as black or flake8, that you don’t need when you run your application in production. You can specify that a package is for development when you install it:

$ pipenv install --dev flake8
Installing flake8…
Adding flake8 to Pipfile's [dev-packages]…
✔ Installation Succeeded
...

pipenv install (without any arguments) will only install your production packages by default, but you can tell it to install development dependencies as well with pipenv install --dev.

poetry

poetry addresses additional facets of package management, including creating and publishing your own packages. After installing poetry, you can use it to create a new project:

$ poetry new myproj
Created package myproj in myproj
$ ls myproj/
README.rst    myproj    pyproject.toml    tests

Similarly to how pipenv creates the Pipfile, poetry creates a pyproject.toml file. This recent standard contains metadata about the project as well as dependency versions:

[tool.poetry]
name = "myproj"
version = "0.1.0"
description = ""
authors = ["Dane Hillard <github@danehillard.com>"]

[tool.poetry.dependencies]
python = "^3.7"

[tool.poetry.dev-dependencies]
pytest = "^3.0"

[build-system]
requires = ["poetry>=0.12"]
build-backend = "poetry.masonry.api"

You can install packages with poetry add (or as development dependencies with poetry add --dev):

$ poetry add requests
Using version ^2.22 for requests

Updating dependencies
Resolving dependencies... (0.2s)

Writing lock file

Package operations: 5 installs, 0 updates, 0 removals

  - Installing certifi (2019.6.16)
  - Installing chardet (3.0.4)
  - Installing idna (2.8)
  - Installing urllib3 (1.25.3)
  - Installing requests (2.22.0)

poetry also maintains a lock file, and it has a benefit over pipenv because it keeps track of which packages are subdependencies. As a result, you can uninstall requests and its dependencies with poetry remove requests.

conda

With conda, you can use pip to install packages as usual, but you can also use conda install to install packages from different channels, which are collections of packages provided by Anaconda or other providers. To install requests from the conda-forge channel, you can run conda install -c conda-forge requests.
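
As a rough sketch of a typical conda workflow (the environment name proj1 is just an example, and conda activate requires conda 4.4 or later):

$ conda create --name proj1 python=3.7
$ conda activate proj1
(proj1)$ conda install -c conda-forge requests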

Learn more about package management in conda in Setting Up Python for Machine Learning on Windows.

Python Interpreters

If you’re interested in further customization of your Python environment, you can choose the command line experience you have when interacting with Python. The Python interpreter provides a read-eval-print loop (REPL), which is what comes up when you type python with no arguments in your shell:

Python 3.7.3 (default, Jun 17 2019, 14:09:05)
[Clang 10.0.1 (clang-1001.0.46.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> 2 + 2
4
>>> exit()

The REPL reads what you type, evaluates it as Python code, and prints the result. Then it waits to do it all over again. This is about as much as the default Python REPL provides, which is sufficient for a good portion of typical work.

IPython

Like Anaconda, IPython is a suite of tools supporting more than just Python, but one of its main features is an alternative Python REPL. IPython’s REPL numbers each command and explicitly labels each command’s input and output. After installing IPython (python -m pip install ipython), you can run the ipython command in place of the python command to use the IPython REPL:

Python 3.7.3
Type 'copyright', 'credits' or 'license' for more information
IPython 6.0.0.dev -- An enhanced Interactive Python. Type '?' for help.

In [1]: 2 + 2
Out[1]: 4

In [2]: print("Hello!")
Hello!

IPython also supports Tab completion, more powerful help features, and strong integration with other tooling such as matplotlib for graphing. IPython provided the foundation for Jupyter, and both have been used extensively in the data science community because of their integration with other tools.

The IPython REPL is highly configurable too, so while it falls just shy of being a full development environment, it can still be a boon to your productivity. Its built-in and customizable magic commands are worth checking out.
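
As a small taste, here is a sketch of a few built-in magics and helpers (%timeit and %history are standard IPython magics, and the trailing ? works on any object):

In [1]: %timeit sum(range(1000))  # micro-benchmark a statement
In [2]: %history                  # review the commands you've typed so far
In [3]: import requests
In [4]: requests.get?             # append ? to get rich help for an object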

bpython

bpython is another alternative REPL that provides inline syntax highlighting, tab completion, and even auto-suggestions as you type. It provides quite a few of the quick benefits of IPython without altering the interface much. Without the weight of the integrations and so on, bpython might be good to add to your repertoire for a while to see how it improves your use of the REPL.

Text Editors

You spend a third of your life sleeping, so it makes sense to invest in a great bed. As a developer, you spend a great deal of your time reading and writing code, so it follows that you should invest time in setting up your Python environment’s text editor just the way you like it.

Each editor offers a different set of key bindings and model for manipulating text. Some require a mouse to interact with them effectively, whereas others can be controlled with only the keyboard. Some people consider their choice of text editor and customizations some of the most personal decisions they make!

There are many options to choose from in this arena, so I won’t attempt to cover them in detail here. Check out Python IDEs and Code Editors (Guide) for a broad overview. A good strategy is to find a simple, small text editor for quick changes and a full-featured IDE for more involved work. Vim and PyCharm, respectively, are my editors of choice.

Python Environment Tips and Tricks

Once you’ve made the big decisions about your Python environment, the rest of the road is paved with little tweaks to make your life a little easier. Each tweak may save you only seconds or minutes at a time, but collectively they save you hours.

Making a certain activity easier reduces your cognitive load so you can focus on the task at hand instead of the logistics surrounding it. If you notice yourself performing an action over and over, then consider automating it. Use this wonderful chart from XKCD to determine if it’s worth automating a particular task.

Here are a few final tips.

Know your current virtual environment

As mentioned earlier, it’s a great idea to display the active Python version or virtual environment in your command prompt. Most tools will do this for you, but if not (or if you want to customize the prompt), the value is usually contained in the VIRTUAL_ENV environment variable.
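
For example, a quick check from Python itself might look like this sketch:

import os

venv = os.environ.get("VIRTUAL_ENV")
if venv:
    print(f"Active virtual environment: {os.path.basename(venv)}")
else:
    print("No virtual environment active")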

Disable unnecessary, temporary files

Have you ever noticed *.pyc files all over your project directories? These files are pre-compiled Python bytecode—they help Python start your application faster. In production, these are a great idea because they’ll give you some performance gain. During local development, however, they’re rarely useful. Set PYTHONDONTWRITEBYTECODE=1 to disable this behavior. If you find use cases for them later, then you can easily remove this from your Python environment.
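
You can set the variable in your shell's startup file:

$ export PYTHONDONTWRITEBYTECODE=1

The same switch is also exposed at runtime as sys.dont_write_bytecode in the standard library, in case you ever need to toggle it per process.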

Customize your Python interpreter

You can affect how the REPL behaves using a startup file. Python will read this startup file and execute the code it contains before entering the REPL. Set the PYTHONSTARTUP environment variable to the path of your startup file. (Mine’s at ~/.pystartup.) If you’d like to hit Up for command history and Tab for completion like your shell provides, then give this startup file a try.
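
If you don't have a startup file yet, here is a minimal sketch to start from (it assumes a Unix-like system where the standard library's readline module is available; the paths are just conventions):

# ~/.pystartup -- point the PYTHONSTARTUP environment variable at this file
import atexit
import os
import readline
import rlcompleter  # importing this registers Python-aware tab completion

readline.parse_and_bind("tab: complete")

history_path = os.path.expanduser("~/.python_history")
if os.path.exists(history_path):
    readline.read_history_file(history_path)

# Save the command history when the REPL exits.
atexit.register(readline.write_history_file, history_path)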

Conclusion

You learned about many facets of the typical Python environment. Armed with this knowledge, you can:

  • Choose a terminal with the aesthetics and enhanced features you like
  • Choose a shell with as many (or as few) customization options as you need
  • Manage multiple versions of Python on your system
  • Manage multiple projects that use a single version of Python, using virtual Python environments
  • Install packages in your virtual environments
  • Choose a REPL that suits your interactive coding needs

When you’ve got your Python environment just so, I hope you’ll share screenshots, screencasts, or blog posts about your perfect setup ✨


Categories: FLOSS Project Planets

Drudesk: Drupal 8 for marketers: key benefits & useful modules

Planet Drupal - Wed, 2019-08-14 09:51

It‘s easy and enjoyable to create marketing campaigns, drive leads, and tell your brand’s story to the world if your website is on the right CMS. Drupal 8’s benefits will definitely impress any marketer. So let’s take a closer look at the greatness of Drupal 8 for marketers, see what makes it so valuable, and name a few useful modules.

Categories: FLOSS Project Planets

Catalin George Festila: Python 3.7.3 : Using the flask - part 014.

Planet Python - Wed, 2019-08-14 06:20
Today I worked on a YouTube search project using Flask and the Google API. The source code is simple to understand, and you can test any Google API this way. I created a new Google project with the YouTube API version 3 and an API key. I use this key to connect from the Flask Python module. I also used the isodate Python module. You can see the source code in my GitHub repo named flask_yt.
Categories: FLOSS Project Planets

Introducing Qt Quick 3D: A high-level 3D API for Qt Quick

Planet KDE - Wed, 2019-08-14 06:03

As Lars mentioned in his Technical Vision for Qt 6 blog post, we have been researching how we could have a deeper integration between 3D and Qt Quick. As a result we have created a new project, called Qt Quick 3D, which provides a high-level API for creating 3D content for user interfaces from Qt Quick. Rather than using an external engine which can lead to animation synchronization issues and several layers of abstraction, we are providing extensions to the Qt Quick Scenegraph for 3D content, and a renderer for those extended scene graph nodes.

Does that mean we wrote yet another 3D Solution for Qt?  Not exactly, because the core spatial renderer is derived from the Qt 3D Studio renderer. This renderer was ported to use Qt for its platform abstraction and refactored to meet Qt project coding style.

“San Miguel” test scene running in Qt Quick 3D

What are our Goals? Why another 3D Solution?

Unified Graphics Story

The single most important goal is that we want to unify our graphics story. Currently we are offering two comprehensive solutions for creating fluid user interfaces, each having its own corresponding tooling.  One of these solutions is Qt Quick, for 2D, the other is Qt 3D Studio, for 3D.  If you limit yourself to using either one or the other, things usually work out quite fine.  However, what we found is that users typically ended up needing to mix and match the two, which leads to many pitfalls both in run-time performance and in developer/designer experience.

Therefore, and for simplicity’s sake, we aim to have one runtime (Qt Quick), one common scene graph (Qt Quick Scenegraph), and one design tool (Qt Design Studio).  This should present no compromises in features, performance or the developer/designer experience. This way we do not need to further split our development focus between more products, and we can deliver more features and fixes faster.

Intuitive and Easy to Use API

The next goal for Qt Quick 3D is to provide an API for defining 3D content, an API that is approachable and usable by developers without the need to understand the finer details of the modern graphics pipeline.  After all, the majority of users do not need to create specialized 3D graphics renderers for each of their applications, but rather just want to show some 3D content, often alongside 2D.  So we have been developing Qt Quick 3D with this perspective in mind.

That being said, we will be exposing more and more of the rendering API over time which will make more advanced use cases, needed by power-users, possible.

At the time of writing of this post we are only providing a QML API, but the goal in the future is to provide a public C++ API as well.

Unified Tooling for Qt Quick

Qt Quick 3D is intended to be the successor to Qt 3D Studio.  For the time being Qt 3D Studio will still continue to be developed, but in the long term it will be replaced by Qt Quick and Qt Design Studio.

Here we intend to take the best parts of Qt 3D Studio and roll them into Qt Quick and Qt Design Studio.  So rather than needing a separate tool for Qt Quick or 3D, it will be possible to just do both from within Qt Design Studio.  We are working on the details of this now and hope to have a preview of this available soon.

For existing users of Qt 3D Studio, we have been working on a porting tool to convert projects to Qt Quick 3D. More on that later.

First Class Asset Conditioning Pipeline

When dealing with 3D scenes, asset conditioning becomes more important because now there are more types of assets being used, and they tend to be much bigger overall.  So as part of the Qt Quick 3D development effort we have been looking at how we can make it as easy as possible to import your content and bake it into efficient runtime formats for Qt Quick.

For example, at design time you will want to specify the assets you are using based on what your asset creation tools generate (like FBX files from Maya for 3D models, or PSD files from Photoshop for textures), but at runtime you would not want the engine to use those formats.  Instead, you will want to convert the assets into some efficient runtime format, and have them updated each time the source assets change.  We want this to be an automated process as much as possible, and so want to build this into the build system and tooling of Qt.

Cross-platform Performance and Compatibility

Another of our goals is to support multiple native graphics APIs, using the new Rendering Hardware Interface being added to Qt. Currently, Qt Quick 3D only supports rendering using OpenGL, like many other components in Qt. However, in Qt 6 we will be using the QtRHI as our graphics abstraction and there we will be able to support rendering via Vulkan, Metal and Direct3D as well, in addition to OpenGL.

What is Qt Quick 3D? (and what it is not)

Qt Quick 3D is not a replacement for Qt 3D, but rather an extension of Qt Quick’s functionality to render 3D content using a high-level API.

Here is what a very simple project with some helpful comments looks like:

import QtQuick 2.12
import QtQuick.Window 2.12
import QtQuick3D 1.0

Window {
    id: window
    visible: true
    width: 1280
    height: 720

    // Viewport for 3D content
    View3D {
        id: view
        anchors.fill: parent

        // Scene to view
        Node {
            id: scene

            Light {
                id: directionalLight
            }

            Camera {
                id: camera
                // It's important that your camera is not inside
                // your model so move it back along the z axis
                // The Camera is implicitly facing up the z axis,
                // so we should be looking towards (0, 0, 0)
                z: -600
            }

            Model {
                id: cubeModel
                // #Cube is one of the "built-in" primitive meshes
                // Other Options are:
                // #Cone, #Sphere, #Cylinder, #Rectangle
                source: "#Cube"

                // When using a Model, it is not enough to have a
                // mesh source (ie "#Cube")
                // You also need to define what material to shade
                // the mesh with. A Model can be built up of
                // multiple sub-meshes, so each mesh needs its own
                // material. Materials are defined in an array,
                // and order reflects which mesh to shade
                // All of the default primitive meshes contain one
                // sub-mesh, so you only need 1 material.
                materials: [
                    DefaultMaterial {
                        // We are using the DefaultMaterial which
                        // dynamically generates a shader based on what
                        // properties are set. This means you don't
                        // need to write any shader code yourself.
                        // In this case we just want the cube to have
                        // a red diffuse color.
                        id: cubeMaterial
                        diffuseColor: "red"
                    }
                ]
            }
        }
    }
}

The idea is that defining 3D content should be as easy as 2D.  There are a few extra things you need, like the concepts of Lights, Cameras, and Materials, but all of these are high-level scene concepts, rather than implementation details of the graphics pipeline.

This simple API comes at the cost of less power, of course.  While it may be possible to customize materials and the content of the scene, it is not possible to completely customize how the scene is rendered, unlike in Qt 3D with its customizable framegraph.  Instead, for now there is a fixed forward renderer, and you can define with properties in the scene how things are rendered.  This is like other existing engines, which typically have a few possible rendering pipelines to choose from, and those then render the logical scene.

A Camera orbiting around a Car Model in a Skybox with Axis and Gridlines (note: stutter is from the 12 FPS GIF)

What Can You Do with Qt Quick 3D?

Well, it can do many things, but these are built up using the following scene primitives:

Node

Node is the base component for any node in the 3D scene.  It represents a transformation in 3D space, but is itself non-visual.  It works similarly to how the Item type works in Qt Quick.

Camera

Camera represents how a scene is projected to a 2D surface. A camera has a position in 3D space (as it is a Node subclass) and a projection.  To render a scene, you need to have at least one Camera.

Light

The Light component defines a source of lighting in the scene, at least for materials that consider lighting.  Right now, there are 3 types of lights: Directional (default), Point and Area.

Model

The Model component is the one visual component in the scene.  It represents a combination of geometry (from a mesh) and one or more materials.

The source property of the Model component expects a .mesh file, which is the runtime format used by Qt Quick 3D.  To get mesh files, you need to convert 3D models using the asset import tool.  There are also a few built-in primitives. These can be used by setting the following values to the source property: #Cube, #Cylinder, #Sphere, #Cone, or #Rectangle.

We will also be adding a programmatic way to define your own geometry at runtime, but that is not yet available in the preview.

Before a Model can be rendered, it must also have a Material. This defines how the mesh is shaded.

DefaultMaterial and Custom Materials

The DefaultMaterial component is an easy to use, built-in material.  All you need to do is to create this material, set the properties you want to define, and under the hood all necessary shader code will be automatically generated for you.  All the other properties you set on the scene are taken into consideration as well. There is no need to write any graphics shader code (such as, vertex or fragment shaders) yourself.

It is also possible to define so-called CustomMaterials, where you do provide your own shader code.  We also provide a library of pre-defined CustomMaterials you can try out by just adding the following to your QML imports:

import QtQuick3D.MaterialLibrary 1.0

Texture

The Texture component represents a texture in the 3D scene, as well as how it is mapped to a mesh.  The source for a texture can either be an image file, or a QML Component.

A Sample of the Features Available

3D Views inside of Qt Quick

To view 3D content inside of Qt Quick, it is necessary to flatten it to a 2D surface.  To do this, you use the View3D component.  View3D is the only QQuickItem-based component in the whole API.  You can either define the scene as a child of the View3D or reference an existing scene by setting the scene property to the root Node of the scene you want to render.

If you have more than one camera, you can also set which camera you want to use to render the scene.  By default, it will just use the first active camera defined in the scene.

Also it is worth noting that View3D items do not necessarily need to be rendered to off-screen textures first.  It is possible to set one of the following 4 render modes to define when the 3D content is rendered:

  1. Texture: View3D is a Qt Quick texture provider and renders content to a texture via an FBO
  2. Underlay: View3D is rendered before Qt Quick’s 2D content is rendered, directly to the window (3D is always under 2D)
  3. Overlay: View3D is rendered after Qt Quick’s 2D content is rendered, directly to the window (3D is always over 2D)
  4. RenderNode: View3D is rendered in-line with the Qt Quick 2D content.  This can however lead to some quirks due to how Qt Quick 2D uses the depth buffer in Qt 5.

2D Views inside of 3D

It could be that you also want to render Qt Quick content inside of a 3D scene.  To do so, anywhere a Texture is taken as a property value (for example, in the diffuseMap property of DefaultMaterial), you can use a Texture with its sourceItem property set, instead of just specifying a file in the source property. This way the referenced Qt Quick item will be automatically rendered and used as a texture, as the sketch below shows.

The diffuse color textures being mapped to the cubes are animated Qt Quick 2D items.
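
As a rough sketch of the idea (Texture, sourceItem, and diffuseMap come from the API described above; the 2D content itself is arbitrary):

Model {
    source: "#Cube"
    materials: [
        DefaultMaterial {
            diffuseMap: Texture {
                sourceItem: Rectangle {
                    width: 256
                    height: 256
                    color: "orange"
                    Text {
                        anchors.centerIn: parent
                        text: "Hello from Qt Quick 2D"
                    }
                }
            }
        }
    ]
}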

3D QML Components

Due to Qt Quick 3D being built on QML, it is possible to create reusable components for 3D as well.  For example, if you create a Car model consisting of several Models, just save it to Car.qml. You can then instantiate multiple instances of Car by just reusing it, like any other QML type. This is very important because this way 2D and 3D scenes can be created using the same component model, instead of having to deal with different approaches for the 2D and 3D scenes.
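
For instance, a hypothetical Car.qml (the file name and mesh sources are placeholders) could look like this sketch:

// Car.qml -- a reusable 3D component; mesh file names are placeholders
import QtQuick3D 1.0

Node {
    Model {
        source: "carBody.mesh"
        materials: [ DefaultMaterial { diffuseColor: "blue" } ]
    }
    Model {
        source: "wheel.mesh"
        materials: [ DefaultMaterial { diffuseColor: "black" } ]
    }
}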

Multiple Views of the Same Scene

Because scene definitions can exist anywhere in a Qt Quick project, it’s possible to reference them from multiple View3Ds.  If you had multiple cameras in a scene, you could even render from each one to a different View3D.

4 views of the same Teapot scene. Also changing between 3 Cameras in the Perspective view.

Shadows

Any Light component can specify that it is casting shadows.  When this is enabled, shadows are automatically rendered in the scene.  Depending on what you are doing though, rendering shadows can be quite expensive, so you can fine-tune which Model components cast and receive shadows by setting additional properties on the Model.

Image Based Lighting

In addition to the standard Light components, it’s possible to light your scene by defining an HDRI map. This Texture can be set either for the whole View3D in its SceneEnvironment property, or on individual Materials.

Animations

Animations in Qt Quick 3D use the same animation system as Qt Quick.  You can bind any property to an animator and it will be animated and updated as expected. Using the QtQuickTimeline module it is also possible to use keyframe-based animations.

Like the component model, this is another important step in reducing the gap between 2D and 3D scenes, as no separate, potentially conflicting animation systems are used here.

Currently there is no support for rigged animations, but that is planned in the future.

How Can You Try it Out?

The intention is to release Qt Quick 3D as a technical preview along with the release of Qt 5.14.  In the meantime, it is already possible to use it against Qt 5.12 and higher.

To get the code, you just need to build the QtQuick3D module which is located here:

https://git.qt.io/annichol/qtquick3d

What About Tooling?

The goal is that it should be possible via Qt Design Studio to do everything you need to set up a 3D scene. That means being able to visually lay out the scene, import 3D assets like meshes, materials, and textures, and convert those assets into efficient runtime formats used by the engine.

A demonstration of early Qt Design Studio integration for Qt Quick 3D

Importing 3D Scenes to QML Components

Qt Quick 3D can also be used by writing QML code manually. Therefore, we also have some stand-alone utilities for converting assets.  One such tool is the balsam asset conditioning tool.  Right now it is possible to feed this utility an asset from a 3D asset creation tool like Blender, Maya, or 3DS Max, and it will generate a QML component representing the scene, as well as any textures, meshes, and materials it uses.  Currently this tool supports generating scenes from the following formats:

  • FBX
  • Collada (dae)
  • OBJ
  • Blender (blend)
  • GLTF2

To convert the file myTestScene.fbx you would run:

./balsam -o ~/exportDirectory myTestScene.fbx

This would generate a file called MyTestScene.qml together with any assets needed. Then you can just use it like any other Component in your scene:

import QtQuick 2.12
import QtQuick.Window 2.12
import QtQuick3D 1.0

Window {
    width: 1920
    height: 1080
    visible: true
    color: "black"

    Node {
        id: sceneRoot
        Light {
        }
        Camera {
            z: -100
        }
        MyTestScene {
        }
    }

    View3D {
        anchors.fill: parent
        scene: sceneRoot
    }
}

We are working to improve the assets generated by this tool, so expect improvements in the coming months.

Converting Qt 3D Studio Projects

In addition to being able to generate 3D QML components from 3D asset creation tools, we have also created a plugin for our asset import tool to convert existing Qt 3D Studio projects.  If you have used Qt 3D Studio before, you will know it generates projects in XML format to define the scene.  If you give the balsam tool a UIP or UIA project generated by Qt 3D Studio, it will also generate a Qt Quick 3D project based on that.  Note however that since the runtime used by Qt 3D Studio is different from Qt Quick 3D, not everything will be converted. It should nonetheless give a good approximation or starting point for converting an existing project.  We hope to continue improving support for this path to smooth the transition for existing Qt 3D Studio users.

Qt 3D Studio example application ported using Qt Quick 3D’s import tool. (it’s not perfect yet)

What About Qt 3D?

The first question I expect to get is why not just use Qt 3D?  This is the same question we have been exploring the last couple of years.

One natural assumption is that we could just build all of Qt Quick on top of Qt 3D if we want to mix 2D and 3D. We intended to and started to do this with the 2.3 release of Qt 3D Studio. Qt 3D’s powerful API provided a good abstraction for implementing a rendering engine to re-create the behavior expected by Qt Quick and Qt 3D Studio. However, Qt 3D’s architecture makes it difficult to get the performance we needed on entry-level embedded hardware. Qt 3D also comes with a certain overhead from its own limited runtime as well as from being yet another level of abstraction between Qt Quick and the graphics hardware.  In its current form, Qt 3D is not ideal to build on if we want to reach a fully unified graphics story while ensuring continued good support for a wide variety of platforms and devices ranging from low to high end.

At the same time, we already had a rendering engine in Qt 3D Studio that did exactly what we needed, and it was a good basis for building additional functionality.  This comes with the downside that we no longer have the powerful APIs that come with Qt 3D, but in practice once you start building a runtime on top of Qt 3D, you already end up making decisions about how things should work, leading to a limited ability to customize the framegraph anyway. In the end the most practical decision was to use the existing Qt 3D Studio rendering engine as our base, and build off of that.

What is the Plan Moving Forward?

This release is just a preview of what is to come.  The plan is to provide Qt Quick 3D as a fully supported module along with the Qt 5.15 LTS.  In the meantime we are working on further developing Qt Quick 3D for release as a Tech Preview with Qt 5.14.

For the Qt 5 series we are limited in how deeply we can combine 2D and 3D because of binary compatibility promises.  With the release of Qt 6 we are planning an even deeper integration of Qt Quick 3D into Qt Quick to provide an even smoother experience.

The goal here is that we want to be as efficient as possible when mixing 2D and 3D content, without introducing any additional overhead for users who do not use any 3D content at all.  We will not be doing anything drastic like forcing all Qt Quick apps to go through the new renderer, only the ones that mix 2D and 3D.

In Qt 6 we will also be using the Qt Rendering Hardware Interface to render Qt Quick (including 3D) scenes which should eliminate many of the current issues we have today with deployment of OpenGL applications (by using DirectX on Windows, Metal on macOS, etc.).

We also want to make it possible for end users to use the C++ Rendering API we have created more generically, without Qt Quick.  The code is there now as private API, but we are waiting until the Qt 6 time-frame (and the RHI porting) before we make the compatibility promises that come with public APIs.

Feedback is Very Welcome!

This is a tech preview, so much of what you see now is subject to change.  For example, the API is a bit rough around the edges now, so we would like to know what we are missing, what doesn’t make sense, what works, and what doesn’t. The best way to provide this feedback is through the Qt Bug Tracker.  Just remember to use the Qt Quick: 3D component when filing your bugs/suggestions.

The post Introducing Qt Quick 3D: A high-level 3D API for Qt Quick appeared first on Qt Blog.

Categories: FLOSS Project Planets

Dropsolid: Debugging segmentation faults in Drupal

Planet Drupal - Wed, 2019-08-14 04:15

Segfaults in Drupal are not a common occurrence - but when they do pop up, they can pose some tricky challenges... Our DevOps Engineer Mattias Michaux sheds some light on how to debug segmentation faults.

As a Drupal web developer, it can be frightening to encounter a so-called segmentation fault or segfault. This is a type of failure raised by hardware with memory protection, notifying you that the software has attempted to access a restricted area of memory. This kind of fault is common in languages with low-level memory management, such as C. Coding in a scripting language like PHP usually implies that you will be spared from segfaults, but on rare occasions these kinds of errors can still pop up. And if they do, they tend to leave no stack trace or meaningful error clue in the PHP log. This is because segfault errors originate in the layers located below the PHP engine. Let me talk you through a recent consulting project that involved a curious segmentation fault.

Segfault case study

Not so long ago, one of our clients started experiencing seemingly random ‘503 service unavailable’ errors, at random times on random pages. That’s plenty of randomness, without much of a clue to start from.

The segfaults had started occurring after an update from PHP 5.6 to PHP 7.2 and the switch from mod_php (where PHP is executed as a module within the Apache process) to PHP-FPM (where PHP runs as a standalone service that Apache connects to).

We tried to reproduce the errors on a copy of the site and infrastructure of the project. The segfault was only reproducible when we ran a scraping tool on the site. There was no apparent connection between the pages it occurred on, and the error came up with less than 1% of the total requests. Looking at the logs, there seemed to be no direct cause for this error.

[proxy_fcgi:error] [pid 10272] AH01067: Failed to read FastCGI header
[proxy_fcgi:error] [pid 10272] (104)Connection reset by peer: AH01075: Error dispatching request to ****

Example of the unhelpful Apache error message

The message above only tells us that something went wrong, but it provides no indication as to what the cause might be. Next, the PHP-FPM log indicated a segmentation fault:

WARNING: [pool web] child 3824 exited on signal 11 (SIGSEGV) after 3.353763 seconds from start

The syslog entry didn’t turn out to be very helpful either:

kernel: [4734894.041892] traps: php-fpm7.2[3760] general protection ip:555ce4cb6342 sp:7ffccab418c8 error:0
kernel: [4734894.041897] in php-fpm7.2[555ce4a51000+411000]

To find out what the root cause might be, we needed to recompile PHP with debugging enabled. This allowed us to produce core dumps, which contain a full backtrace. A backtrace is a summary of how the program got to a particular point. It displays one line per frame for many frames, starting with the frame that is currently being executed. I suggest finding a guide like this one on how to compile your PHP with debugging enabled.
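
Getting a core file at all usually also requires raising the core size limit and telling the kernel where to write dumps. As a sketch (paths and values are assumptions and vary per distribution):

$ ulimit -c unlimited
$ sudo sysctl kernel.core_pattern=/tmp/coredump-%e.%p

; and in the PHP-FPM pool configuration:
rlimit_core = unlimited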

After compiling PHP, we copied the php.ini and pool config from the original PHP version to the freshly compiled one. Next, we altered the config, so now the PHP-FPM pool used a separate name and socket path, and we changed that socket in the vhost. After starting the new PHP-FPM instance and running the crawler to reproduce the issue, we quickly saw the same 503 errors showing up, but this time with a core dump. We started comparing the dumps and tried to find a pattern. Fortunately, most cases pointed to the same thing: calls to unserialize() on large objects. This made us look for specific PHP bugs that involved unserializing or memory allocation. We found an interesting one - in fact, someone had encountered almost the same issue before.

We decided to try the same approach by disabling garbage collection in the settings file:

ini_set('zend.enable_gc', 0);

We reran the scraper for multiple hours and no issues occurred. There are other documented cases where PHP’s garbage collection interferes with Drupal’s processing of huge amounts of data and objects. Because disabling the garbage collection globally could have negative effects on the resource consumption of the live site, we also looked for a way to disable garbage collection only partially. This is indeed possible by patching includes/cache.inc so _cache_get_object doesn’t run gc while fetching data.
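
The patch boils down to keeping the collector away from the deserialization of large cache objects. Here is an illustrative PHP sketch of the idea (the wrapper function is hypothetical, not the actual Drupal patch):

<?php
// Illustrative only: keep gc from running while unserialize() builds
// large cache objects, then restore the previous state.
function cache_fetch_without_gc(callable $fetch) {
  $was_enabled = gc_enabled();
  if ($was_enabled) {
    gc_disable();
  }
  try {
    return $fetch();
  }
  finally {
    if ($was_enabled) {
      gc_enable();
    }
  }
}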

This puzzling problem took several hours and two people to solve, but in the end we managed to diagnose it correctly and implement a solid solution. An interesting case that led us to explore unusual parts of Drupal - and possibly something to bear in mind next time your Drupal environment produces a similar type of error.

gdb /opt/php/7.2/sbin/php-fpm /tmp/coredump-php-fpm.28651 -ex "source /opt/php-src/php7.2-build/php-7.2.17/.gdbinit" -ex "zbacktrace" --batch | grep "\[0x"
[0x7f2c2b4220e0] unserialize("O:4:"view":49:{s:8:"db_table";s:10:"views_view";s:10:"base_table";s:18:"commerce_line_item";s:10:"base_field";s:3:"nid";s:4:"name";s:25:"commerce_reports_products";s:3:"vid";s:0:"";s:11:"description";s:0:"";s:3:"tag";s:16:"commerce_reports";s:10:"human_nam...") [internal function]
[0x7f2c2b421f40] DrupalDatabaseCache->prepareItem(object[0x7f2c2b421f90]) /home/project/www/includes/cache.inc:520
[0x7f2c2b421d80] DrupalDatabaseCache->getMultiple(reference) /home/project/www/includes/cache.inc:433
[0x7f2c2b421cf0] cache_get_multiple(reference, "cache_views") /home/project/www/includes/cache.inc:113
[0x7f2c2b421a70] _ctools_export_get_defaults_from_cache("views_view", array(24)[0x7f2c2b421ad0]) /home/project/www/sites/all/modules/contrib/ctools/includes/export.inc:746
[0x7f2c2b421390] _ctools_export_get_defaults("views_view", array(24)[0x7f2c2b4213f0]) /home/project/www/sites/all/modules/contrib/ctools/includes/export.inc:649
[0x7f2c2b4211a0] _ctools_export_get_some_defaults("views_view", array(24)[0x7f2c2b421200], array(1)[0x7f2c2b421210]) /home/project/www/sites/all/modules/contrib/ctools/includes/export.inc:783
[0x7f2c2b420210] ctools_export_load_object("views_view", "names", array(1)[0x7f2c2b420280]) /home/project/www/sites/all/modules/contrib/ctools/includes/export.inc:493
[0x7f2c2b420080] ctools_export_crud_load("views_view", "search_results") /home/project/www/sites/all/modules/contrib/ctools/includes/export.inc:81
[0x7f2c2b41ff50] views_get_view("search_results") /home/project/www/sites/all/modules/contrib/views/views.module:1683
[0x7f2c2b41fb50] views_block_view("-exp-search_results-page") /home/project/www/sites/all/modules/contrib/views/views.module:772
[0x7f2c2b41fa60] module_invoke("views", "block_view", array(1)[0x7f2c2b41fad0]) /home/project/www/includes/module.inc:934
[0x7f2c2b41f410] _block_render_blocks(array(0)[0x7f2c2b41f460]) /home/project/www/modules/block/block.module:911
[0x7f2c2b41f2d0] block_list("branding") /home/project/www/modules/block/block.module:690
[0x7f2c2b41f200] block_get_blocks_by_region("branding") /home/project/www/modules/block/block.module:319
[0x7f2c2b41eee0] block_page_build(reference) /home/project/www/modules/block/block.module:270
[0x7f2c2b41ecf0] drupal_render_page(reference) /home/project/www/includes/common.inc:5914
[0x7f2c2b41e570] drupal_deliver_html_page(array(1)[0x7f2c2b41e5c0]) /home/project/www/includes/common.inc:2761
[0x7f2c2b41e3d0] drupal_deliver_page(array(1)[0x7f2c2b41e420], "") /home/project/www/includes/common.inc:2634
[0x7f2c2b41e100] menu_execute_active_handler() /home/project/www/includes/menu.inc:542
[0x7f2c2b41e030] (main) /home/project/www/index.php:21

Categories: FLOSS Project Planets
