FLOSS Project Planets

Brian Perry: How Drupal's Preview Works

Planet Drupal - Thu, 2021-10-07 21:28

I've been thinking quite a bit recently about Drupal's options for decoupled preview with other JavaScript front ends. As part of some related experimentation, I found myself needing to understand more about how Drupal's standard preview functionality works. To be specific here - I'm talking about when you're editing a node and click on the preview button to see a full rendering of the page you're currently editing.

I realized I had never really had any reason to think about how that actually happens. Like many things on the web, it just kind of magically does.

Categories: FLOSS Project Planets

GCC, Clang[d], LSP client, Kate and variadic macro warnings, a short story

Planet KDE - Thu, 2021-10-07 20:00

Kate has had an LSP plugin for some time now, which uses Clangd. It's a great plugin that brings many code navigation/validation features, akin to those available in Qt Creator and KDevelop.

So naturally since I got it to work, I've been using it. At some point I found out about the Diagnostics tab in the LSP Client tool view in Kate, which displays useful information; however I also saw that it was plagued by a spam of the following warnings:

[clang] Must specify at least one argument for '...' parameter of variadic macro [qloggingcategory.h:121] Macro 'qCDebug' defined here

which is really annoying to say the least, as it adds needless noise.

I just ignored it and moved on; then, by accident, while searching for something in the Extra CMake Modules KDE repo, I found this:

if(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
    # -Wgnu-zero-variadic-macro-arguments (part of -pedantic) is triggered
    # by every qCDebug() call and therefore results in a lot of noise.
    # This warning is only notifying us that clang is emulating the GCC
    # behaviour instead of the exact standard wording so we can safely ignore it
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-gnu-zero-variadic-macro-arguments")
endif()

This explains why those warnings are shown. Adding -Wno-gnu-zero-variadic-macro-arguments to my build flags (I use GCC by default) indeed made those warnings stop. But then GCC started complaining about an unrecognised build flag, which is correct, given that the flag is Clang-specific.

I started searching for a way to pass that compilation flag to Clangd without involving GCC, and I found this, which led me to this.

So, the solution is to create a .clangd file in your repo's top level directory (I created it in the parent dir to my KDE Frameworks git checkouts, this way it affects all of them as Clangd searches for that file in all the parent directories of the current source file), and put this in it:

CompileFlags:
  Add: [-Wno-gnu-zero-variadic-macro-arguments]

The End.

Feel free to tell me about any corrections in my posts, you can send me an email, or better still, use a GitHub issue.

Categories: FLOSS Project Planets

Reproducible Builds (diffoscope): diffoscope 187 released

Planet Debian - Thu, 2021-10-07 20:00

The diffoscope maintainers are pleased to announce the release of diffoscope version 187. This version includes the following changes:

* Add support for comparing .pyc files. Thanks to Sergei Trofimovich. (Closes: reproducible-builds/diffoscope#278)

You can find out more by visiting the project homepage.

Categories: FLOSS Project Planets

Python Engineering at Microsoft: Python in Visual Studio Code – October 2021 Release

Planet Python - Thu, 2021-10-07 18:20

We are pleased to announce that the October 2021 release of the Python Extension for Visual Studio Code is now available. You can download the Python extension from the Marketplace, or install it directly from the extension gallery in Visual Studio Code. If you already have the Python extension installed, you can also get the latest update by restarting Visual Studio Code. You can learn more about Python support in Visual Studio Code in the documentation.

In this release we closed a total of 88 issues, including:

  • Debugging support for Jupyter Notebooks
  • A new Python walkthrough
  • Improvements to the debugging experience for Python files and projects

If you’re interested, you can check the full list of improvements in our changelog.

Debugging support for Jupyter Notebooks

We’re excited to announce that you can now debug your Python cells on Jupyter notebooks!

To try it out, make sure you have ipykernel v6+ installed in your selected kernel. Then set a breakpoint, select the Debug Cell command from the drop-down menu next to the play button and start inspecting your code with all the features the debugger has to offer!

New Python walkthrough

We’re excited to announce that this release includes a walkthrough with some basic set up steps to improve the getting started experience for Python in VS Code.

The walkthrough covers steps such as Python installation, selection of an interpreter for your project and how to run and debug Python files in VS Code. We hope this will be a quick and helpful guide for those learning or starting Python for the first time in VS Code!

Improvements in the debugging experience for Python projects

When working with workspaces with no launch.json configuration present, the Python extension would display a debugger configuration menu every time you would debug your Python file or project. This could be particularly annoying when debugging a web application with custom arguments (like Flask, Django or FastAPI, for example).

Now you no longer need to provide a configuration every time you start debugging, as the first selection you make is reused for the rest of the session.

Other Changes and Enhancements

We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python in Visual Studio Code. Some notable changes include:

  • python.testing.cwd setting is no longer ignored when discovering or running tests. (#8678)
  • Upgraded to Jedi 0.18 and enabled it behind the language server protocol. Remove Jedi-related settings python.jediMemoryLimit and python.jediPath, since they are not used with the new language server implementation. (#11995)
  • Removed support for rope, ctags and pylintMinimalCheckers setting. Refactoring, syntax errors and workspace symbols are now supported via language servers. (#10440, #13321, #16063)
  • Default value of python.linting.pylintEnabled has been changed to false. (#3007)

Special thanks to this month’s contributors:

  • Anupama Nadig: Fix casing of text in unittest patterns quickpick. (#17093)
  • Erik Demaine: Improve pattern matching for shell detection on Windows. (#17426)
  • ilexei: Changed the way of searching left bracket [ in case of subsets of tests. (#17461)

Be sure to download the Python extension for Visual Studio Code now to try out the above improvements. If you run into any problems or have suggestions, please file an issue on the Python VS Code GitHub page.

The post Python in Visual Studio Code – October 2021 Release appeared first on Python.

Categories: FLOSS Project Planets

Test and Code: 165: pytest xfail policy and workflow

Planet Python - Thu, 2021-10-07 18:15

A discussion of how to use the xfail feature of pytest to help with communication on software projects.

The episode covers:

  • What is xfail
  • Why I use it
  • Using reason effectively by including issue tracking numbers
  • Using xfail_strict
  • Adding --runxfail when transitioning from development to feature freeze
  • What to do about test failures
  • How all of this might help with team communication
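The items above can be sketched in a minimal test file. This is an illustrative example, not from the episode; the issue number ISSUE-1234 and the parse_flag function are made up:

```python
import pytest


def parse_flag(arg: str) -> bool:
    # Hypothetical stand-in for code with a known bug: it always returns False.
    return False


@pytest.mark.xfail(reason="ISSUE-1234: flag parsing broken", strict=True)
def test_flag_parsing():
    # Fails today, which pytest records as XFAIL rather than a failure.
    # With strict=True, an unexpected pass (XPASS) is reported as an error,
    # so a silent fix doesn't go unnoticed.
    assert parse_flag("--verbose") is True
```

Running pytest with --runxfail makes such tests run and report as ordinary tests, which is useful when moving from development into feature freeze, and the issue number in reason ties the expected failure back to the tracker.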

Sponsored By:

Support Test & Code

Categories: FLOSS Project Planets

Zero-with-Dot (Oleg Żero): Aggregations on time-series data with Pandas

Planet Python - Thu, 2021-10-07 18:00
Introduction

Working with time-series data is often a challenge on its own. It is a special kind of data, where data points depend on each other across time. When analyzing it, your productivity at gaining insights to a large extent depends on your ability to juggle with the time dimension.

Very often, time-series data are collected over long periods, especially when they come from hardware devices or represent sequences of, for example, financial transactions. Furthermore, even when no field in the dataset is a “null”, the data may still be problematic if the timestamps are not regularly spaced, shifted, missing, or in any way inconsistent.

One of the key skills that help to learn useful information from time-dependent data is to efficiently perform aggregations. Not only does it allow you to greatly reduce the total volume of the data, but it also helps to spot interesting facts faster.

In this article, I would like to present a few ways in which Pandas, the most popular Python library for data analysis, can help you perform these aggregations, and what is so special about working with time. In addition, I will also provide the equivalent syntax in SQL for reference.

The data

For demonstration, I use the credit card transaction dataset from Kaggle. However, for simplicity, I focus on the "Amount" column, and filter it by a single user, although the aggregations can always be extended to include more criteria. Information about time is spread across "Year", "Month", "Day", and "Time" columns, so it makes sense to represent it using a single column instead.

Since the whole dataset weighs around 2.35 GB, let’s transform the data on the fly using smaller batches.

import pandas as pd
import numpy as np
from tqdm import tqdm
from pathlib import Path

SRC = Path("data/credit_card_transactions-ibm_v2.csv")
DST = Path("data/transactions.csv")
USER = 0


def load(filepath=SRC):
    data = pd.read_csv(
        filepath,
        iterator=True,
        chunksize=10000,
        usecols=["User", "Year", "Month", "Day", "Time", "Amount"],
    )
    for df in tqdm(data):
        yield df


def process(df):
    _df = df.query("User == @USER")
    ts = _df.apply(
        lambda x: f"{x['Year']}{x['Month']:02d}{x['Day']:02d} {x['Time']}",
        axis=1,
    )
    _df["timestamp"] = pd.to_datetime(ts)
    _df["amount"] = _df["Amount"].str.strip("$").astype(float)
    return _df.get(["timestamp", "amount"])


def main():
    for i, df in enumerate(load()):
        df = process(df)
        df.to_csv(
            DST,
            mode="a" if i else "w",
            header=not bool(i),
            index=False,
        )


if __name__ == "__main__":
    main()

timestamp            amount
2002-09-01 06:21:00  134.09
2002-09-01 06:42:00   38.48
2002-09-02 06:22:00  120.34
2002-09-02 17:45:00  128.95
2002-09-03 06:23:00  104.71

The “head” of this frame gives us the above table. For a single user (here USER = 0), we have almost 20k timestamps that mark transactions between 2002 and 2020 with a one-minute resolution.

Thanks to pd.to_datetime, we convert the data concatenated from four columns and store it as an np.datetime64 variable that describes time in a unified data type.

What is np.datetime64?

The np.datetime64 (doc) type is a numpy’ed version of pythonic datetime.datetime object. It is vectorized, therefore making it possible to perform operations over entire arrays quickly. At the same time, the object recognizes typical datetime methods (doc) that facilitate naturally manipulating the values.
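As a small illustration of that vectorization (a sketch with made-up timestamps, not from the article's dataset):

```python
import numpy as np

# A vectorized array of timestamps with one-minute resolution
ts = np.array(
    ["2002-09-01T06:21", "2002-09-01T06:42", "2002-09-02T06:22"],
    dtype="datetime64[m]",
)

# Arithmetic works element-wise over the whole array at once
shifted = ts + np.timedelta64(1, "D")  # shift every timestamp by one day
deltas = np.diff(ts)                   # timedelta64 gaps between samples

print(shifted[0])  # 2002-09-02T06:21
print(deltas)      # differences of 21 and 1420 minutes
```

The same element-wise operations would loop slowly over a list of datetime.datetime objects; on np.datetime64 arrays they run at NumPy speed.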

On the pandas side, relevant objects are Timestamp, Timedelta, and Period (with corresponding DatetimeIndex, TimedeltaIndex, and PeriodIndex), which describe moments in time, time shifts, and time spans, respectively. Underneath, however, there are still np.datetime64s (and similar np.timedelta64s) with their handy properties.

Converting time-related values to these objects is the best starting point for any time-series analysis. It is convenient, and it is fast.

Basic resampling

The simplest form of a time-series aggregation is to feed values into evenly spaced bins using an aggregating function. It helps to adjust the resolution and the volume of data.

The following snippet shows an example of resampling to days using two functions: sum and count:

SELECT sum(amount),
       count(amount),
       DATE(timestamp) AS dt
FROM transactions
GROUP BY dt;

Pandas provides at least two ways to achieve the same result:

# option 1
df["amount"].resample("D").agg(["sum", "count"])

# option 2
df["amount"].groupby(pd.Grouper(level=0, freq="D")).agg(["sum", "count"])

Both options are equivalent. The first one is simpler and relies on the fact that the timestamp column has been set to be the dataframe’s index, although it is also possible to use the optional on argument to point to a particular column. The second uses a more generic aggregation object, pd.Grouper, in combination with the .groupby method. It is highly customizable with many optional arguments. Here, I am using level as opposed to key, because timestamp is an index. Also, freq="D" stands for days. There are other frequency codes too, although an analogous SQL statement may be more complicated.

Aggregations over several time spans

Say you want to aggregate data over multiple parts of the time stamp such as (year, week) or (month, day-of-week, hour). Due to timestamp being of np.datetime64 type, it is possible to refer to its methods using the so-called .dt accessor and use them for aggregation instructions.

In SQL, you would do:

SELECT AVG(amount),
       STRFTIME('%Y %W', timestamp) AS yearweek
FROM transactions
GROUP BY yearweek;

Here are two ways to do it in Pandas:

df = df.reset_index()  # if we want `timestamp` to be a column
df["amount"].groupby(by=[
    df["timestamp"].dt.year,
    df["timestamp"].dt.isocalendar().week,
]).mean()

df = df.set_index("timestamp")  # if we want `timestamp` to be index
df["amount"].groupby(by=[
    df.index.year,
    df.index.isocalendar().week,
]).mean()

They do the same thing.

            amount
(2002, 1)   40.7375
(2002, 35)  86.285
(2002, 36)  82.3733
(2002, 37)  72.2048
(2002, 38)  91.8647

It is also worth mentioning that the .groupby method does not enforce using an aggregating function. All it does is slice the frame into a series of frames. You may just as well want to use the individual “sub-frames” and perform some transformations directly on them. If that is the case, just iterate:

for key, group in df.groupby(by=[df.index.year, df.index.isocalendar().week]):
    pass

Here the key will be a tuple of (year, week) and the group will be a sub-frame.

Remark

It is important to mention that the boundaries of the time windows may be defined differently in different flavors of SQL and Pandas. When using SQLite for comparison, each gave a slightly different result.

SQL:

SELECT STRFTIME('%Y %W %w', timestamp), timestamp
FROM TRANSACTIONS
LIMIT 5;

timestamp            year  week  day
2002-09-01 06:21:00  2002    34    0
2002-09-01 06:42:00  2002    34    0
2002-09-02 06:22:00  2002    35    1
2002-09-02 17:45:00  2002    35    1
2002-09-03 06:23:00  2002    35    2

Pandas:

df.index.isocalendar().head()

timestamp            year  week  day
2002-09-01 06:21:00  2002    35    7
2002-09-01 06:42:00  2002    35    7
2002-09-02 06:22:00  2002    36    1
2002-09-02 17:45:00  2002    36    1
2002-09-03 06:23:00  2002    36    2

The concept is the same, but the reference is different.

Window functions

The last type of aggregation commonly used for time data relies on a rolling window. As opposed to grouping rows by the values of specific columns, this method defines an interval of rows to pick a sub-table, shifts the window, and does it again.

Let’s see an example of calculating a moving average of five consecutive rows (the current plus four into the past). In SQL, the syntax is the following:

SELECT timestamp,
       AVG(amount) OVER (
           ORDER BY timestamp
           ROWS BETWEEN 4 PRECEDING AND CURRENT ROW
       ) rolling_avg
FROM transactions;

Pandas offers a much simpler syntax:

# applying mean immediately
df["amount"].rolling(5).mean()

# accessing the chunks directly
for chunk in df["amount"].rolling(5):
    pass

Again, in Pandas, different adjustments can be made using optional arguments. The size of the window is dictated by the window attribute, which in SQL is realized by the ROWS BETWEEN clause. In addition, we may want to center the window, use a different window type, e.g. weighted averaging, or perform optional data cleaning. However, the usage of the pd.Rolling object returned by the .rolling method is, in a sense, similar to the pd.DataFrameGroupBy objects.
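For instance, centering the window is a single keyword argument. Here is a small sketch with toy data, not from the article's dataset:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])

trailing = s.rolling(3).mean()               # current row plus two before it
centered = s.rolling(3, center=True).mean()  # one row before, one row after

print(trailing.tolist())  # [nan, nan, 2.0, 3.0, 4.0]
print(centered.tolist())  # [nan, 2.0, 3.0, 4.0, nan]
```

Note how the NaN padding moves: a trailing window is undefined for the first rows, while a centered one is undefined at both ends.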

Conclusions

Here, I presented three types of aggregations I frequently use when working with time-series data. Although not all data that contains information about time is a time series, for time series it is almost always beneficial to convert the time information into pd.Timestamp or other similar objects that use NumPy's np.datetime64 objects underneath. As shown, it makes aggregating across different time properties very convenient, intuitive, and fun.

Categories: FLOSS Project Planets

KDE Gear 21.12 releases schedule finalized

Planet KDE - Thu, 2021-10-07 17:30

It is available at the usual place: https://community.kde.org/Schedules/KDE_Gear_21.12_Schedule

 
Dependency freeze is in four weeks (November 4) and Feature Freeze is a week after that, so make sure you start finishing your stuff!

Categories: FLOSS Project Planets

Ben Cook: PyTorch DataLoader Quick Start

Planet Python - Thu, 2021-10-07 17:19

PyTorch comes with powerful data loading capabilities out of the box. But with great power comes great responsibility and that makes data loading in PyTorch a fairly advanced topic.

One of the best ways to learn advanced topics is to start with the happy path. Then add complexity when you find out you need it. Let’s run through a quick start example.

What is a PyTorch DataLoader?

The PyTorch DataLoader class gives you an iterable over a Dataset. It’s useful because it can parallelize data loading and automatically shuffle and batch individual samples, all out of the box. This sets you up for a very simple training loop.

PyTorch Dataset

But to create a DataLoader, you have to start with a Dataset, the class responsible for actually reading samples into memory. When you’re implementing a DataLoader, the Dataset is where almost all of the interesting logic will go.

There are two styles of Dataset class, map-style and iterable-style. Map-style Datasets are more common and more straightforward so we’ll focus on them but you can read more about iterable-style datasets in the docs.

To create a map-style Dataset class, you need to implement two methods: __getitem__() and __len__(). The __len__() method returns the total number of samples in the dataset and the __getitem__() method takes an index and returns the sample at that index.

PyTorch Dataset objects are very flexible — they can return any kind of tensor(s) you want. But supervised training datasets should usually return an input tensor and a label. For illustration purposes, let’s create a dataset where the input tensor is a 3×3 matrix with the index along the diagonal. The label will be the index.

It should look like this:

dataset[3]

# Expected result
# {'x': array([[3., 0., 0.],
#        [0., 3., 0.],
#        [0., 0., 3.]]),
#  'y': 3}

Remember, all we have to implement are __getitem__() and __len__():

from typing import Dict, Union

import numpy as np
import torch


class ToyDataset(torch.utils.data.Dataset):
    def __init__(self, size: int):
        self.size = size

    def __len__(self) -> int:
        return self.size

    def __getitem__(self, index: int) -> Dict[str, Union[int, np.ndarray]]:
        return dict(
            x=np.eye(3) * index,
            y=index,
        )

Very simple. We can instantiate the class and start accessing individual samples:

dataset = ToyDataset(10)
dataset[3]

# Expected result
# {'x': array([[3., 0., 0.],
#        [0., 3., 0.],
#        [0., 0., 3.]]),
#  'y': 3}

If you happen to be working with image data, __getitem__() may be a good place to put your TorchVision transforms.

At this point, a sample is a dict with "x" as a matrix with shape (3, 3) and "y" as a Python integer. But what we want are batches of data. "x" should be a PyTorch tensor with shape (batch_size, 3, 3) and "y" should be a tensor with shape batch_size. This is where DataLoader comes back in.

PyTorch DataLoader

To iterate through batches of samples, pass your Dataset object to a DataLoader:

torch.manual_seed(1234)

loader = torch.utils.data.DataLoader(
    dataset,
    batch_size=3,
    shuffle=True,
    num_workers=2,
)

for batch in loader:
    print(batch["x"].shape, batch["y"])

# Expected result
# torch.Size([3, 3, 3]) tensor([2, 1, 3])
# torch.Size([3, 3, 3]) tensor([6, 7, 9])
# torch.Size([3, 3, 3]) tensor([5, 4, 8])
# torch.Size([1, 3, 3]) tensor([0])

Notice a few things that are happening here:

  • Both the NumPy arrays and Python integers are getting converted to PyTorch tensors.
  • Although we’re fetching individual samples in ToyDataset, the DataLoader is automatically batching them for us, with the batch size we request. This works even though the individual samples are in dict structures. This also works if you return tuples.
  • The samples are randomly shuffled. We maintain reproducibility by setting torch.manual_seed(1234).
  • The samples are read in parallel across processes. In fact, this code will fail if you run it in a Jupyter notebook. To get it to work, you need to put it underneath an if __name__ == "__main__": check in a Python script.

There’s one other thing that I’m not doing in this sample but you should be aware of. If you need to use your tensors on a GPU (and you probably do for non-trivial PyTorch problems), then you should set pin_memory=True in the DataLoader. This will speed things up by letting the DataLoader allocate space in page-locked memory. You can read more about it here.
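A sketch of what that can look like. The SquaresDataset here is a made-up stand-in, and pinning is guarded so the snippet also runs on CPU-only machines:

```python
import torch


class SquaresDataset(torch.utils.data.Dataset):
    """A tiny stand-in dataset (hypothetical, not the ToyDataset above)."""

    def __len__(self) -> int:
        return 8

    def __getitem__(self, index: int):
        return {"x": torch.full((3, 3), float(index)), "y": index}


# Only pin memory when a GPU is actually present; on CPU-only machines
# pin_memory=True just wastes effort (and may emit a warning).
use_pinned = torch.cuda.is_available()

loader = torch.utils.data.DataLoader(
    SquaresDataset(),
    batch_size=4,
    pin_memory=use_pinned,
)

for batch in loader:
    if use_pinned:
        # non_blocking=True lets the host-to-device copy overlap with
        # compute, which is the payoff of page-locked (pinned) memory
        x = batch["x"].to("cuda", non_blocking=True)
    else:
        x = batch["x"]
    print(x.shape)  # torch.Size([4, 3, 3])
```

The non_blocking=True transfer only helps when the source tensor lives in pinned memory, which is exactly what pin_memory=True arranges.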

Summary

To review: the interesting part of custom PyTorch data loaders is the Dataset class you implement. From there, you get lots of nice features to simplify your data loop. If you need something more advanced, like custom batching logic, check out the API docs. Happy training!

The post PyTorch DataLoader Quick Start appeared first on Sparrow Computing.

Categories: FLOSS Project Planets

Droptica: How to Schedule a Publication in Drupal? Scheduler Module

Planet Drupal - Thu, 2021-10-07 12:49

When creating content for a website, it is sometimes necessary to plan its publication later down the line. However, taking care of it manually can be both time-consuming and inconvenient. This is when Scheduler comes in handy – a Drupal module that will help you automate this process. Using it will allow you, among other things, to schedule the publication of content for a specific date and time.

Scheduler module - dates

The module was released on 23 July 2006, and its latest update was pushed on 19 July 2021. Scheduler has versions for Drupal 7 and 8. What is more, the latest update is compatible with Drupal 9 as well.

Module popularity

The module is currently used on more than 85 thousand websites. About 44 thousand of them are running Drupal 7, and more than 37 thousand are on Drupal 8.

Source: Drupal.org

Module developers

Scheduler was originally published by Eric Schaefer. However, the list of people working on its development to date is very long and impossible to establish – we don’t know all the users who contributed to its development.

Drupal Scheduler module – what does it do?

As I pointed out in the introduction, the module is used to plan content publication in advance. It also offers a way to plan unpublishing. If needed – for example, for events whose news becomes obsolete once they end – you can task the module with publishing your content and schedule its removal from your website at a specific day and time.

Scheduler provides three new permissions, allowing only the selected roles to have access to scheduled publishing. The list of possibilities also includes the so-called Lightweight cron, the configuration of which optimizes resource consumption. Lightweight cron is the developers' solution to make the cron responsible for publishing and removing content available to be run separately, without the need to initiate all other tasks, as is usually the case in Drupal.

Unboxing

Installation is standard. I recommend using Composer.

composer require drupal/scheduler

Permissions

Go to

/admin/people/permissions#module-scheduler

– there, you will find a list of permissions that the module provides:

 

Administer scheduler

This setting enables you to configure the Scheduler module, available at

/admin/config/content/scheduler

(see the next section for the description of all the features).

Scheduler content publication

Granting this permission allows a role to set scheduled publication, as well as to plan unpublishing.

View scheduled content list

Scheduler provides a view, which is available at

/admin/content/scheduled

Granting this permission allows you to access this view.

Settings

Go to

/admin/config/content/scheduler

to find all the global settings for the module. What is more, Scheduler can be configured per content type. Below, you can find a breakdown of the global options.

 

Allow users to enter only a date and provide a default time

Allows users who have permission to configure scheduled content publishing to specify only the publication date. When this option is selected, the time will be predefined and configurable in the Default time field.

 

Hide the second

Checking this option disables the ability to set seconds when scheduling content publishing.

Lightweight cron

As I pointed out earlier, by default, Drupal runs all cron jobs every once in a while. Checking which content needs to be published and unpublished relies on a cron job, which should be run every 1-5 minutes. Configuring Drupal to run all cron jobs every minute is hardly a very good idea, considering its performance, which is why the developers enabled the users to run a single cron job at a suitable interval. To do this, you need to add a new cron job run at a given time. Here is an example of a cron job that is run every minute: 

* * * * * wget -q -O /dev/null "https://tesd9.lndo.site/scheduler/cron/{access-key}"

Go to

/admin/config/content/scheduler/cron

to find the lightweight cron settings. There, you can enable logging of cron job activation and completion, change the access key, and run the cron manually.

Content type

I’ll illustrate this option with the default content type - Article - available in Drupal default profile. Go to

/admin/structure/types/manage/{content-type-machine-name}

There, you will notice a new Scheduler tab. This is where you’ll find all the module's configuration options, which you can set for each entity.

 

Enable scheduled publishing/unpublishing for this content type

Enables or disables the ability to set scheduled publication and/or unpublishing.

Change content creation time to match the scheduled publish time

Changes the date in the creation time field to the date selected as the planned publication date.

Require scheduled publishing/unpublishing

Checking this option makes setting scheduled publication and/or unpublishing required.

Create a new revision on publishing/unpublishing

Creates a new revision during scheduled publication and/or unpublishing.

Action to be taken for publication dates in the past

This setting enables you to specify what will happen when the editor selects a publication date earlier than the current date. You can choose one of three options here:

  • Display an error message about choosing a date earlier than the current one – in this case, the content won’t be published.
  • Publish content immediately after saving.
  • Schedule your content to be published on the next cron job run.

Display scheduling options as

Changes the way Scheduler module options are displayed when creating and editing content. There are two options to choose from – Vertical tab and Separate fieldset.

Vertical tab

 

Separate fieldset

 

Expand fieldset or vertical tab

Allows you to specify whether the field provided by the Scheduler should be expanded when creating and editing content.

Show message

Checking this option displays information about planned publication and unpublishing after saving the content.

Module usage

Let's assume that our article needs to go live on 1 September 2021 at 9:30 a.m. and won't have to be unpublished.

When writing the article, choose Publish on and set it to 01.09.2021 at 9:30 a.m., and then leave Unpublish on empty. In this case, the Require scheduled unpublishing option must be disabled for the Article entity.

Now imagine that our article needs to go live on 1 September 2021 at 9:30 a.m. and has to be unpublished a week later at the same time.

Let's start with doing the same thing as we did in the previous example, but this time also set Unpublish on to 08.09.2021 at 9:30 a.m.

You may be also interested in: How to Make Content Editing Easier in Drupal - Review of the Simplify Module

Integrations

Scheduler offers integrations with several Drupal modules.

  • If you’re using the Content Moderation module, you must enable the Content Moderation Integration sub-module.
  • Scheduler provides additional conditions, actions, and events for the Rules module.
  • It is also integrated with the automatic generation of test content provided by the Devel Generate module. Scheduler can automatically add the planned publication and unpublishing dates.
  • It also creates new tokens for the Token module, containing the planned publication and unpublishing dates.
The future of the module

The developers responsible for the Scheduler have announced that they are working on releasing version 2.0 of the module, supporting entities other than nodes, for example, Media, Commerce Products, and more. They also announced that events triggered by the Scheduler module and its integration with the Rules module will from now on be triggered after an entity is saved, rather than before, as was the case until now. The development progress can be followed on the module page.

Drupal Scheduler module – summary

Scheduler is a tool that greatly facilitates scheduling the publication of content on your website. Using it allows you to automate the process and to complete all the steps required to publish content ahead of time – thus making sure that you won't have to worry about it when the time comes. At Droptica, we also use Scheduler to schedule publications in advance. This module is extremely popular among Drupal users, and as such, it is constantly developed – with version 2.0 in the works right now. Our team of Drupal developers recommends the Scheduler module for scheduling publications in advance or publishing content at a specific time.

Categories: FLOSS Project Planets

remotecontrol @ Savannah: Google Rolls Out Emission-Curbing Tools for Nest Thermostat

GNU Planet! - Thu, 2021-10-07 11:33

https://www.wsj.com/articles/google-rolls-out-emission-curbing-tools-for-nest-thermostat-11633503660

This offering from Google is false advertising. There is no means for an electricity customer to select the source of the electricity provided to their premises.

Categories: FLOSS Project Planets

DDNS with Hetzner DNS

Planet KDE - Thu, 2021-10-07 09:23

Some of the services I host are behind dynamic DNS. There are lots of services that automatically update DNS records when the IP changes, but most of them are either not free or require regular confirmation of the domain.

I wanted to have a solution that is as standard as possible, so ideally without any CNAME aliases pointing to a subdomain of a DDNS provider.

Luckily, Hetzner offers free DNS hosting with a nice API. So what I ended up doing was regularly sending requests to the Hetzner DNS API to update the IP address. To reduce the number of requests going to Hetzner, a request is only sent when the IP has really changed. The IP can be fetched from an external service like Ipify, or from a very simple self-hosted service if you have another server that is reachable on the internet.
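The change-detection step can be sketched in a few lines of Python (the function names here are made up for illustration; my actual daemon is written in Rust):

```python
import urllib.request

IPIFY_URL = "https://api.ipify.org"  # returns the caller's public IP as plain text


def fetch_public_ip():
    # Ask an external service which address our requests come from.
    with urllib.request.urlopen(IPIFY_URL) as response:
        return response.read().decode().strip()


def update_if_changed(current_ip, last_ip, update_record):
    # Only call the DNS API when the address actually changed,
    # keeping the number of requests sent to Hetzner low.
    if current_ip != last_ip:
        update_record(current_ip)
    return current_ip
```

The daemon then just runs `update_if_changed(fetch_public_ip(), last_ip, ...)` every `update_interval` seconds.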

The result is a simple daemon that automates all of that. My implementation is written in Rust, but it's very small and would be easy to write in other languages too. If you want to use it, you can find it on Codeberg. The config format is a simple TOML file that goes into /etc/hetzner-ddns/config.toml. The skeleton config file looks like this:

update_interval = 30

[auth]
token = ""

[record]
id = ""
name = ""
zone_id = ""

Of course the empty strings need to be replaced with real data which you can get from the Hetzner API or the web interface. The API documentation contains examples on how to query the API using curl.
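For illustration, such an update request can be built with nothing but the standard library. The endpoint and field names below follow my reading of the Hetzner DNS API documentation, so verify them there before relying on this sketch:

```python
import json
import urllib.request

API_BASE = "https://dns.hetzner.com/api/v1"


def build_update_request(token, record_id, zone_id, name, ip, ttl=60):
    # Builds (but does not send) the PUT request that updates an A record.
    # Field names are taken from the Hetzner DNS API docs; double-check them there.
    payload = {"value": ip, "ttl": ttl, "type": "A", "name": name, "zone_id": zone_id}
    req = urllib.request.Request(
        f"{API_BASE}/records/{record_id}",
        data=json.dumps(payload).encode(),
        method="PUT",
    )
    req.add_header("Auth-API-Token", token)
    req.add_header("Content-Type", "application/json")
    return req
```

Sending it is then a single `urllib.request.urlopen(req)` call.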

If you use systemd, you can use the following unit to run the service on a separate ddns user, which you need to create before.

[Unit]
Description=Hetzner DNS update
After=network-online.target

[Service]
Type=simple
Restart=on-failure
ExecStart=/opt/bin/hetzner-ddns
ExecReload=/bin/kill -USR1 $MAINPID
User=ddns

[Install]
WantedBy=multi-user.target

Btw, because I know some people were not too happy with this blog being hosted on GitHub, it is now also hosted on my own server (behind DDNS). If the uptime ends up being good enough, I’ll keep it this way.

Categories: FLOSS Project Planets

Python for Beginners: Generators in Python

Planet Python - Thu, 2021-10-07 08:40

Do you know about functions in Python? If you answered yes, let me take you through the interesting concept of generator functions and generators in Python. In this article, we will look at how generators are defined and used in a program. We will also look at how a generator differs from a function, using some examples.

What is a function?

In python, a function is a block of code that does some specific work. For example, a function can add two numbers, a function can delete a file from your computer, a function can do any specific task you want it to do. 

A function also returns the required output value using the return statement. An example of a function is given below. It takes two numbers as input, multiplies them and returns the output value using the return statement.

def multiplication(num1, num2):
    product = num1 * num2
    return product

result = multiplication(10, 15)
print("Product of 10 and 15 is:", result)

Output:

Product of 10 and 15 is: 150

What is a generator function?

A generator function is similar to a function in Python, but it gives the caller an iterator-like generator as output instead of an object or a value. Also, we use yield statements instead of return statements in a generator function. The yield statement pauses the execution of the generator function whenever it is executed and returns the output value to the caller. A generator function can have one or more yield statements; it can also contain a return statement, but that return only ends the iteration instead of handing a value back to the caller.

We can define a generator function in a similar way to functions in Python, but we do not use a return statement to produce values. Instead, we use the yield statement. Following is an example of a generator function that returns the numbers from 1 to 10 to the caller.

def num_generator():
    for i in range(1, 11):
        yield i

gen = num_generator()
print("Values obtained from generator function are:")
for element in gen:
    print(element)

Output:

Values obtained from generator function are:
1
2
3
4
5
6
7
8
9
10

What are generators in Python?

Generators in Python are a type of iterator returned by generator functions. To execute a generator function, we assign its result to a generator variable. Then we use the next() function to drive the generator function's execution.

The next() function takes the generator as input and executes the generator function till the next yield statement. After that, execution of the generator function is paused. To resume the execution, we again call the next() function with the generator as an input. Again, the generator function executes till the next yield statement. This process can be continued till the generator function’s execution gets finished. This process can be understood from the following example.

def num_generator():
    yield 1
    yield 2
    yield 3
    yield 4

gen = num_generator()
for i in range(4):
    print("Accessing element from generator.")
    element = next(gen)
    print(element)

Output:

Accessing element from generator.
1
Accessing element from generator.
2
Accessing element from generator.
3
Accessing element from generator.
4

Process finished with exit code 0

In the above output, you can observe that each time the next() function is called, the element from the next yield statement is printed. It shows that each time when the next() function is called, the generator function resumes its execution.

If we try to call the next() function with the generator as input after the generator function has finished its execution, the next() function raises a StopIteration exception. It is therefore advisable to call the next() function inside a Python try-except block. Alternatively, we can iterate through a generator using a for loop, which produces the same result as driving the generator with the next() function.
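As a small sketch of that advice, here is a loop that drains a generator by hand with next() and stops cleanly when StopIteration is raised:

```python
def countdown():
    yield 3
    yield 2
    yield 1

gen = countdown()
collected = []
while True:
    try:
        # next() resumes the generator until its next yield statement
        collected.append(next(gen))
    except StopIteration:
        # raised once the generator function has finished executing
        break

print(collected)  # prints [3, 2, 1]
```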

def num_generator():
    yield 1
    yield 2
    yield 3
    yield 4

gen = num_generator()
for i in gen:
    print("Accessing element from generator.")
    print(i)

Output:

Accessing element from generator.
1
Accessing element from generator.
2
Accessing element from generator.
3
Accessing element from generator.
4

Process finished with exit code 0

Examples of generators in Python

As we have discussed generators and generator functions in Python, Let us implement a program to understand the above concepts in a better way. In the following program, we implement a generator function that takes a list as input and calculates the square of elements in the list.

myList = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

def square_generator(input_list):
    for element in input_list:
        print("Returning the square of next element:", element)
        yield element * element

print("The input list is:", myList)
gen = square_generator(myList)
for i in range(10):
    print("Accessing square of next element from generator.")
    square = next(gen)
    print(square)

Output:

The input list is: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Accessing square of next element from generator.
Returning the square of next element: 1
1
Accessing square of next element from generator.
Returning the square of next element: 2
4
Accessing square of next element from generator.
Returning the square of next element: 3
9
Accessing square of next element from generator.
Returning the square of next element: 4
16
Accessing square of next element from generator.
Returning the square of next element: 5
25
Accessing square of next element from generator.
Returning the square of next element: 6
36
Accessing square of next element from generator.
Returning the square of next element: 7
49
Accessing square of next element from generator.
Returning the square of next element: 8
64
Accessing square of next element from generator.
Returning the square of next element: 9
81
Accessing square of next element from generator.
Returning the square of next element: 10
100

In the above example, you can see that whenever the next() function is executed with the generator as input, it runs the loop once, up to the yield statement. Once the yield statement is executed, the generator function's execution is paused until we call the next() function again.

Main difference between a function and a generator function

The main differences between a function and a generator function are as follows.

  • A function has a return statement while a generator function has a yield statement.
  • A function stops its execution after execution of the first return statement. Whereas, a generator function just pauses the execution after execution of the yield statement. 
  • A function returns a value or a container object while a generator function returns a generator object.
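The last difference is easy to verify interactively: calling a normal function runs its body and returns a value, while calling a generator function returns a generator object without running anything yet:

```python
import types

def normal():
    return 42

def gen_func():
    yield 42

assert normal() == 42                        # the function body ran and returned a value
gen = gen_func()
assert isinstance(gen, types.GeneratorType)  # nothing has run yet; we got a generator
assert next(gen) == 42                       # execution happens only when driven
```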
Conclusion

In this article, we have discussed generator functions and generators in Python. To learn more about Python programming, you can read this article on list comprehension. You may also like this article on the linked list in Python.

The post Generators in Python appeared first on PythonForBeginners.com.

Categories: FLOSS Project Planets

Promet Source: Best Drupal Modules for Government Websites

Planet Drupal - Thu, 2021-10-07 06:29
Those of us who have a strong conviction that Drupal is the optimal CMS for government websites are in good company.
Categories: FLOSS Project Planets

nano @ Savannah: GNU nano 5.9 was released

GNU Planet! - Thu, 2021-10-07 06:03

Version 5.5 brought the option --minibar, for a minimized user interface, and version 5.6 brought the spotlighting of a search match, in black on yellow by default.  Subsequent versions added a few minor things and fixed some bugs.

Categories: FLOSS Project Planets

Kalendar: A New KDE ... Calendar App! And More

Planet KDE - Thu, 2021-10-07 05:41
Many thanks to Clau for working on this app and providing videos!

Stay in the loop: https://t.me/veggeroblog
If you want to help me make these videos:
Patreon: https://www.patreon.com/niccolove
Youtube: https://www.youtube.com/channel/UCONH73CdRXUjlh3-DdLGCPw/join
Paypal: https://paypal.me/niccolove
My website is https://niccolo.venerandi.com and if you want to contact me, my telegram handle is [at] veggero.
Categories: FLOSS Project Planets

Kentaro Hayashi: Sharing mentoring a new Debian contributor experience, lots of fun

Planet Debian - Thu, 2021-10-07 04:19

I recently mentored a new Debian contributor. This was carried out within the OSS Gate on-boarding framework.

oss-gate.github.io

In "OSS Gate on-boarding", a new contributor who wants to keep contributing is recruited, and a corporation sponsors one of its employees as a mentor. The employee can thus do the mentoring as part of their job.

From August to October, I worked with a new Debian contributor for two hours each week. The experience was a lot of fun, and I learned new things as well.

The most important point is that the new Debian contributor aims to continue their work even now that the mentoring period has finished.

Some of the work has been finished, but not all of it; I tried to transfer the knowledge needed to complete the rest.

I look forward to him moving things forward with other people's help.

Here is the report about my activity as a mentor.

First OSS Gate onboarding (The article is written by Japanese)

The original blog entry is written in Japanese. I can't afford to translate it, so here is a Google Translate link as a hint.

I hope someone can do a similar attempt too!

For the record, I worked with a new Debian contributor about:

Categories: FLOSS Project Planets

Python Insider: Python 3.11.0a1 is available

Planet Python - Thu, 2021-10-07 04:04

Now that we are on a release spree, here you have the first alpha release of Python 3.11: Python 3.11.0a1. Let the testing and validation games begin!

https://www.python.org/downloads/release/python-3110a1/
Major new features of the 3.11 series, compared to 3.10

Among the new major new features and changes so far:

  • PEP 657 – Include Fine-Grained Error Locations in Tracebacks
  • PEP 654 – Exception Groups and except*
  • (Hey, fellow core developer, if a feature you find important is missing from this list, let Pablo know.)

The next pre-release of Python 3.11 will be 3.11.0a2, currently scheduled for 2021-11-02.

More resources

And now for something completely different

Schwarzschild black holes are also unique because they have a space-like singularity at their core, which means that the singularity doesn't happen at a specific point in *space* but happens at a specific point in *time* (the future). This means once you are inside the event horizon you cannot point with your finger towards the direction the singularity is located because the singularity happens in your future: no matter where you move, you will "fall" into it.

For a Schwarzschild black hole (a black hole with no rotation or electromagnetic charge), given a free-falling particle starting at the event horizon, the maximum proper time it will experience before reaching the singularity (which happens when it falls without angular velocity) is `π*M` (in natural units), where M is the mass of the black hole. For Sagittarius A* (the black hole at the centre of the Milky Way) this time is approximately 1 minute.
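Converting `π*M` from natural units to seconds gives `π·G·M/c³`; a quick back-of-the-envelope check (using an approximate mass for Sagittarius A* of about 4.15 million solar masses) lands close to the quoted minute:

```python
import math

G = 6.674e-11                # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8                  # speed of light, m/s
M_SUN = 1.989e30             # one solar mass, kg
SGR_A_STAR = 4.15e6 * M_SUN  # approximate mass of Sagittarius A*, kg

# pi * M in geometric units corresponds to pi * G * M / c^3 in seconds
max_proper_time = math.pi * G * SGR_A_STAR / c**3
print(round(max_proper_time))  # roughly 64 seconds, i.e. about a minute
```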

We hope you enjoy the new releases! Thanks to all of the many volunteers who help make Python development and these releases possible! Please consider supporting our efforts by volunteering yourself or through organization contributions to the Python Software Foundation.

https://www.python.org/psf/

Your friendly release team,
Ned Deily @nad 
Steve Dower @steve.dower 
Pablo Galindo Salgado @pablogsal

Categories: FLOSS Project Planets

Python Bytes: #253 A new Python for you, and for everyone!

Planet Python - Thu, 2021-10-07 04:00
<p><strong>Watch the live stream:</strong></p> <a href='https://www.youtube.com/watch?v=mMd1TzdpfZ8' style='font-weight: bold;'>Watch on YouTube</a><br> <br> <p><strong>About the show</strong></p> <p>Special guest: Yael Mintz</p> <p>Sponsored by <strong>us:</strong></p> <ul> <li>Check out the <a href="https://training.talkpython.fm/courses/all"><strong>courses over at Talk Python</strong></a></li> <li>And <a href="https://pythontest.com/pytest-book/"><strong>Brian’s book too</strong></a>!</li> </ul> <p><strong>Michael #1:</strong> <a href="https://github.com/rajasegar/awesome-htmx"><strong>awesome-htmx</strong></a></p> <ul> <li>An awesome list of resources about <strong>htmx</strong> such as articles, posts, videos, talks and more.</li> <li>Good for all sorts of examples and multiple languages</li> <li>We get a few nice shoutouts, thanks</li> </ul> <p><strong>Brian #2:</strong> <a href="https://www.python.org/downloads/release/python-3100/"><strong>Python 3.10 is here !!!!</strong></a> </p> <ul> <li>As of Monday. Of course I have it installed on Mac and Windows. Running like a charm.</li> <li>You can watch the <a href="https://t.co/sK5SmgXpif?amp=1">Release Party recording</a>. It’s like 3 hours. And starts with hats. 
Pablo’s is my fav.</li> <li>Also a <a href="https://www.youtube.com/watch?v=JteTO3EE7y0&amp;t=1s">What’s New video</a> which aired before that with Brandt Bucher, Lukasz Llanga ,and Sebastian Ramirez (33 min) <ul> <li>Includes a deep dive into structural pattern matching that I highly recommend.</li> </ul></li> <li>Reminder of new features: <ul> <li><a href="https://www.python.org/dev/peps/pep-0623/">PEP 623</a> -- Deprecate and prepare for the removal of the wstr member in PyUnicodeObject.</li> <li>PEP 604 -- Allow writing union types as X | Y</li> <li><a href="https://www.python.org/dev/peps/pep-0612/">PEP 612</a> -- Parameter Specification Variables</li> <li><a href="https://www.python.org/dev/peps/pep-0626/">PEP 626</a> -- Precise line numbers for debugging and other tools.</li> <li><a href="https://www.python.org/dev/peps/pep-0618/">PEP 618</a> -- Add Optional Length-Checking To zip.</li> <li><a href="https://bugs.python.org/issue12782">bpo-12782</a>: Parenthesized context managers are now officially allowed.</li> <li><a href="https://www.python.org/dev/peps/pep-0632/">PEP 632</a> -- Deprecate distutils module.</li> <li><a href="https://www.python.org/dev/peps/pep-0613/">PEP 613</a> -- Explicit Type Aliases</li> <li><a href="https://www.python.org/dev/peps/pep-0634/">PEP 634</a> -- Structural Pattern Matching: Specification</li> <li><a href="https://www.python.org/dev/peps/pep-0635/">PEP 635</a> -- Structural Pattern Matching: Motivation and Rationale</li> <li><a href="https://www.python.org/dev/peps/pep-0636/">PEP 636</a> -- Structural Pattern Matching: Tutorial</li> <li><a href="https://www.python.org/dev/peps/pep-0644/">PEP 644</a> -- Require OpenSSL 1.1.1 or newer</li> <li><a href="https://www.python.org/dev/peps/pep-0624/">PEP 624</a> -- Remove Py_UNICODE encoder APIs</li> <li><a href="https://www.python.org/dev/peps/pep-0597/">PEP 597</a> -- Add optional EncodingWarning</li> </ul></li> <li>Takeaway I wasn’t expecting: <code>black</code> doesn’t handle 
Structural Pattern Matching yet. </li> </ul> <p><strong>Yael #3:</strong> <a href="https://github.com/PyCQA/prospector"><strong>Prospector</strong></a> <a href="https://github.com/PyCQA/prospectors">(almost)</a> <a href="https://github.com/PyCQA/prospector">All Python analysis tools together</a></p> <ul> <li>Instead of running pylint, pycodestyle, mccabe and other separately, prospector allows you to bundle them all together </li> <li>Includes the common <a href="https://www.pylint.org/">Pylint</a> and <a href="https://github.com/PyCQA/pydocstyle">Pydocstyle / Pep257</a>, but also some other less common goodies, such as <a href="https://github.com/PyCQA/mccabe">Mccabe</a>, <a href="https://github.com/landscapeio/dodgy">Dodgy</a>, <a href="https://github.com/jendrikseipp/vulture">Vulture</a>, <a href="https://github.com/PyCQA/bandit">Bandit</a>, <a href="https://github.com/regebro/pyroma">Pyroma</a> and many others </li> <li>Relatively easy configuration that supports profiles, for different cases</li> <li>Built-in support for celery, Flask and Django frameworks</li> <li><a href="https://soshace.com/how-to-use-prospector-for-python-static-code-analysis/">https://soshace.com/how-to-use-prospector-for-python-static-code-analysis/</a></li> </ul> <p><strong>Michael #4:</strong> <a href="https://twitter.com/__aviperl__/status/1442542251817652228"><strong>Rich Pandas DataFrames</strong></a></p> <ul> <li>via Avi Perl, by Khuyen Tran</li> <li>Create animated and pretty Pandas Dataframe or Pandas Series (in the terminal, using Rich)</li> <li>I just had Will over on Talk Python last week BTW: <a href="https://talkpython.fm/episodes/show/336/terminal-magic-with-rich-and-textual"><strong>Terminal magic with Rich and Textual</strong></a></li> <li>Can limit rows, control the animation speed, show head or tail, go “full screen” with clear, etc.</li> <li>Example:</li> </ul> <pre><code> from sklearn.datasets import fetch_openml from rich_dataframe import prettify speed_dating = 
fetch_openml(name='SpeedDating', version=1)['frame'] table = prettify(speed_dating) </code></pre> <p><strong>Brian #5:</strong> <strong>Union types, baby!</strong></p> <ul> <li>From Python 3.10: “<a href="https://www.python.org/dev/peps/pep-0604/">PEP 604</a> -- Allow writing union types as X | Y”</li> <li>Use as possibly not intended, to avoid Optional:</li> </ul> <pre><code> def foo(x: str | None = None) -&gt; None: pass </code></pre> <ul> <li>3.9 example:</li> </ul> <pre><code> from typing import Optional def foo(x: Optional[str] = None) -&gt; None: pass </code></pre> <ul> <li>But here’s the issue. I need to support Python 3.9 at least, and probably early, what should I do?</li> <li>For 3.7 and above, you can use <code>from __future__ import annotations</code>.</li> <li>And of course Anthony Sottile worked this into <code>pyupgrade</code> and Adam Johnson wrote about it: <ul> <li><a href="https://adamj.eu/tech/2021/05/21/python-type-hints-how-to-upgrade-syntax-with-pyupgrade/">Python Type Hints - How to Upgrade Syntax with pyupgrade</a></li> </ul></li> <li>This article covers: <ul> <li><a href="https://www.python.org/dev/peps/pep-0585/">PEP 585</a> added generic syntax to builtin types. This allows us to write e.g. <code>list[int]</code> instead of using <code>typing.List[int]</code>.</li> <li><a href="https://www.python.org/dev/peps/pep-0604/">PEP 604</a> added the <code>|</code> operator as union syntax. This allows us to write e.g. <code>int | str</code> instead of <code>typing.Union[int, str]</code>, and <code>int | None</code> instead of <code>typing.Optional[int]</code>.</li> <li>How to use these. What they look like. And how to use <code>pyupgrade</code> to just convert your code for you if you’ve already written it the old way. 
Awesome.</li> </ul></li> </ul> <p><strong>Yael #6:</strong> <a href="https://dev.to/akaihola/improving-python-code-incrementally-3f7a"><strong>Make your code darker - Improving Python code incrementally</strong></a></p> <ul> <li>The idea behind <a href="https://pypi.org/project/darker">Darker</a> is to reformat code using <a href="https://pypi.org/project/black">Black</a> (and optionally <a href="https://pypi.org/project/isort">isort</a>), but only apply new formatting to regions which have been modified by the developer</li> <li>Instead of having one huge PR, darker allows you to reformat the code gradually, when you're touching the code for other reasons.. </li> <li>Every modified line, will be black formatted</li> <li>Once added to <a href="https://github.com/akaihola/darker#using-as-a-pre-commit-hook">Git pre-commit-hook</a>, or added to <a href="https://github.com/akaihola/darker#pycharmintellij-idea">PyCharm</a> <a href="https://github.com/akaihola/darker#pycharmintellij-idea"><em>*</em>*</a>/ <a href="https://github.com/akaihola/darker#visual-studio-code">VScode</a> the formatting will happen automatically</li> </ul> <p><strong>Extras</strong></p> <p>Brian:</p> <ul> <li>I got a couple PRs accepted into pytest. So that’s fun: <ul> <li><a href="https://github.com/pytest-dev/pytest/pull/9133">9133: Add a deselected parameter to assert_outcomes()</a></li> <li><a href="https://github.com/pytest-dev/pytest/pull/9134">9134: Add a pythonpath setting to allow paths to be added to sys.path</a></li> <li>I’ve tested, provided feedback, written about, and submitted issues to the project before. I’ve even contributed some test code. But these are the first source code contributions.</li> <li>It was a cool experience. 
Great team there at pytest.</li> </ul></li> </ul> <p>Michael:</p> <ul> <li>New htmx course: <a href="https://training.talkpython.fm/courses/htmx-flask-modern-python-web-apps-hold-the-javascript?utm_source=pythonbytes"><strong>HTMX + Flask: Modern Python Web Apps, Hold the JavaScript</strong></a></li> <li><a href="https://pypi.org/project/auto-optional/"><strong>auto-optional</strong></a>: Due to the comments on the show I remembered to add support for <code>Union[X, None]</code> and python 10’s <code>X | None</code> syntax.</li> <li><a href="https://nedbatchelder.com/blog/202110/coverage_60.html">Coverage 6.0 released</a></li> <li><a href="https://docs.djangoproject.com/en/3.2/releases/3.2.8/">Django 3.2.8 released</a></li> </ul> <p>Yael:</p> <ul> <li><a href="https://www.manning.com/books/data-oriented-programming">data-oriented-programming</a> - an innovative approach to coding without OOP, with an emphasis on code and data separation, which simplifies state management and eases concurrency</li> <li>Help us to make <a href="https://github.com/hiredscorelabs/cornell">Cornell</a> awesome 🙂 - contributors are warmly welcomed</li> </ul> <p><strong>Joke:</strong> <a href="https://geek-and-poke.com/geekandpoke/2021/1/24/pair-captchaing"><strong>Pair CAPTCHAing</strong></a></p>
Categories: FLOSS Project Planets

tanay.co.in: Globant Tech Insiders - Drupal Content Staging and Deployment - Best Practices Deliberated!

Planet Drupal - Thu, 2021-10-07 01:36

 At Globant, on my team, we have been exploring multiple content staging and deployment strategies. The observations from some of which can be found in my earlier posts here and here.

Next Tuesday, Makbul and Rahul from my team are presenting a consolidation of our findings, and a refined strategy that we use in our enterprise projects to create, stage and deploy content across environments using a robust process that increases our productivity significantly.

Sign up for the webinar @ https://bit.ly/3DdBAKD

tanay Thu, 10/07/2021 - 00:36
Categories: FLOSS Project Planets

Pages