Feeds

The Drop Times: 10 Best E-commerce Modules For Drupal 9 [Most Installed]

Planet Drupal - Tue, 2022-06-14 06:57
Drupal e-commerce modules are useful for managing various aspects of your site, which is why you should know which modules are worth getting. Here are the most installed e-commerce modules.
Categories: FLOSS Project Planets

PyCharm

Planet Python - Tue, 2022-06-14 06:39

I’ve been a long-time Pandas user, relying on it heavily since the start of my data science career. However, up until the last couple of years, I struggled with certain issues, such as not being able to work with very large DataFrames or efficiently run heavy data processing tasks. I’d also often find my Jupyter notebooks cluttered with intermediate DataFrames after applying transformations, making it harder to read the code and keep my notebook tidy. For a long time, I had thought that these issues were just endemic to Pandas and accepted there wasn’t a better way; however, there is!

If you find yourself dealing with similar issues, join Matt and me as we discuss tips for making Pandas more memory-friendly, getting the best performance possible when applying operations to Series and DataFrames, and keeping your Pandas code as reproducible and tidy as possible. We’ll also be talking about using Pandas as a powerful basis for data visualization, and how using the right tooling can make working with Pandas easier.
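Two of the most common memory-friendly techniques the webinar alludes to are converting low-cardinality string columns to the categorical dtype and downcasting oversized numeric columns. A minimal sketch (the column names and data are made up for illustration):

```python
import pandas as pd

# A toy frame standing in for a large CSV load.
df = pd.DataFrame({
    "city": ["Berlin", "Paris", "Berlin", "Paris"] * 25_000,
    "sales": [100, 200, 150, 175] * 25_000,
})
before = df.memory_usage(deep=True).sum()

# Low-cardinality strings shrink dramatically as a categorical dtype.
df["city"] = df["city"].astype("category")

# Integer columns default to int64; downcast when the values allow it.
df["sales"] = pd.to_numeric(df["sales"], downcast="integer")

after = df.memory_usage(deep=True).sum()
```

Chaining transformations with `.assign()` and `.pipe()` instead of saving each intermediate DataFrame also helps with the notebook-clutter problem mentioned above.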

Attendees of this webinar will receive a discount code for Matt’s book, Effective Pandas, which includes even more tips on how to get the most out of Pandas. In addition, join us on Twitter where we’re running a competition this week asking for your best Pandas tips. Five tips will be selected during the webinar and the winners will receive a free hard copy of the book!

When
  • Tuesday, June 21
  • 5:00 pm (UTC)

Register Now!

Speaking to You

Matt Harrison runs MetaSnake, a Python and Data Science consultancy and corporate training shop. In the past, he has worked across the domains of search, build management and testing, business intelligence, and storage. He has presented and taught tutorials at conferences such as Strata, SciPy, SCALE, PyCON, and OSCON, as well as local user conferences. He has written a number of books on Python and Data Science, including Machine Learning Pocket Reference, Pandas 1.x Cookbook, Effective PyCharm, and Effective Pandas. He blogs at hairysun.com.

Dr. Jodie Burchell is a Developer Advocate for Data Science at JetBrains and was previously the Lead Data Scientist in audiences generation at Verve Group Europe. After finishing a PhD in psychology and a postdoc in biostatistics, she has worked in a range of data science and machine learning roles across search improvement, recommendation systems, natural language processing, and programmatic advertising. She is also the author of two books, The Hitchhiker’s Guide to Ggplot2 and The Hitchhiker’s Guide to Plotnine, and blogs at t-redactyl.io.

Categories: FLOSS Project Planets

ListenData: Region Proposal Network (RPN) : A Complete Guide

Planet Python - Tue, 2022-06-14 06:14
This tutorial is a detailed, step-by-step guide to how the Region Proposal Network (RPN) works. It is mainly used in the R-CNN family of models for object detection. Those who are familiar with R-CNN may already have encountered the term Region Proposal Network. Before we get into the details of RPN, let's understand how object detection works using R-CNN.

Object Detection using R-CNN

R-CNN is a Region-based Convolutional Neural Network. It is a state-of-the-art architecture for object detection. Let's say you have a photograph, and the goal of object detection is to detect the cars and people in it. There are several cars and people in the photograph, so we need to detect all of them.

How does it work?

1. Region Proposal

Extract many regions from an image using Selective Search; 2,000 regions were used in the original paper. The proposed regions are drawn as multiple bounding boxes on the input image. See the yellow-bordered boxes in the image below.

2. Calculate CNN Features

Create a feature vector for each proposed region using a CNN trained for image classification.

3. Classify Regions

Classify each region by passing the feature vector generated in the previous step to a linear Support Vector Machine (SVM) model for each category.
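The three steps above can be sketched as a pipeline. Every helper below is a stand-in: real R-CNN uses Selective Search, a trained CNN, and one linear SVM per category, none of which are implemented here.

```python
import random

def propose_regions(image, n=2000):
    # Stand-in for Selective Search: n candidate boxes as (x, y, w, h).
    return [(random.randint(0, 50), random.randint(0, 50), 32, 32) for _ in range(n)]

def cnn_features(image, box):
    # Stand-in for a CNN trained on image classification: a fixed-size vector.
    return [float(v) for v in box]

def svm_scores(features):
    # Stand-in for one linear SVM score per category.
    return {"car": (sum(features) % 7) / 7.0, "person": 0.5}

def rcnn_detect(image, n_regions=2000):
    detections = []
    for box in propose_regions(image, n=n_regions):   # 1. region proposal
        feats = cnn_features(image, box)              # 2. CNN features
        scores = svm_scores(feats)                    # 3. per-category classification
        best = max(scores, key=scores.get)
        detections.append((box, best))
    return detections

dets = rcnn_detect(image=None, n_regions=5)
```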

Region Proposal Network : What does it do?

In R-CNN, we extracted many regions using Selective Search. The problem with Selective Search is that it is an offline algorithm and computationally expensive. This is where the Region Proposal Network comes into the picture. In Faster R-CNN, the Region Proposal Network was introduced: a small network that predicts region proposals. The RPN has a classifier that returns the probability of a region containing an object, and a regressor that returns the coordinates of the bounding boxes.
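In practice, the RPN's classifier and regressor score a fixed set of anchor boxes at every feature-map location. A minimal sketch of anchor generation, assuming the common 3-scales-by-3-aspect-ratios setup (the sizes here are illustrative, not taken from this tutorial):

```python
import numpy as np

def generate_anchors(base_size=16, scales=(8, 16, 32), ratios=(0.5, 1.0, 2.0)):
    """Anchor boxes (x1, y1, x2, y2) centred on one feature-map location."""
    anchors = []
    for ratio in ratios:          # ratio = height / width
        for scale in scales:
            area = float(base_size * scale) ** 2
            w = np.sqrt(area / ratio)
            h = w * ratio
            anchors.append([-w / 2, -h / 2, w / 2, h / 2])
    return np.array(anchors)

anchors = generate_anchors()  # 3 ratios x 3 scales -> 9 anchors per location
```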

The following are the steps involved in a Region Proposal Network.

READ MORE »
Categories: FLOSS Project Planets

Andre Roberge: Friendly IDLE

Planet Python - Tue, 2022-06-14 04:10

friendly_idle is now available.  This is just a quick announcement. Eventually I plan to write a longer blog post explaining how I use import hooks to patch IDLE and to provide seamless support for friendly/friendly_traceback.  Before I incorporated "partial" support for IDLE within friendly, I had released a package named friendly_idle ... but this is really a much better version.


When you launch it from a terminal, the only clue you get that this is not your regular IDLE is from the window title.


Since Python 3.10 (and backported to Python 3.8.10 and 3.9.5), IDLE provides support for sys.excepthook() (see the announcement).  What the announcement does not point out is that this is only partial support: exceptions raised because of syntax errors cannot be captured by user-defined exception hooks.  However, fear not: friendly_idle is perfectly capable of helping you when your code has syntax errors.
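For context, a user-defined exception hook is just a callable assigned to sys.excepthook. The hook below is a toy stand-in for what friendly/friendly_traceback installs, not its actual code:

```python
import sys

def friendly_hook(exc_type, exc_value, exc_traceback):
    # Toy stand-in: replace the default traceback dump with a shorter message.
    print(f"Oh no, a {exc_type.__name__}: {exc_value}", file=sys.stderr)

sys.excepthook = friendly_hook
# Any uncaught *runtime* exception now goes through friendly_hook.
# SyntaxErrors raised while compiling the code never reach it, which is
# exactly the gap in IDLE's support described above.
```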


And, of course, it can also do so for runtime errors.

The same is true for code run from a file as well:


If the code in a file contains a syntax error, friendly_idle is often much more helpful than IDLE. Here's an example from IDLE:

And the same example run using friendly_idle:
Unfortunately, the tkinter error box does not use a monospace font (which friendly/friendly_traceback assumes for its formatting), and does not allow customization.  I might have to figure out how to create my own dialog, hopefully with support for a monospace font and colour highlighting. If anyone has experience doing this, feel free to contact me! ;-)






Categories: FLOSS Project Planets

Python Bytes: #288 Performance benchmarks for Python 3.11 are amazing

Planet Python - Tue, 2022-06-14 04:00
Watch the live stream: https://www.youtube.com/watch?v=2ZTEEy1_Gkk

About the show

Sponsored by us! Support our work through:
  • Our courses at Talk Python Training (https://training.talkpython.fm/)
  • The Test & Code podcast (https://testandcode.com/)
  • Patreon supporters (https://www.patreon.com/pythonbytes)

Brian #1: Polars: Lightning-fast DataFrame library for Rust and Python (https://www.pola.rs/)
  • Suggested by several listeners
  • “Polars is a blazingly fast DataFrames library implemented in Rust using the Apache Arrow Columnar Format as its memory model: lazy | eager execution, multi-threaded, SIMD (Single Instruction/Multiple Data), query optimization, powerful expression API, Rust | Python | ...”
  • The Python API is set up to allow parallel execution while sidestepping GIL issues, for both lazy and eager use cases. From the docs: Do not kill the parallelization.
  • The syntax is very functional and pipeline-esque:

    import polars as pl

    q = (
        pl.scan_csv("iris.csv")
        .filter(pl.col("sepal_length") > 5)
        .groupby("species")
        .agg(pl.all().sum())
    )
    df = q.collect()

  • The Polars User Guide (https://pola-rs.github.io/polars-book/user-guide/) is excellent and looks like it’s entirely written with Python examples.
  • Includes a 30-minute intro video from PyData Global 2021.

Michael #2: The PSF survey is out (https://lp.jetbrains.com/python-developers-survey-2021/)
  • Have a look; their page summarizes it better than my bullet points will.

Brian #3: Gin Config: a lightweight configuration framework for Python (https://github.com/google/gin-config)
  • Found through Vincent D. Warmerdam’s excellent intro videos on gin on calmcode.io.
  • Quickly make parts of your code configurable through a configuration file with the @gin.configurable decorator.
  • It’s an interesting take on config files. (Example from Vincent)

    # simulate.py
    @gin.configurable
    def simulate(n_samples):
        ...

    # config.py
    simulate.n_samples = 100

  • You can specify:
    - required settings: def simulate(n_samples=gin.REQUIRED)
    - blacklisted settings: @gin.configurable(blacklist=["n_samples"])
    - external configurations (specify values for functions your code is calling)
    - references to other functions: dnn.activation_fn = @tf.nn.tanh
  • Documentation suggests that it is especially useful for machine learning.
  • From the motivation section: “Modern ML experiments require configuring a dizzying array of hyperparameters, ranging from small details like learning rates or thresholds all the way to parameters affecting the model architecture. Many choices for representing such configuration (proto buffers, tf.HParams, ParameterContainer, ConfigDict) require that model and experiment parameters are duplicated: at least once in the code where they are defined and used, and again when declaring the set of configurable hyperparameters. Gin provides a lightweight dependency-injection-driven approach to configuring experiments in a reliable and transparent fashion. It allows functions or classes to be annotated as @gin.configurable, which enables setting their parameters via a simple config file using a clear and powerful syntax. This approach reduces configuration maintenance, while making experiment configuration transparent and easily repeatable.”

Michael #4: Performance benchmarks for Python 3.11 are amazing (https://twitter.com/EduardoOrochena/status/1534913062356099079)
  • via Eduardo Orochena
  • Performance may be the biggest feature of all.
  • Python 3.11 has:
    - task groups in asyncio
    - fine-grained error locations in tracebacks
    - the Self type, to return an instance of the enclosing class
  • The "Faster CPython Project" aims to speed up the reference implementation.
    - See my interview with Guido and Mark: talkpython.fm/339
    - Python 3.11 is 10–60% faster than Python 3.10 according to the official figures, and shows a 1.22x speed-up on their standard benchmark suite.
  • Arriving as stable in October.

Extras

Michael:
  • Python 3.10.5 is available (https://www.python.org/downloads/release/python-3105/)
  • Raycast (https://www.raycast.com) (vs Spotlight), e.g. CMD+Space => PyPI search

Joke: Why wouldn't you choose a parrot for your next application (https://devhumor.com/media/why-wouldn-t-you-choose-a-parrot-for-your-next-application)
Categories: FLOSS Project Planets

Kalendar Contact Book - Current Progress

Planet KDE - Tue, 2022-06-14 03:00

During my long train trip to the Linux App Summit 2022, I started working on a contact book feature in Kalendar. There was already a small contact integration in the event editor to select attendees for an event and I wanted to extend it with a simple contact info viewer and editor.

When I started it, I was full of hope that this would be a simple task and would be easy to finish. Unfortunately more than one month later, it’s not finished but there is a lot of progress that I can already show off.

The Contact View

The contact view is the most immediate visual change that users will notice when starting Kalendar. It’s a new view available in the sidebar, and it displays all your contacts synchronized with Kalendar. It also features a search field for finding a contact easily, which is very helpful when you have many hundreds of contacts.

The contact view showing a few contacts

Currently, not all the properties that a vCard can contain are displayed, but it is easy to add more of them later on.

Internally, the contact view uses an Akonadi::ItemMonitor so that the changes to the contact are immediately reflected in the view, even if the changes happened in KAddressBook or were synced in the background from an online service.

Contact Book Settings

Kalendar has access to the same sources as KAddressBook, for example WebDAV (e.g. Nextcloud), EteSync, Microsoft Exchange Server, and local vCard files.

The Google contact provider is still broken due to a massive change in Google’s API. It’s a good reminder that open standards are better for users and for developers’ sanity.

Contact book source settings

QR Code Sharing

From the contact view, it is also possible to generate a QR code. This makes it easy to share a contact with your phone. If you want to share and synchronize multiple contacts, it’s better to use a CardDAV-based solution like Nextcloud.

QR code sharing

Plasma Applet

After implementing the contact view, Claudio and I decided to try to keep the codebases for the calendar and contact support mostly separate from each other. To do so, we created a QML plugin that contains all the contact utilities and that can simply be imported with import org.kde.kalendar.contact 1.0.

This code separation helped us develop a Plasma applet for the contact book, integrated into the system tray.

The applet provides an easy way to search for a contact and send them an email or start a call using KDE Connect.

Searching in the Plasma applet

A contact book Plasma applet

It’s also possible to share with a QR code directly from the Plasma applet.

Contact Editor

The contact editor turned out more complicated than planned and is still missing a lot of features.

The contact editor

Currently, it only allows editing the name, phone numbers, and email addresses of a contact. When editing the name, you also have the option to set each component of the name separately.

Advanced name options

There is also handling for the case where the contact was edited in another Akonadi-powered editor (like KAddressBook), asking the user what to do when multiple concurrent edits of the same contact are detected.

Change detection

Contact Group

Kalendar also has support for contact groups. This lets you create a group of contacts with an associated email address. It’s quite helpful when you often want to send mail to a group of contacts.

Contact Group

You can also edit them and add more contacts, and concurrent-editing detection is also built into the contact group editor.

Contact Group editing

Future

There are still a lot of features left to implement: for example, contact deletion and moving/copying a contact to another contact book, but also a lot of contact properties still need to be implemented in the contact book.

These features are relatively straightforward to implement now that the base is here, and if you want to help with the implementation, join our Matrix room. We would be happy to guide you.

Hopefully this will all be ready before the 22.08 release.

The Kalendar team is also working on another big component for Kalendar, so stay tuned.

Categories: FLOSS Project Planets

GSoC Post 0: Introduction

Planet KDE - Tue, 2022-06-14 00:08

Hello, reader! I am Suhaas Joshi, a 20-year-old 3rd-year student at CHRIST University, India. I have been selected for GSoC 2022 as a mentee with KDE. This blog will track my KDE development work during, and after, the GSoC coding period.

About the Project:

Flatpak and Snap applications run inside sandboxes, isolated from the rest of the system, and do not have access to many critical resources. As a result, they often cannot do much, and must seek permission to access the resources they require. Flatpaks access these permissions through “portals”, and Snaps do it via “interfaces”. At installation time, these permissions are usually granted by default. Presently, KDE lacks a home-grown mechanism to edit these permissions.

My SoK 2022 project had two parts: the first was to display these permissions in Discover’s interface, and the second was to develop a mechanism (through a KCM module, it was decided) to let users change these permissions. The first segment of the project was accomplished, but the second was left over.

This GSoC project involves creating the KCM module for Flatpak, and also showing Snap permissions on Discover’s interface in the same way as Flatpaks, as well as creating KCM modules for Snaps.

So far, I have created the repository for Flatpak KCM here.

About Me:

I am a 20-year-old from India, currently pursuing a BTech in Computer Science and Engineering. I have been programming for about 2 years, chiefly in C++, C and Python. I use Fedora KDE as my daily driver, and have been a KDE user for a year. In my free time, I read books on history and politics, and have been getting into chess lately.

Feel free to contact me if you have any suggestions!

Categories: FLOSS Project Planets

John Goerzen: Really Enjoyed Jason Scott’s BBS Documentary

Planet Debian - Mon, 2022-06-13 20:13

Like many young programmers of my age, before I could use the Internet, there were BBSs. I eventually ran one, though in my small town there were few callers.

Some time back, I downloaded a copy of Jason Scott’s BBS Documentary. You might know Jason Scott from textfiles.com and his work at the Internet Archive.

The documentary was released in 2005 and spans 8 episodes on 3 DVDs. I’d watched parts of it before, but recently watched the whole series.

It’s really well done, and it’s not just about the technology. Yes, that figures in, but it’s about the people. At times, it was nostalgic to see people talking about things I clearly remembered. Often, I saw long-forgotten pioneers interviewed. And sometimes, such as with the ANSI art scene, I learned a lot about something I was aware of but never really got into back then.

BBSs and the ARPANet (predecessor to the Internet) grew up alongside each other. One was funded by governments and universities; the other, by hobbyists working with inexpensive equipment, sometimes of their own design.

You can download the DVD images (with tons of extras) or watch just the episodes on Youtube following the links on the author’s website.

The thing about BBSs is that they never actually died. Now I’m looking forward to watching the Back to the BBS documentary series about modern BBSs as well.

Categories: FLOSS Project Planets

Plasma 5.25

Planet KDE - Mon, 2022-06-13 20:00
A host of new features and cool fresh concepts in Plasma 5.25 give you a peek into the future of KDE’s desktop.

Highlights

Gestures

Gestures on touchpads and touchscreens put Plasma at your fingertips

Colors

Bored of grey? Plasma puts a literal rainbow of possibilities at your disposal

Tailor-made

Customizing your desktop has never been easier... or more fun!

Navigate Workspaces

KDE Plasma 5.25 redesigns and enhances how you navigate between windows and workspaces.

Overview

The Overview effect shows all of your open windows and virtual desktops.

You can search for apps, documents, and browser tabs with KRunner and the Application Launcher.

You can add, remove, and rename virtual desktops.

Hold down the Meta (“Windows”) key and press W to enter Overview mode, or use a four-finger pinch on your trackpad.

Gestures

On your touchpad:

  • A four-finger pinch opens the Overview.
  • A three-finger swipe in any direction switches between Virtual Desktops.
  • A downwards four-finger swipe opens Present Windows.
  • An upwards four-finger swipe activates the Desktop Grid.

On your touchscreen:

You can configure swipes from the screen edge to open Overview, Desktop Grid, Present Windows, and Show Desktop as they directly follow your finger.

Open System Settings, pick the Workspace Behavior tab, and then Touch Screen. Click on any of the squares shown on the sides of the monitor icon and a dropdown will open. Select Overview, Desktop Grid, Present Windows, or Show Desktop and click Apply. Now you can slide your finger from the edge of the screen you selected towards the middle of the screen and watch the magic happen.

Colors

Sync the accent color with your wallpaper! The dominant color of your background picture can be applied to all components that use the accent color.

Open System Settings and choose the Appearance tab, then Colors. Select From current wallpaper and click Apply. It is that easy.

With slideshow wallpapers, colors update when the wallpaper changes.

Tint all colors of any color scheme using the accent color and adapt the color of elements of every window to the background. You can also choose how much tint you’d like to see mixed in with your normal color scheme.

Open System Settings and then click on the Appearance tab. Choose Colors and click on the Edit icon (the little pencil button) in the lower-right corner of a color scheme preview and a configuration dialogue will open. In the Options tab, tick the Tint all colors with the accent colors box and slide Tint strength: to the desired value. Click Save as and give your color scheme a new name. Click Close. Select your newly created color scheme and click Apply.

While configuring your color scheme, you can also make the header area or titlebar use the accent color.

Touch and Feel

Activate Touch Mode by detaching the screen, rotating it 360°, or enabling it manually.

If your laptop supports detaching or rotating back the keyboard, do that now. Touch Mode will activate. If not, you can manually enable Touch Mode by opening System Settings, clicking on the Workspace Behavior tab, and selecting the Touch Mode: Always Enabled radio button at the end of the page.

The Task Manager and the System Tray become bigger when in Touch Mode making it easier on your fingers. You can customize the size of the icons when Touch Mode is disabled, too.

To manually increase icon spacing in the Task Manager, right-click the Task Manager and select Configure Icons-Only Task Manager. Select Large in the Spacing between icons: option. To manually increase the icon spacing in the System Tray, right-click on the System Tray and select Configure System Tray. Select Large in the Panel icon spacing: option.

Titlebars of KDE apps become taller when in Touch Mode, making it easier to press, drag, and close windows with touch. Context menu items become taller when in Touch Mode, giving you more space to tap the correct one.

Customization

Floating Panels add a margin all around the panel to make it float, while animating it back to look normal when a window is maximized.

Right-click the panel, select Enter Edit Mode, and then More Options. Select Floating.

Blend Effects gracefully animate the transition when color schemes are changed.

Move your entire desktop, with folders, widgets and panels, from one monitor to another with the Containment Management window.

Right-click on the desktop and select Enter Edit Mode. Choose Manage Desktop and Panels from the top toolbar and drag and drop desktops or panels from one display to another, or click on their hamburger menu.

Other Updates
  • The Global Theme settings page lets you pick and choose which parts to apply so you can apply only the parts of a global theme that you like best.

  • The Application page for Discover has been redesigned and gives you links to the application’s documentation and website, and shows what system resources it has access to.

  • If you get your password wrong, the lock and login screens shake, giving you a visual cue to try again.

  • The KWin Scripts settings page has been rewritten making it easier to manage your window manager scripts.

  • Plasma panels can now be navigated with the keyboard, and you can assign custom shortcuts to focus individual panels.

Hold down the Meta (“Windows”) and Alt keys, then press P to cycle focus between all your panels and navigate between their widgets with the arrow keys. You can also right-click a panel and select Enter Edit Mode. Then choose More Options so you can set a custom shortcut to focus that particular panel.

… And there’s much more going on. If you would like to see the full list of changes, check out the changelog for Plasma 5.25.

Categories: FLOSS Project Planets

Kate + Building in Docker

Planet KDE - Mon, 2022-06-13 18:00

Have I said nice things about Kate recently? Not enough, so let me gush a little about Kate as an “IDE” and using it, with the Build Plugin, as a tool for editing locally and building remotely.

I work on a codebase that has very specific platform requirements. These requirements are difficult to reproduce in a normal host – or, if you have some modern rolling distro like openSUSE, well-nigh-impossible. That’s the situation where Docker shows up, since a Docker container can be whatever specific platform is needed.

So I have a Docker, with the special compilation environment over here, and the host machine, running a recent version of Kate over there. How can I make them work together?

Sharing Directories with Docker

When starting a Docker container with docker run, you can bind-mount locations into the container. So for a builder-container, where you want to have the source code available inside, a convenient way to do it is to bind-mount the source directory in the host, to a location – the same location – in the container.

Something like this:

docker run \
    -v /home/me/src/myproject:/home/me/src/myproject:rw \
    --name myproject \
    myprojectimage \
    /bin/sh

The --name argument is slightly-important. Docker comes up with a creative random name of the container if you don’t give one explicitly, and that makes it harder to connect to the running container.

I should note that my approach is “leave the container running, and connect to it for compilation”, not “start a new container for every build”. Your mileage may vary, and it’s easy enough to bung some commands to start a container and run make in it, in a shell script and use that.
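As a sketch of that shell-script approach, here is a small Python wrapper instead. Note that it uses docker exec against the already-running container, a deliberate variation on the docker run invocation used elsewhere in this post; the container name and path match the examples here, but treat the whole thing as illustrative.

```python
import subprocess

CONTAINER = "myproject"
BUILD_DIR = "/home/me/src/myproject/build"

def build_cmd(jobs: int = 4) -> list[str]:
    # Run make inside the already-running container via `docker exec`.
    return ["docker", "exec", CONTAINER, "make", "-C", BUILD_DIR, f"-j{jobs}"]

def run_build(jobs: int = 4) -> int:
    # check is left off so the caller can inspect the return code.
    return subprocess.run(build_cmd(jobs)).returncode
```

Call run_build() from a terminal, or wire the equivalent command into Kate's Build Plugin as shown below.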

Configuring the Build-in-Docker

The codebase I’m on uses CMake as meta-buildsystem, which leads to the usual (?) convention: there’s a build/ dir underneath the sources, CMake is run there, and then the build proceeds there as well:

# Do this in the container
mkdir /home/me/src/myproject/build
cd /home/me/src/myproject/build
cmake ..

When doing a build like this, Qt’s moc will generate files in (subdirectories-of) the build/ directory; other sources might be generated there as well. Compiler errors might reference absolute paths (e.g. /home/me/src/myproject/main.cc) or might reference relative paths (e.g. ../main.cc).

Depending on the exact way Docker is run from the host, the build directory may or may not be visible. For me, it isn’t, and for various reasons I can’t create the same build/ path in the host. So instead, I create a similar path that leads to the same relative paths back to the source directory:

# Do this in the host
mkdir /home/me/src/myproject/build_

Setting up Kate-in-Host

This is the main course: using Kate as an IDE, running the build inside the Docker container.

First, we need to enable the Build Plugin. Kate comes with a bunch of plugins that provide extra search functionality, IDE functionality, better LaTeX support, .. it’s quite extensive. Go to Settings → Configure Kate.. and choose the Plugins pane (module? I don’t really know what those icons-in-a-column should be called, each of which calls up its own set of tabs). Tick the box in front of the Build Plugin.

Kate Plugin List, with *Build Plugin* checked

Unfortunately, the Build Plugin doesn’t initialize quite right when enabled the first time. Quit Kate, then start it again.

Once it’s back, there is a Build button at the bottom of the window. Click on it to open up the tool view for the plugin. There are target sets, and there’s a top-level “name of the target set” and “working directory” – since this is laid out in a two-column table it looks a bit strange.

Kate Plugin List, with *Build Plugin* checked

The Kate Documentation for the Build Plugin is reasonably extensive.

  • Optionally, click on the T: Target Set label to change the name of the set.
  • Click on the Dir: label to change the directory where the build happens. This is needed because otherwise the build happens either where Kate was started, or the directory where the current file is. I haven’t got a clear answer on that.
  • Tick the box in front of build to make it the default target to build.
  • Click on the make command to edit it.
  • Fill in the command to run the build in the container. I use Docker’s run command, with the name of the container to attach to. -a is a flag to preserve stdin, out, and error. The make command changes directory to the actual build directory, and -j is tuned to the available CPU power.

    docker run -a myproject \
        make -C /home/me/src/myproject/build -j4

There’s a little cog-like icon with tooltip Build selected target. Click on the build line to select it, then hit the cog and see what happens.

Personally I like the KDevelop key-binding of F8 for “build the thing”, so I open the Build menu (top of the Kate window), right-mouse-click on the menu entry Build Default Target, pick Configure Shortcut.. and bind F8 to it. By default, that key is bound to “switch to next view”, but I don’t use split views.

O noes, Ninja

The Build Plugin parses the output of the build command – e.g. make. The Build Plugin also knows how to deal with Ninja. However – and thanks to Christoph Cullmann for helping figure this out – Kate uses a trick to separate the Ninja output from compiler output (with make, this is apparently not needed). The environment variable NINJA_STATUS gets special treatment from Kate. The variable needs to be passed in to the container, so for a ninja build this is the Docker command (where -e NINJA_STATUS means “pass the value of the environment variable in to the container”).

docker run -a myproject \
    -e NINJA_STATUS \
    make -C /home/me/src/myproject/build -j4

Takeaway

With just a few steps, building a weird-ass codebase in a container can be a lot more pleasant by editing it outside the container with modern tools, and “remote build” is supported by Kate quite well.

If you wonder why I’m not using KDevelop, well, two things: one, it actually has less documentation on custom and remote builds, and two, it crashes just trying to read the weird-ass codebase. I have yet to debug the latter.

Categories: FLOSS Project Planets

Chris Moffitt: Using Document Properties to Track Your Excel Reports

Planet Python - Mon, 2022-06-13 15:25
Introduction

When doing analysis with Jupyter Notebooks, you will frequently find yourself generating ad-hoc Excel reports to distribute to your end-users. After a while, you might end up with dozens (or hundreds) of notebooks, and it can be challenging to remember which notebook generated which Excel report. I have started using Excel document properties to track which notebooks generate specific Excel files. Now, when a user asks for a refresh of a 6 month old report, I can easily find the notebook file and re-run the analysis. This simple process can save a lot of frustration for your future self. In this brief article, I will walk through how to set these properties and give some shortcuts for using VS Code to simplify the process.

Background

How often has this happened to you? You get an email from a colleague asking you to refresh some analysis you did for them many months ago. You can tell that you created the Excel file from a notebook but can’t remember which notebook you used. Despite trying to be as organized as possible, it is inevitable that you will waste time trying to find the originating notebook.

The nice aspect of the Excel document properties is that most people don’t change them. So, even if a user renames the file, the properties you set will be easily visible and should point the way to where the original code sits on your system.

Adding Properties

If you’re using pandas and xlsxwriter, adding properties is relatively simple.

Here’s a simple notebook showing how I typically structure my analysis:

import pandas as pd
from pathlib import Path
from datetime import datetime

today = datetime.now()
report_file = Path.cwd() / 'reports' / f'sales_report_{today:%b-%d-%Y}.xlsx'
url = 'https://github.com/chris1610/pbpython/blob/master/data/2018_Sales_Total_v2.xlsx?raw=True'
df = pd.read_excel(url)

The important point is that I try to always use a standard naming convention that includes the date in the name as well as a standard directory structure.

Now, I’ll do a groupby to show sales by month for each account:

sales_summary = df.groupby(['name', pd.Grouper(key='date', freq='M')]).agg({
    'ext price': 'sum'
}).unstack()

Here’s what the basic DataFrame output looks like:

The final step is to save the DataFrame to Excel using the pd.ExcelWriter context manager and set the document properties:

with pd.ExcelWriter(report_file,
                    engine='xlsxwriter',
                    date_format='mmm-yyyy',
                    datetime_format='mmm-yyyy') as writer:
    sales_summary.to_excel(writer, sheet_name='2018-sales')
    workbook = writer.book
    workbook.set_properties({
        'category': r'c:\Users\cmoffitt\Documents\notebooks\customer_analysis',
        'title': '2018 Sales Summary',
        'subject': 'Analysis for Anne Analyst',
        'author': '1-Excel-Properties.ipynb',
        'status': 'Initial draft',
        'comments': 'src_dir: customer_analysis',
        'keywords': 'notebook-generated'
    })

Once this is done, you can view the properties in a couple of different ways.

First, you can hover over the filename and get a quick view:

You can also view the details without opening Excel:

You can view the properties through Excel:

As you can see from the example, there are a handful of options for the properties. I encourage you to adjust these based on your own needs. For example, I save all of my work in a notebooks directory so it’s most useful to me to specify the src_dir in the Comments section. This will quickly point me to the right directory and the Authors property lets me know which specific file I used.
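If you want to check these breadcrumbs programmatically rather than by hovering in the file manager, it helps to know that an .xlsx file is just a zip archive whose core properties live in docProps/core.xml. The helper below is a standard-library sketch of that idea; the tiny demo file it builds is a bare stand-in for a real workbook, and the property-name mappings in the comments are my reading of how xlsxwriter stores them:

```python
import zipfile
import xml.etree.ElementTree as ET

# XML namespaces used by docProps/core.xml inside an .xlsx package.
NS = {
    "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def read_core_properties(xlsx_path):
    """Return selected core document properties stored in an .xlsx file."""
    with zipfile.ZipFile(xlsx_path) as zf:
        root = ET.fromstring(zf.read("docProps/core.xml"))

    def text(tag):
        el = root.find(tag, NS)
        return el.text if el is not None else None

    return {
        "title": text("dc:title"),
        "author": text("dc:creator"),        # xlsxwriter's 'author' property
        "comments": text("dc:description"),  # xlsxwriter's 'comments' property
        "keywords": text("cp:keywords"),
        "category": text("cp:category"),
    }

# Build a minimal stand-in "workbook" so the sketch runs without a real report.
core = (
    '<cp:coreProperties '
    'xmlns:cp="http://schemas.openxmlformats.org/package/2006/metadata/core-properties" '
    'xmlns:dc="http://purl.org/dc/elements/1.1/">'
    '<dc:title>2018 Sales Summary</dc:title>'
    '<dc:creator>1-Excel-Properties.ipynb</dc:creator>'
    '</cp:coreProperties>'
)
with zipfile.ZipFile("demo.xlsx", "w") as zf:
    zf.writestr("docProps/core.xml", core)

props = read_core_properties("demo.xlsx")
print(props["author"])
```

This is handy when you want to scan a whole reports directory and list which notebook generated each file.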

Observant readers will notice that I used this as an example to show how to adjust the date formats of the Excel output as well. As you can see below, I have adjusted the Excel output so that only the month and year are shown in the header. I find this much easier than going in and adjusting every example by hand.

Here’s what it looks like now:

Using VS Code Snippets

If you find this helpful, you may want to set up a snippet in VS Code to make this easier. I covered how to create snippets in this article so refer back to that for a refresher.

Here is a starter snippet to save the file to Excel and populate some properties:

"Write Excel": {
    "prefix": "we",
    "body": [
        "# Excelwriter",
        "with pd.ExcelWriter(report_file, engine='xlsxwriter', date_format='mmm-yyyy', datetime_format='mmm-yyyy') as writer:",
        "\t$1.to_excel(writer, sheet_name='$2')",
        "\tworkbook = writer.book",
        "\tworkbook.set_properties({'category': r'$TM_DIRECTORY', 'author': '$TM_FILENAME'})"
    ],
    "description": "Write Excel file"
}

One nice benefit of using the snippet is that you can access VS Code variables such as $TM_DIRECTORY and $TM_FILENAME to pre-populate the current path and name.

Conclusion

When working with Jupyter Notebooks it is important to have a consistent process for organizing and naming your files and directories. Otherwise the development process can get very chaotic. Even with good organization skills, it is easy to lose track of which scripts generate which outputs. Using the Excel document properties can be a quick and relatively painless way to lay out some breadcrumbs so that it is easy to recreate your analysis.

Let me know in the comments if you have any other tips you’ve learned over the years.

Categories: FLOSS Project Planets

Ben Hutchings: Debian LTS work, May 2022

Planet Debian - Mon, 2022-06-13 14:30

In May I was assigned 11 hours of work by Freexian's Debian LTS initiative and carried over 13 hours from April. I worked 8 hours, and will carry over the remaining time to June.

I spent some time triaging security issues for Linux, working out which of them were fixed upstream and which actually applied to the versions provided in Debian 9 "stretch". I rebased the Linux 4.9 (linux) package on the latest stable update, but did not make an upload this month. I started backporting several security fixes to 4.9, but those still have to be tested and reviewed.

Categories: FLOSS Project Planets

Talking Drupal: Talking Drupal #351 - Core Theming $h!t

Planet Drupal - Mon, 2022-06-13 14:00

Today we are talking about Core Theming with Cristina Chumillas.

www.talkingDrupal.com/351

Topics
  • What’s new in core theming?
  • Why is Claro in core important?
  • Why is Olivero in core important?
  • Why was it so long between new themes?
  • Continuous improvement?
  • What is the biggest improvement?
  • What happens to old themes?
  • Accessibility
  • CSS
    • Build tools
  • Drupal 10
    • IE
    • UC
  • Compound elements
  • Getting involved
Resources

Guests

Hosts

Nic Laflin - www.nLighteneddevelopment.com @nicxvan John Picozzi - www.epam.com @johnpicozzi Mike Herchel - herchel.com - @mikeherchel

MOTW

Quicklink This module provides an implementation of Google Chrome Lab’s Quicklink library for Drupal. Quicklink is a lightweight (< 1kb compressed) JavaScript library that enables faster subsequent page-loads by prefetching in-viewport links during idle time.

Categories: FLOSS Project Planets

Edward Betts: Fixing spelling in GitHub repos using codespell

Planet Debian - Mon, 2022-06-13 13:24

Codespell is a spell checker specifically designed for finding misspellings in source code.

I've been using it to correct spelling mistakes in GitHub repos since 2016.

Most spell checkers use a list of valid words, highlighting any word in a document that is not in the word list. This method doesn't work for source code: because code contains abbreviations and words joined together without spaces, a spell checker will generate too many false positives.

Codespell uses a different approach: instead of a list of valid words, it has a dictionary of common misspellings.
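The dictionary-driven approach can be sketched in a few lines of Python. This is a hypothetical toy, not codespell's actual implementation; the tiny MISSPELLINGS mapping stands in for codespell's real dictionary file:

```python
import re

# Toy stand-in for codespell's dictionary of known misspellings.
MISSPELLINGS = {"sensitve": "sensitive", "recieve": "receive", "teh": "the"}

def check_line(line, lineno):
    """Yield (lineno, wrong, right) for each known misspelling on a line."""
    for word in re.findall(r"[A-Za-z]+", line):
        fix = MISSPELLINGS.get(word.lower())
        if fix:
            yield lineno, word, fix

source = 'responseType = "case sensitve"  # recieve the header'
for lineno, wrong, right in check_line(source, 13):
    print(f"{lineno}: {wrong} ==> {right}")
```

Because only known-bad words trigger a report, identifiers like responseType pass through silently, which is exactly why this approach produces so few false positives on code.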

Currently the codespell dictionary includes 34,466 known misspellings. I've contributed 300 misspellings to the dictionary.

Whenever I find an interesting open source project I run codespell to check for spelling mistakes. Most projects have spelling mistakes and I can send a pull request to fix them.

In 2019 Microsoft made the Windows calculator open source and uploaded it to GitHub. I used codespell to find some spelling mistakes, sent them a pull request and they accepted it.

A great source for GitHub repos to spell check is Hacker News. Let's have a look.

Hacker News has a link to forum software called Flarum. I can use codespell to look for spelling mistakes. When I'm looking for errors in a GitHub repo I don't fork the project until I know there is a spelling mistake to fix.

edward@x1c9 ~/spelling> git clone git@github.com:flarum/flarum.git
Cloning into 'flarum'...
remote: Enumerating objects: 1338, done.
remote: Counting objects: 100% (42/42), done.
remote: Compressing objects: 100% (23/23), done.
remote: Total 1338 (delta 21), reused 36 (delta 19), pack-reused 1296
Receiving objects: 100% (1338/1338), 725.02 KiB | 1.09 MiB/s, done.
Resolving deltas: 100% (720/720), done.
edward@x1c9 ~/spelling> cd flarum/
edward@x1c9 ~/spelling/flarum (master)> codespell -q3
./public/web.config:13: sensitve ==> sensitive
edward@x1c9 ~/spelling/flarum (master)> gh repo fork
✓ Created fork EdwardBetts/flarum
? Would you like to add a remote for the fork? Yes
✓ Added remote origin
edward@x1c9 ~/spelling/flarum (master)> git checkout -b spelling
Switched to a new branch 'spelling'
edward@x1c9 ~/spelling/flarum (spelling)> codespell -q3
./public/web.config:13: sensitve ==> sensitive
edward@x1c9 ~/spelling/flarum (spelling)> codespell -q3 -w
FIXED: ./public/web.config
edward@x1c9 ~/spelling/flarum (spelling)> git commit -am "Correct spelling mistakes"
[spelling bbb04c7] Correct spelling mistakes
 1 file changed, 1 insertion(+), 1 deletion(-)
edward@x1c9 ~/spelling/flarum (spelling)> git push -u origin
Enumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 8 threads
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 360 bytes | 360.00 KiB/s, done.
Total 4 (delta 3), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (3/3), completed with 3 local objects.
remote:
remote: Create a pull request for 'spelling' on GitHub by visiting:
remote:      https://github.com/EdwardBetts/flarum/pull/new/spelling
remote:
To github.com:EdwardBetts/flarum.git
 * [new branch]      spelling -> spelling
branch 'spelling' set up to track 'origin/spelling'.
edward@x1c9 ~/spelling/flarum (spelling)> gh pr create
Creating pull request for EdwardBetts:spelling into master in flarum/flarum
? Title Correct spelling mistakes
? Choose a template Open a blank pull request
? Body <Received>
? What's next? Submit
https://github.com/flarum/flarum/pull/81
edward@x1c9 ~/spelling/flarum (spelling)>

That worked. I found one spelling mistake, the word "sensitive" was spelled wrong. I forked the repo, fixed the spelling mistake and submitted the fix as a pull request.

The maintainer of Flarum accepted my pull request.

Fixing spelling mistakes in Bootstrap helped me unlock the Mars 2020 Contributor achievement on GitHub.

Why not try running codespell on your own codebase? You'll probably find some spelling mistakes to fix.

Categories: FLOSS Project Planets

GNU Guix: Celebrating 10 years of Guix in Paris, 16–18 September

GNU Planet! - Mon, 2022-06-13 11:00

It’s been ten years of GNU Guix! To celebrate, and to share knowledge and enthusiasm, a birthday event will take place on September 16–18th, 2022, in Paris, France. The program is being finalized, but you can already register!

This is a community event with several twists to it:

  • Friday, September 16th, is dedicated to reproducible research workflows and high-performance computing (HPC)—the focuses of the Guix-HPC effort. It will consist of talks and experience reports by scientists and practitioners.
  • Saturday targets Guix and free software enthusiasts, users and developers alike. We will reflect on ten years of Guix, show what it has to offer, and present on-going developments and future directions.
  • On Sunday, users, developers, developers-to-be, and other contributors will discuss technical and community topics and join forces for hacking sessions, unconference style.

Check out the web site and consider registering as soon as possible so we can better estimate the size of the birthday cake!

If you’re interested in presenting a topic, in facilitating a session, or in organizing a hackathon, please get in touch with the organizers at guix-birthday-event@gnu.org and we’ll be happy to make room for you. We’re also looking for people to help with logistics, in particular during the event; please let us know if you can give a hand.

Whether you’re a scientist, an enthusiast, or a power user, we’d love to see you in September. Stay tuned for updates!

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the Hurd or the Linux kernel, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, AArch64 and POWER9 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

Categories: FLOSS Project Planets

Real Python: The subprocess Module: Wrapping Programs With Python

Planet Python - Mon, 2022-06-13 10:00

If you’ve ever wanted to simplify your command-line scripting or use Python alongside command-line applications—or any applications for that matter—then the Python subprocess module can help. From running shell commands and command-line applications to launching GUI applications, the Python subprocess module can help.

By the end of this tutorial, you’ll be able to:

  • Understand how the Python subprocess module interacts with the operating system
  • Issue shell commands like ls or dir
  • Feed input into a process and use its output
  • Handle errors when using subprocess
  • Understand the use cases for subprocess by considering practical examples

In this tutorial, you’ll get a high-level mental model for understanding processes, subprocesses, and Python before getting stuck into the subprocess module and experimenting with an example. After that, you’ll start exploring the shell and learn how you can leverage Python’s subprocess with Windows and UNIX-based shells and systems. Specifically, you’ll cover communication with processes, pipes and error handling.

Note: subprocess isn’t a GUI automation module or a way to achieve concurrency. For GUI automation, you might want to look at PyAutoGUI. For concurrency, take a look at this tutorial’s section on modules related to subprocess.

Once you have the basics down, you’ll be exploring some practical ideas for how to leverage Python’s subprocess. You’ll also dip your toes into advanced usage of Python’s subprocess by experimenting with the underlying Popen() constructor.

Join Now: Click here to join the Real Python Newsletter and you'll never miss another Python tutorial, course update, or post.

Processes and Subprocesses

First off, you might be wondering why there’s a sub in the Python subprocess module name. And what exactly is a process, anyway? In this section, you’ll answer these questions. You’ll come away with a high-level mental model for thinking about processes. If you’re already familiar with processes, then you might want to skip directly to basic usage of the Python subprocess module.

Processes and the Operating System

Whenever you use a computer, you’ll always be interacting with programs. A process is the operating system’s abstraction of a running program. So, using a computer always involves processes. Start menus, app bars, command-line interpreters, text editors, browsers, and more—every application comprises one or more processes.

A typical operating system will report hundreds or even thousands of running processes, which you’ll get to explore shortly. However, central processing units (CPUs) typically only have a handful of cores, which means that they can only run a handful of instructions simultaneously. So, you may wonder how thousands of processes can appear to run at the same time.

In short, the operating system is a marvelous multitasker—as it has to be. The CPU is the brain of a computer, but it operates at the nanosecond timescale. Most other components of a computer are far slower than the CPU. For instance, a magnetic hard disk read takes thousands of times longer than a typical CPU operation.

If a process needs to write something to the hard drive, or wait for a response from a remote server, then the CPU would sit idle most of the time. Multitasking keeps the CPU busy.

Part of what makes the operating system so great at multitasking is that it’s fantastically organized too. The operating system keeps track of processes in a process table or process control block. In this table, you’ll find the process’s file handles, security context, references to its address spaces, and more.

The process table allows the operating system to abandon a particular process at will, because it has all the information it needs to come back and continue with the process at a later time. A process may be interrupted many thousands of times during execution, but the operating system always finds the exact point where it left off upon returning.

An operating system doesn’t boot up with thousands of processes, though. Many of the processes you’re familiar with are started by you. In the next section, you’ll look into the lifetime of a process.

Process Lifetime

Think of how you might start a Python application from the command line. This is an instance of your command-line process starting a Python process:

The process that starts another process is referred to as the parent, and the new process is referred to as the child. The parent and child processes run mostly independently. Sometimes the child inherits specific resources or contexts from the parent.

As you learned in Processes and the Operating System, information about processes is kept in a table. Each process keeps track of its parent, which allows the process hierarchy to be represented as a tree. You’ll be exploring your system’s process tree in the next section.
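You can see this parent-child bookkeeping from Python itself. In the standard-library sketch below, a throwaway child process reports its parent's process ID, which matches the PID of the Python process that spawned it:

```python
import os
import subprocess
import sys

# Spawn a child Python process that prints its own pid and its parent's pid.
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.getpid(), os.getppid())"],
    capture_output=True,
    text=True,
)
child_pid, parent_pid = map(int, child.stdout.split())

print("parent pid recorded by the child:", parent_pid)
print("our own pid:", os.getpid())
print(parent_pid == os.getpid())  # the child's recorded parent is this process
```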

Note: The precise mechanism for creating processes differs depending on the operating system. For a brief overview, the Wikipedia article on process management has a short section on process creation.

For more details about the Windows mechanism, check out the win32 API documentation page on creating processes.

On UNIX-based systems, processes are typically created by using fork() to copy the current process and then replacing the child process with one of the exec() family of functions.

The parent-child relationship between a process and its subprocess isn’t always the same. Sometimes the two processes will share specific resources, like inputs and outputs, but sometimes they won’t. Sometimes child processes live longer than the parent. A child outliving the parent can lead to orphaned or zombie processes, though more discussion about those is outside the scope of this tutorial.

When a process has finished running, it’ll usually end. Every process, on exit, should return an integer. This integer is referred to as the return code or exit status. Zero is synonymous with success, while any other value is considered a failure. Different integers can be used to indicate the reason why a process has failed.
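You can observe return codes directly with Python's subprocess module. The child processes below are throwaway examples that exit with status codes we choose:

```python
import subprocess
import sys

# One child that exits cleanly and one that exits with a failure status.
ok = subprocess.run([sys.executable, "-c", "raise SystemExit(0)"])
bad = subprocess.run([sys.executable, "-c", "raise SystemExit(3)"])

print(ok.returncode)   # 0: success
print(bad.returncode)  # non-zero: failure (3 here, by our choice)
```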

Read the full article at https://realpython.com/python-subprocess/ »

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

PyCon: PyCon US 2022 Transparency Report

Planet Python - Mon, 2022-06-13 09:16
With a return to in-person events, PyCon US has taken care to maintain excellence in the safety of our events, both through our Code of Conduct and our Health and Safety Guidelines. A key piece of enforcing these is the transparency of their outcomes. Below, you’ll find details on the reports that we received for Code of Conduct violations as well as self-reported cases of COVID-19.

Code of Conduct

PyCon US strives to provide a safe and inclusive environment for all attendees. Our Code of Conduct has been put in place as a guideline to ensure that community members feel safe during the event.

This year we continued our efforts to improve our Code of Conduct (CoC) response and reporting by offering staff, volunteers, and community members an opportunity to participate in a CoC training. In having more people trained, we established a more robust and diverse team to accept reports and respond to incidents both at in-person and online events.

We have prepared the following to help the community understand what kind of incident reports we received and how the staff responded during PyCon US 2022.
Summary of Reported Incidents

There were a total of 7 reports, all made via email. Note that all violations that occurred during the conference were reported via our Code of Conduct email address, and thus mask violations are also included here:
  • 3 reports regarding the mask requirement where vendors, staff, or attendees were not following the guidelines.
    • 1 report that a vendor's staff were not following the mask requirement in the session rooms.
      • The vendor followed up with specific staff and required them to follow the guidelines or be sent home.
    • 2 emails advised of a location where the mask requirement was not being followed. 
      • Our staff went to that location to address the individuals
  • 1 report of inappropriate social media direct message. The reporter asked that this not be a formal report but as an informational report should there be more.
    • No action was taken
  • 1 report of verbiage on an open space title that was inappropriate.
    • Staff went to the open space board and removed the inappropriate card and replaced it with a shorter title version with a note as to why it was changed. The author of the card was not identified to follow-up.
  • 2 reports of inappropriate comments during an event.
    • 1 - Staff addressed the report with the presenter who made the comment. An apology email was sent to the reporter and a public apology was made by the presenter.
    • 2 - Staff addressed the report with the presenter who made the comments. The necessity of more thoughtful approaches to addressing an international audience was made clear to the individual.
Health and Safety Guidelines

We took the health and safety of those who chose to attend PyCon US in person very seriously. The mask requirement, vaccine verification process, and social-distance seating in all our event rooms were part of our efforts to provide a safe event.

Our staff actively monitored our spaces and reminded attendees of the mask requirement. We made sure our meal setups were well spaced and provided adequate seating for everyone to eat while seated.

Summary of Reported Cases

There were reports directly to our staff from 12 people who tested positive for COVID-19 either during or following the event. Our post-conference survey also asked about COVID-19 status: of 219 responses, 16 reported that they tested positive. It cannot be determined whether any of the 16 were also part of the 12 who contacted us directly. We are also unable to determine whether exposure occurred within the conference center space itself or at external events, parties, and gatherings around the city.
Thank you to all those who attended PyCon US 2022 and followed the guidelines put in place!
Categories: FLOSS Project Planets

Python for Beginners: Index of Minimum Element in a List in Python

Planet Python - Mon, 2022-06-13 09:00

We use lists in a python program to store different types of objects when we need random access. In this article, we will discuss different ways to find the index of the minimum element in a list in python.

Table of Contents
  1. Index of Minimum Element in a List Using for Loop
    1. The len() Function in Python
    2. The range() Function in Python
  2. Index of Minimum Element in a List Using the min() Function and index() Method
    1. The min() Function in Python
    2. The index() Method in Python
  3. Index of Minimum Element in a List Using Numpy Module
  4. Index of the Minimum Element in a List in Case of Multiple Occurrences
    1. Index of Minimum Element in a List Using the min() Function and For Loop
    2. Index of Minimum Element in a List Using the min() Function and List Comprehension
    3. Index of Minimum Element in a List Using min() Function and enumerate() Function
  5. Conclusion
Index of Minimum Element in a List Using for Loop

We use a for loop in python to iterate over the elements of a container object like a list. To find the index of the minimum element in a list using a for loop in python, we can use the len() function and the range() function.

The len() Function in Python

The len() function in python is used to find the length of a collection object like a list or tuple. It takes the container object like a list as its input argument and returns the length of the collection object after execution. You can observe this in the following example.

myList = [1, 2, 23, 32, 12, 44, 34, 55, 46, 21, 12]
print("The list is:", myList)
list_len = len(myList)
print("Length of the list is:", list_len)

Output:

The list is: [1, 2, 23, 32, 12, 44, 34, 55, 46, 21, 12]
Length of the list is: 11

Here, we have passed a list with 11 elements to the len() function. After execution, it returns the same value.

The range() Function in Python

The range() function is used to generate a sequence of numbers in python. In the simplest case, the range() function takes a positive number N as an input argument and returns a sequence of numbers containing numbers from 0 to N-1. You can observe this in the following example.

sequence = range(11)
print("The sequence is:", sequence)

Output:

The sequence is: range(0, 11)

To find the index of minimum element in a list in python using the for loop, len() function, and the range() function, we will use the following steps.

  • First, we will calculate the length of the input list using the len() function. We will store the value in a variable list_len.
  • After calculating the length of the list, we will create a sequence of numbers from 0 to list_len-1 using the range() function. 
  • Now, we will define a variable min_index and assign it the value 0 assuming that the minimum element of the list is present at index 0.
  • After creating the sequence, we will iterate over the numbers in the sequence using a for loop. While iteration, we will access the elements in the input list at each index.
  • At each index, we will check if the element at the current index is less than the element at the min_index index in the list. If the current element is less than the element at min_index, we will update min_index to the current index.

After execution of the for loop, you will get the index of the minimum element in the list in the min_index variable. You can observe this in the following example.

myList = [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
print("The list is:", myList)
min_index = 0
list_len = len(myList)
for index in range(list_len):
    if myList[index] < myList[min_index]:
        min_index = index
print("Index of the minimum element is:", min_index)

Output:

The list is: [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
Index of the minimum element is: 3

In the above approach, you will get the leftmost index of the minimum element if there are multiple occurrences of the element in the list. To obtain the rightmost index at which the minimum element is present, you can use the less than or equal to operator instead of less than operator while comparing elements of the list. You can observe this in the following example.

myList = [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
print("The list is:", myList)
min_index = 0
list_len = len(myList)
for index in range(list_len):
    if myList[index] <= myList[min_index]:
        min_index = index
print("Index of the minimum element is:", min_index)

Output:

The list is: [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
Index of the minimum element is: 11

Index of Minimum Element in a List Using the min() Function and index() Method

Instead of iterating over the entire list with a for loop, we can use the min() function and the index() method to find the index of the minimum element in a list in python.

The min() Function in Python

The min() function is used to find the minimum element in a container object like a list, tuple, or set. The min() function takes a collection object like a list as its input argument. After execution, it returns the minimum element in the list. You can observe this in the following example.

myList = [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
print("The list is:", myList)
min_val = min(myList)
print("The minimum value is:", min_val)

Output:

The list is: [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
The minimum value is: 1

The index() Method in Python

The index() method is used to find the position of an element in a list. When invoked on a list, the index() method takes an element as its input argument. After execution, it returns the index of the first occurrence of the element. For instance, we can find the index of element 23 in the list as shown below.

myList = [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
print("The list is:", myList)
index = myList.index(23)
print("Index of 23 is:", index)

Output:

The list is: [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
Index of 23 is: 2

As noted above, if there are multiple occurrences of a single element, the index() method only returns the leftmost index of the element. If we had passed element 1 as the input argument to the index() method, the output would have been 3 even though element 1 is also present at index 11.

If the element given in the input argument is not present in the list, the index() method raises a ValueError exception saying that the element is not present in the list.  You can observe this in the following example.

myList = [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
print("The list is:", myList)
index = myList.index(105)
print("Index of 105 is:", index)

Output:

The list is: [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
Traceback (most recent call last):
  File "/home/aditya1117/PycharmProjects/pythonProject/string12.py", line 3, in <module>
    index = myList.index(105)
ValueError: 105 is not in list

Here, we have passed 105 as an input argument to the index() method. However, 105 is not present in the list. Therefore, the program runs into the ValueError exception showing that 105 is not present in the list.

To find the index of the minimum element in a list using the min() function and the index() function, we will use the following steps.

First, we will find the minimum element in the list using the min() function. We will store the value in a variable min_val.

After finding the minimum value in the list, we will invoke the index() method on the list with min_val as its input argument. After execution, the index() method will return the index of the minimum element in the list. You can observe this in the following example.

myList = [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
print("The list is:", myList)
min_val = min(myList)
index = myList.index(min_val)
print("Index of minimum value is:", index)

Output:

The list is: [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
Index of minimum value is: 3

In case of multiple occurrences of the minimum element, this approach can only find its leftmost index. If you want to obtain the rightmost index of the minimum element, you can use the approach with a for loop and the len() function discussed in the previous section.
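
For completeness, here is one way to sketch the rightmost-index approach mentioned above, using a for loop, the len() function, and the less than or equal to operator. The variable names are our own; the code from the earlier section may differ in its details.

```python
myList = [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
min_index = 0
for index in range(len(myList)):
    # Using <= instead of < updates min_index on every new
    # occurrence of the minimum, so the last occurrence wins.
    if myList[index] <= myList[min_index]:
        min_index = index
print("Rightmost index of minimum value is:", min_index)
```

Here, the minimum element 1 occurs at indices 3 and 11, so the loop ends with min_index equal to 11.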

Index of Minimum Element in a List Using Numpy Module

The numpy module has been designed to manipulate numbers and arrays efficiently. We can also use it to find the index of the minimum element in a list in Python.

The numpy module provides us with the argmin() method to find the index of the minimum element in a NumPy array. The argmin() method, when invoked on a numpy array, returns the index of the minimum element in the array. 

To obtain the index of the minimum element in a list using the argmin() method, we will first convert the list into a numpy array. For this, we will use the array() function.

The array() function takes a list as its input argument and returns a numpy array. You can observe this in the following example.

import numpy

myList = [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
print("The list is:", myList)
array = numpy.array(myList)
print("The array is:", array)

Output:

The list is: [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
The array is: [11 2 23 1 32 12 44 34 55 46 21 1 12]

After converting the list to a numpy array, we will invoke the argmin() method on the array. After execution, the argmin() method will return the index of the minimum element.

import numpy

myList = [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
print("The list is:", myList)
array = numpy.array(myList)
min_index = array.argmin()
print("Index of minimum element is:", min_index)

Output:

The list is: [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
Index of minimum element is: 3

When there are multiple occurrences of the minimum element, the argmin() method returns the leftmost index. In the above example, you can see that it returns 3 even though the minimum element 1 is also present at index 11.
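
If you already have the data in a numpy array and need every occurrence of the minimum rather than just the leftmost one, numpy.where() can be combined with the minimum value. This is a sketch of that idea; it is not one of the approaches covered in the sections that follow.

```python
import numpy

myList = [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
array = numpy.array(myList)
# numpy.where() returns a tuple of index arrays, one per dimension;
# for a 1-D array we only need the first entry.
min_indices = numpy.where(array == array.min())[0]
print("Indices of minimum element are:", min_indices)
```

For the list above, this prints the indices 3 and 11, matching the pure-Python approaches discussed later.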

Index of the Minimum Element in a List in Case of Multiple Occurrences

Sometimes, we might need to find all the occurrences of the minimum element in a list. In the following sections, we will discuss different ways to find the index of the minimum element in a list in case of multiple occurrences.

Index of Minimum Element in a List Using the min() Function and For Loop

When there are multiple occurrences of the minimum element in the list, we can use the min() function and the for loop to obtain the indices of the minimum element. For this, we will use the following steps.

  • First, we will create an empty list named min_indices to store the indices of the minimum element. 
  • Then, we will find the length of the input list using the len() function. We will store the length in a variable list_len.
  • After obtaining the length of the list, we will create a sequence of numbers from 0 to list_len-1 using the range() function.
  • Next, we will find the minimum element in the list using the min() function. 
  • After finding the minimum element, we will iterate over the sequence of numbers using a for loop. While iteration, we will check if the element at the current index in the list is equal to the minimum element.
    • If we find an element that is equal to the minimum element, we will append its index to the min_indices list using the append() method. The append() method, when invoked on the min_indices list, will take the index as its input argument. After execution, it will add the index to the min_indices list as an element.
    • If the number at the current index is not equal to the minimum element, we will move to the next element.

After execution of the for loop, we will get the indices of the minimum element in the list. You can observe this in the following example.

myList = [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
print("The list is:", myList)
min_val = min(myList)
min_indices = []
list_len = len(myList)
sequence = range(list_len)
for index in sequence:
    if myList[index] == min_val:
        min_indices.append(index)
print("Indices of minimum element are:", min_indices)

Output:

The list is: [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
Indices of minimum element are: [3, 11]

Index of Minimum Element in a List Using the min() Function and List Comprehension

List comprehension is used to create a new list using the elements of an existing list by imposing some conditions on the elements. The general syntax of list comprehension is as follows.

new_list = [f(element) for element in existing_list if condition]

Here, 

  • new_list is the list created after the execution of the statement.
  • existing_list is the input list.
  • element represents one of the elements in existing_list. This variable is used to iterate over existing_list.
  • condition is a condition imposed on the element or f(element).
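
For example, the following statement, which uses names of our own choosing rather than anything from this article, builds a list of the squares of the even numbers in an existing list.

```python
existing_list = [1, 2, 3, 4, 5, 6]
# Here f(element) is element ** 2, and the condition keeps only even elements.
new_list = [element ** 2 for element in existing_list if element % 2 == 0]
print(new_list)  # [4, 16, 36]
```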

To find the indices of the minimum element in a list using list comprehension in Python, we will use the following steps.

  • First, we will find the length of the input list using the len() function. We will store the length in a variable list_len.
  • After obtaining the length of the list, we will create a sequence of numbers from 0 to list_len-1 using the range() function.
  • Next, we will find the minimum element in the list using the min() function. 
  • After finding the minimum element, we will use list comprehension to obtain the list of indices of the minimum element. 
  • In list comprehension, 
    • We will use the sequence of numbers in place of the existing_list.
    • The element will represent the elements of the sequence of numbers.
    • f(element) will be equal to the element.
    • In place of condition, we will use the condition that the value in the input list at index element is equal to the minimum element.

After execution of the statement, we will get new_list containing the indices of the minimum element in the list as shown below.

myList = [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
print("The list is:", myList)
min_val = min(myList)
list_len = len(myList)
sequence = range(list_len)
new_list = [index for index in sequence if myList[index] == min_val]
print("Indices of minimum element are:", new_list)

Output:

The list is: [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
Indices of minimum element are: [3, 11]

Index of Minimum Element in a List Using min() Function and enumerate() Function

Instead of using the len() function and the range() function, we can use the enumerate() function with the min() function to find the indices of the minimum element in a list.

The enumerate() Function in Python

The enumerate() function is used to enumerate the elements of a container object such as a list or tuple.

The enumerate() function takes a container object like a list as its input argument. After execution, it returns an enumerate object, which we can convert into a list of tuples using the list() function. Each tuple in the list contains two elements. The first element of the tuple is the index assigned to an element. The second element is the corresponding element from the original list that is given as input to the enumerate() function. You can observe this in the following example.

myList = [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
print("The list is:", myList)
enumerated_list = list(enumerate(myList))
print("Enumerated list is:", enumerated_list)

Output:

The list is: [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
Enumerated list is: [(0, 11), (1, 2), (2, 23), (3, 1), (4, 32), (5, 12), (6, 44), (7, 34), (8, 55), (9, 46), (10, 21), (11, 1), (12, 12)]

To find the indices of the minimum element in a list using the enumerate() function, we will use the following steps.

  • First, we will create an empty list named min_indices to store the indices of the minimum element. 
  • After that, we will find the minimum element in the input list using the min() function. 
  • Then, we will create the enumerated list from the input list using the enumerate() function.
  • After obtaining the enumerated list, we will iterate over the tuples in the enumerated list using a for loop. 
  • While iterating over the tuples, we will check if the element in the current tuple is equal to the minimum element. 
  • If the element in the current tuple is equal to the minimum element, we will append the current index to min_indices using the append() method. Otherwise, we will move to the next tuple in the enumerated list.

After execution of the for loop, we will get the indices of the minimum element in the min_indices list. You can observe this in the following example.

myList = [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
print("The list is:", myList)
enumerated_list = list(enumerate(myList))
min_indices = []
min_element = min(myList)
for element_tuple in enumerated_list:
    index = element_tuple[0]
    element = element_tuple[1]
    if element == min_element:
        min_indices.append(index)
print("Indices of minimum element are:", min_indices)

Output:

The list is: [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
Indices of minimum element are: [3, 11]

Instead of using a for loop to iterate over the tuples in the enumerated list, we can use list comprehension to find the indices of the minimum element in the list as shown below.

myList = [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
print("The list is:", myList)
enumerated_list = list(enumerate(myList))
min_element = min(myList)
min_indices = [index for (index, element) in enumerated_list if element == min_element]
print("Indices of minimum element are:", min_indices)

Output:

The list is: [11, 2, 23, 1, 32, 12, 44, 34, 55, 46, 21, 1, 12]
Indices of minimum element are: [3, 11]

Conclusion

In this article, we have discussed different ways to find the index of the minimum element in a list in Python. We also discussed finding all the indices of the minimum element in case of multiple occurrences. 

To find the index of the minimum element in a list in Python, I would suggest you use the min() function with the index() method. This approach gives the leftmost index of the minimum element in case of multiple occurrences. 

If you need to find the rightmost index of the minimum element in a list in Python, you can use the approach using the for loop and the less than or equal to operator.

In case of multiple occurrences, if you need to obtain all the indices of the minimum element in a list in Python, you can either use the approach with the for loop or the approach with the enumerate() function. Both have almost the same execution efficiency.

I hope you enjoyed reading this article. To learn more about Python programming, you can read this article on how to remove all occurrences of a character in a list in Python. You might also like this article on how to check if a Python string contains a number.

The post Index of Minimum Element in a List in Python appeared first on PythonForBeginners.com.

Categories: FLOSS Project Planets

Jacob Rockowitz: Baking a Recipe using the Schema.org Blueprints module for Drupal

Planet Drupal - Mon, 2022-06-13 08:58

In my last post, I introduced the Schema.org Blueprints module to the Drupal community. At DrupalCamp NJ, I shared a presentation about the module. The blog post and presentation managed to collect a bunch of helpful feedback, which encouraged me to simplify the module's dependencies and relationships. One of the biggest improvements is that JSON-LD is now fully supported with previews and dedicated API support. The entire module is still under active development. As you read this new post, new features are being added (and removed).

The reality is the Schema.org Blueprints module is turning into a pretty large undertaking. I will need help from individuals and organizations in the Drupal community to launch a stable release. At the same time, by not having an alpha release, I can frame out most of the core APIs and challenges quickly while making changes as needed. Leveraging a Schema.org-first approach to modeling content in Drupal has required a lot of thought and strategy. The sustainability of the module is going to need an equal amount of effort and a dedicated blog post. For now, I need to build up the Drupal community's interest in the module by creating content and demos that showcase the Schema.org Blueprints module in action by baking a recipe.

Recipes are among the most shared content types on the web and probably the most shared content off the web. In the Drupal community, we have chosen to provide a recipe website, named Umami, as our out-of-the-box demo site. Umami's recipe content type is based on https://Schema.org/Recipe. Umami's recipe...Read More

Categories: FLOSS Project Planets
