FLOSS Project Planets

Evolving Web: Evolving Web Wins Pantheon Award for Social Impact

Planet Drupal - Tue, 2024-05-28 13:23

We are thrilled to announce that Evolving Web has been honored with the Social Impact Award in the inaugural Pantheon Partner Awards for our work on the Planned Parenthood Direct website.

The winners were announced at the Pantheon Partner dinner, held during DrupalCon Portland on May 6, 2024. Congratulations to the other winners who took to the stage with us: 

  • Elevated Third – Partner of the Year Award
  • WebMD Ignite – Innovation Award
  • HoundER – Rookie of the Year Award
  • Forum One – Customer First Award
  • Danny Pfeiffer – Friends of Pantheon Partners Award

Pantheon’s Partner Awards recognize the outstanding contributions of digital agencies that drive positive change. We’re proud to be acknowledged for our role in the Planned Parenthood Direct project, which supports reproductive rights and enhances access to reproductive and sexual healthcare. Our work on the project demonstrates our commitment to creating impact through user-centric digital experiences.

A Mission-Driven Collaboration

In the U.S., reproductive and sexual health care services vary from state to state. Planned Parenthood Direct (PPD) aims to provide trusted care from anywhere by offering “on-the-go” services. We collaborated with PPD to build a secure, mobile-first website that informs users of available services in their state. The site also encourages users to download the PPD app, which they can use to order birth control.

Designing for Impact and Inclusion

Our team undertook the challenge of creating a highly informative, accessible website that appeals to a younger audience.

  • We created dedicated pages for each state, ensuring they’re easy for PPD to update and optimized for search engines.
  • We created a new visual brand identity that incorporates bold design principles for a youthful, reassuring, and non-stigmatizing user experience.
  • Our mobile-first approach ensured that the site meets the needs of an audience that prefers mobile devices.
  • We also followed accessibility best practices to ensure a user-friendly experience for all, including users with disabilities. 

Protecting Users with Exceptional Security

Security was a paramount concern, given the political climate surrounding reproductive rights. We ensured a highly secure online experience using a decoupled architecture with Next.js for the front-end and Drupal 10 for the back-end. Hosting on Pantheon added additional layers of security, including HTTPS certificates and DDoS protection.


Setting PPD Up For Success & Growth 

Our work on the Planned Parenthood Direct website included the development of 17 custom components and 14 content types in Layout Builder. This empowers PPD’s content editors to create flexible, engaging, and visually appealing layouts. The result is streamlined content creation and management, allowing PPD to maintain and grow their website effectively.

Outstanding Results & Continued Commitment

The new Planned Parenthood Direct website has been instrumental in continuing PPD’s mission to support human rights and ensure access to sexual and reproductive healthcare.

A big thank you to Pantheon for recognizing our efforts, and to Planned Parenthood Direct for trusting us with this important project. We’re honoured to have partnered with you both.

As we celebrate this award, we’re reminded of the importance of our work and the impact it has on communities. We look forward to future opportunities to make a difference.

Partner with us to turn your vision into a powerful digital experience that drives change. 

+ more awesome articles by Evolving Web
Categories: FLOSS Project Planets

Go Deh: Recreating the CVM algorithm for estimating distinct elements gives problems

Planet Python - Tue, 2024-05-28 12:15

 

Someone at work posted a link to this Quanta Magazine article. It describes a novel, and seemingly straightforward, way to estimate the number of distinct elements in a data stream.

Quanta describes the algorithm, and as an example gives "counting the number of distinct words in Hamlet".

Following Quanta

I looked at the description and decided to follow their text. They carefully described each round of the algorithm, which I coded up, and then I looked for the generalizations and implemented a loop over all items in the stream...

It did not work! I got silly numbers. I could download Hamlet, split it into words (around 32,000), do len(set(words)) to get the exact number of distinct words (around 7,000), then run it through the algorithm and get a stupid result with tens of digits for the estimated number of distinct words.
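For reference, the exact count takes only a few lines to compute; here's a minimal sketch (the Project Gutenberg URL and the crude word-splitting regex are my assumptions, so exact counts will vary with the tokenization used):

import re
import urllib.request

# Hypothetical source: any plain-text copy of Hamlet will do
url = "https://www.gutenberg.org/cache/epub/1524/pg1524.txt"
text = urllib.request.urlopen(url).read().decode("utf-8")

words = re.findall(r"[a-z']+", text.lower())  # crude word splitter
print(len(words), len(set(words)))  # total words vs. exact distinct words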
I re-checked my implementation of the Quanta-described algorithm and couldn't see any mistake, but I had originally noticed a link to the original paper. I did not follow it at first, as original papers can be heavily into maths notation, and I prefer reading algorithms described in code/pseudocode.

I decided to take a look at the original.

The CVM Original Paper

I scanned the paper.

I read the paper.

I looked at Algorithm 1 as a probable candidate to decipher into Python, but the description was cryptic. Here's that description, taken from the paper:

AI To the rescue!?

I had a brainwave💡: let's chuck it at two AIs and see what they do. I had Gemini and Copilot to hand and asked each of them to express Algorithm 1 as Python. Gemini did something, and Copilot eventually did something too, but I first had to open the page in Microsoft Edge.
There followed hours of me reading and cross-comparing between the algorithm and the AIs. If I did not understand where something came from, I would ask the generating AI; if I found an error I would first (and second, and...) try to get the AI to make a fix I suggested.

At this stage I was also trying to get a feel for how the AIs could help me (now way past what I thought the algorithm should be, just seeing what it would take to get those AIs to cross the T's and dot the I's on a good solution).

Not a good use of time! I now know that asking questions to update one of the 20 to 30 lines of the Python function might fix that line, but unfix another line you had fixed before. Code from the AI does not have line numbers, making it difficult to state what needs changing, and where. They can suggest type hints and create the beginnings of docstrings, but, for example, they pulled out the wrong authors for the name of the algorithm.
In line 1 of the algorithm, the initialisation of thresh is clearly shown, I thought, but both AIs had difficulty getting the Python right. Eventually I cut-n-pasted the text into each AI, where they confidently said "Of course...", made a change, and then I had to re-check for any other changes.

My Code

I first created this function:

# Imports used by this and the following snippets
import math
import random
from collections.abc import Collection, Iterable
from typing import Any


def F0_Estimator(stream: Collection[Any], epsilon: float, delta: float) -> float:
    """
    ...
    """
    p = 1
    X = set()
    m = len(stream)
    thresh = math.ceil(12 / (epsilon ** 2) * math.log(8 * m / delta))

    for item in stream:
        X.discard(item)
        if random.random() < p:
            X.add(item)
        if len(X) == thresh:
            X = {x_item for x_item in X
                 if random.random() < 0.5}
            p /= 2
    return len(X) / p

I tested it with Hamlet data and it made OK estimates.

Elated, I took a break.

Hacker News

The next evening I decided to do a search to see If anyone else was talking about the algorithm and found a thread on Hacker News that was right up my street. People were discussing those same problems found in the Quanta Article - and getting similar ginormous answers. They had one of the original Authors of the paper making comments! And others had created code from the actual paper and said it was also easier than the Quanta description.

The author mentioned that no less than Donald Knuth had taken an interest in their algorithm and had noted that the expression starting `X = ...`, four lines from the end, could, theoretically, make no change to X; the solution was to encase the assignment in a while loop that only exits once len(X) < thresh.

Code update

I decided to add that change:

def F0_Estimator(stream: Collection[Any], epsilon: float, delta: float) -> float:
    """
    Estimates the number of distinct elements in the input stream.

    This function implements the CVM algorithm for the problem of
    estimating the number of distinct elements in a stream of data.

    The stream object must support an initial call to __len__

    Parameters:
    stream (Collection[Any]): The input stream as a collection of hashable
        items.
    epsilon (float): The desired relative error in the estimate. It must be in
        the range (0, 1).
    delta (float): The desired probability of the estimate being within the
        relative error. It must be in the range (0, 1).

    Returns:
    float: An estimate of the number of distinct elements in the input stream.
    """
    p = 1
    X = set()
    m = len(stream)
    thresh = math.ceil(12 / (epsilon ** 2) * math.log(8 * m / delta))

    for item in stream:
        X.discard(item)
        if random.random() < p:
            X.add(item)
        if len(X) == thresh:
            while len(X) == thresh:  # Force a change
                X = {x_item for x_item in X
                     if random.random() < 0.5}  # Random, so could do nothing
            p /= 2
    return len(X) / p


thresh

In the code above, the variable thresh (threshold), named from Algorithm 1, is used in the Quanta article to describe the maximum storage available for keeping items from the stream that have been seen before. You must know the length of the stream, m, as well as epsilon and delta, to calculate thresh.

If you were to have just the stream and thresh as the arguments, you could return both the estimate of the number of distinct items in the stream and the count of the total number of elements in the stream. Epsilon could then be calculated from the numbers we now know.

def F0_Estimator2(stream: Iterable[Any],
                  thresh: int,
                  ) -> tuple[float, int]:
    """
    Estimates the number of distinct elements in the input stream.

    This function implements the CVM algorithm for the problem of
    estimating the number of distinct elements in a stream of data.

    The stream object does NOT have to support a call to __len__

    Parameters:
    stream (Iterable[Any]): The input stream as an iterable of hashable
        items.
    thresh (int): The max threshold of stream items used in the estimation.

    Returns:
    tuple[float, int]: An estimate of the number of distinct elements in the
        input stream, and the count of the number of items in stream.
    """
    p = 1
    X = set()
    m = 0  # Count of items in stream

    for item in stream:
        m += 1
        X.discard(item)
        if random.random() < p:
            X.add(item)
        if len(X) == thresh:
            while len(X) == thresh:  # Force a change
                X = {x_item for x_item in X
                     if random.random() < 0.5}  # Random, so could do nothing
            p /= 2

    return len(X) / p, m


def F0_epsilon(thresh: int,
               m: int,
               delta: float = 0.05,  # 0.05 is 95%
               ) -> float:
    """
    Calculate the relative error in the estimate from F0_Estimator2(...)

    Parameters:
    thresh (int): The thresh value used in the call TO F0_Estimator2.
    m (int): The count of items in the stream FROM F0_Estimator2.
    delta (float): The desired probability of the estimate being within the
        relative error. It must be in the range (0, 1) and is usually 0.05
        to 0.01, (95% to 99% certainty).

    Returns:
    float: The calculated relative error in the estimate
    """
    return math.sqrt(12 / thresh * math.log(8 * m / delta))

Testing

def stream_gen(k: int = 30_000, r: int = 7_000) -> list[int]:
    "Create a randomised list of k ints of up to r different values."
    return random.choices(range(r), k=k)


def stream_stats(s: list[Any]) -> tuple[int, int]:
    length, distinct = len(s), len(set(s))
    return length, distinct


# %%
print("CVM ALGORITHM ESTIMATION OF NUMBER OF UNIQUE VALUES IN A STREAM")

stream_size = 2**18
reps = 5
target_uniques = 1
while target_uniques < stream_size:
    the_stream = stream_gen(stream_size + 1, target_uniques)
    target_uniques *= 4
    size, unique = stream_stats(the_stream)

    print(f"\n  Actual:\n    {size = :_}, {unique = :_}\n  Estimations:")

    delta = 0.05
    threshhold = 2
    print(f"    All runs using {delta = :.2f} and with estimate averaged from {reps} runs:")
    while threshhold < size:
        estimate, esize = F0_Estimator2(the_stream.copy(), threshhold)
        estimate = sum([estimate] +
                       [F0_Estimator2(the_stream.copy(), threshhold)[0]
                        for _ in range(reps - 1)]) / reps
        estimate = int(estimate + 0.5)
        epsilon = F0_epsilon(threshhold, esize, delta)
        print(f"      With {threshhold = :7_} -> "
              f"{estimate = :_}, +/-{epsilon*100:.0f}%"
              + (f" {esize = :_}" if esize != size else ""))
        threshhold *= 8

The algorithm generates an estimate based on random sampling, so I run it multiple times for the same input and report the mean estimate from those runs.

Sample output

 

CVM ALGORITHM ESTIMATION OF NUMBER OF UNIQUE VALUES IN A STREAM
  Actual:
    size = 262_145, unique = 1
  Estimations:
    All runs using delta = 0.05 and with estimate averaged from 5 runs:
      With threshhold =       2 -> estimate = 1, +/-1026%
      With threshhold =      16 -> estimate = 1, +/-363%
      With threshhold =     128 -> estimate = 1, +/-128%
      With threshhold =   1_024 -> estimate = 1, +/-45%
      With threshhold =   8_192 -> estimate = 1, +/-16%
      With threshhold =  65_536 -> estimate = 1, +/-6%

  Actual:
    ...

  Actual:
    size = 262_145, unique = 1_024
  Estimations:
    All runs using delta = 0.05 and with estimate averaged from 5 runs:
      With threshhold =       2 -> estimate = 16_384, +/-1026%
      With threshhold =      16 -> estimate = 768, +/-363%
      With threshhold =     128 -> estimate = 1_101, +/-128%
      With threshhold =   1_024 -> estimate = 1_018, +/-45%
      With threshhold =   8_192 -> estimate = 1_024, +/-16%
      With threshhold =  65_536 -> estimate = 1_024, +/-6%

  Actual:
    size = 262_145, unique = 4_096
  Estimations:
    All runs using delta = 0.05 and with estimate averaged from 5 runs:
      With threshhold =       2 -> estimate = 13_107, +/-1026%
      With threshhold =      16 -> estimate = 3_686, +/-363%
      With threshhold =     128 -> estimate = 3_814, +/-128%
      With threshhold =   1_024 -> estimate = 4_083, +/-45%
      With threshhold =   8_192 -> estimate = 4_096, +/-16%
      With threshhold =  65_536 -> estimate = 4_096, +/-6%

  Actual:
    size = 262_145, unique = 16_384
  Estimations:
    All runs using delta = 0.05 and with estimate averaged from 5 runs:
      With threshhold =       2 -> estimate = 0, +/-1026%
      With threshhold =      16 -> estimate = 15_155, +/-363%
      With threshhold =     128 -> estimate = 16_179, +/-128%
      With threshhold =   1_024 -> estimate = 16_986, +/-45%
      With threshhold =   8_192 -> estimate = 16_211, +/-16%
      With threshhold =  65_536 -> estimate = 16_384, +/-6%

  Actual:
    size = 262_145, unique = 64_347
  Estimations:
    All runs using delta = 0.05 and with estimate averaged from 5 runs:
      With threshhold =       2 -> estimate = 26_214, +/-1026%
      With threshhold =      16 -> estimate = 73_728, +/-363%
      With threshhold =     128 -> estimate = 61_030, +/-128%
      With threshhold =   1_024 -> estimate = 64_422, +/-45%
      With threshhold =   8_192 -> estimate = 64_760, +/-16%
      With threshhold =  65_536 -> estimate = 64_347, +/-6%

 Looks good!

Wikipedia

Another day, and I decided to start writing this blog post. I searched again and found the Wikipedia article on what it calls the Count-distinct problem.

Looking through it, I saw that it had this wrong description of the CVM algorithm:

The (or a?) problem with the Wikipedia entry is that it shows

p ← p/2

...within the while loop. You need an enclosing if |B| >= s around the while loop, with the assignment to p outside the while loop but inside this new if statement.
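In code form, the corrected structure looks something like this (a minimal sketch using the Wikipedia entry's B and s names for the buffer and threshold, and the random module as in the functions above):

if len(B) >= s:
    while len(B) >= s:  # One halving pass may, by chance, remove nothing
        B = {item for item in B if random.random() < 0.5}
    p /= 2  # Halve p once, only after the buffer has actually shrunk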

It's tough!

Both Quanta Magazine and whoever added the algorithm to Wikipedia got the algorithm wrong.

I've written around two hundred tasks on the site Rosettacode.org over more than a decade. Others had to read my descriptions and create code in their chosen language to implement those tasks. I have learnt from the feedback I got on talk pages to hone that craft. Details matter. Examples matter. Constructive feedback matters.

END.

 

Categories: FLOSS Project Planets

Real Python: Efficient Iterations With Python Iterators and Iterables

Planet Python - Tue, 2024-05-28 10:00

Python’s iterators and iterables are two different but related tools that come in handy when you need to iterate over a data stream or container. Iterators power and control the iteration process, while iterables typically hold data that you want to iterate over one value at a time.

Iterators and iterables are fundamental components of Python programming, and you’ll have to deal with them in almost all your programs. Learning how they work and how to create them is key for you as a Python developer.

In this video course, you’ll learn how to:

  • Create iterators using the iterator protocol in Python
  • Understand the differences between iterators and iterables
  • Work with iterators and iterables in your Python code
  • Use generator functions and the yield statement to create generator iterators
  • Build your own iterables using different techniques, such as the iterable protocol
  • Use the asyncio module and the await and async keywords to create asynchronous iterators
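
As a small taste of the first two points above, here's a minimal sketch showing a class that implements the iterator protocol alongside an equivalent generator function:

class Countdown:
    """Iterator protocol: __iter__ returns self, __next__ produces values."""

    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self

    def __next__(self):
        if self.current <= 0:
            raise StopIteration
        value = self.current
        self.current -= 1
        return value


def countdown(start):
    """The same behavior, written as a generator function using yield."""
    while start > 0:
        yield start
        start -= 1


print(list(Countdown(3)))  # [3, 2, 1]
print(list(countdown(3)))  # [3, 2, 1]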

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

Python Software Foundation: Thinking about running for the Python Software Foundation Board of Directors? Let’s talk!

Planet Python - Tue, 2024-05-28 06:27

PSF Board elections are a chance for the community to choose representatives to help the PSF create a vision for and build the future of the Python community. This year there are 3 seats open on the PSF board. Check out who is currently on the PSF Board. (Débora Azevedo, Kwon-Han Bae, and Tania Allard are at the end of their current terms.)

Office Hours Details

This year, the PSF Board is running Office Hours so you can connect with current members to ask questions and learn more about what being a part of the Board entails. There will be two Office Hour sessions:

  • June 11th, 4 PM UTC
  • June 18th, 12 PM UTC

Make sure to check what time that is for you. We welcome you to join the PSF Discord and navigate to the #psf-elections channel to participate in Office Hours. The server is moderated by PSF Staff and locked between office hours sessions. If you’re new to Discord, check out some Discord Basics to help you get started.

Who runs for the Board?

People who care about the Python community, who want to see it flourish and grow, and also have a few hours a month to attend regular meetings, serve on committees, participate in conversations, and promote the Python community. Check out our Life as Python Software Foundation Director video to learn more about what being a part of the PSF Board entails. We also invite you to review our Annual Impact Report for 2023 to learn more about the PSF mission and what we do.

Nomination info

You can nominate yourself or someone else. We encourage you to reach out to people before you nominate them to ensure they are enthusiastic about the potential of joining the Board. Nominations open on Tuesday, June 11th, 2:00 PM UTC, so you have a few weeks to research the role and craft a nomination statement. The nomination period ends on June 25th, 2:00 PM UTC.

Categories: FLOSS Project Planets

Robin Wilson: How to install the Python triangle package on an Apple Silicon Mac

Planet Python - Tue, 2024-05-28 05:53

I was recently trying to set up RasterVision on my Apple Silicon Mac (specifically an M1 MacBook Pro, but I’m pretty sure this applies to any Apple Silicon Mac). It all went fine until it came time to install the triangle package, when I got an error. The error output is fairly long, but the key part is at the end:

triangle/core.c:196:12: fatal error: 'longintrepr.h' file not found
        #include "longintrepr.h"
                 ^~~~~~~~~~~~~~~
1 error generated.
error: command '/usr/bin/clang' failed with exit code 1
[end of output]

It took me quite a bit of searching to find the answer (Google just isn’t very good at giving relevant results these days), but actually it turns out to be very simple. The latest version of triangle on PyPI doesn’t work on Apple Silicon, but the code in the Github repository does work, so you can install directly from Github with this command:

pip install git+https://github.com/drufat/triangle.git

and it should all work fine.
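If you install dependencies from a requirements.txt file, you can record the same workaround there so it survives environment rebuilds. This is a sketch using pip's "direct reference" syntax; pinning to a specific commit would make it more reproducible:

triangle @ git+https://github.com/drufat/triangle.git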

Once you’ve done this, install rastervision again and it should recognise that the triangle package is already installed and not try to install it again.

Categories: FLOSS Project Planets

Call for Papers – Qt World Summit 2025 in Munich

Planet KDE - Tue, 2024-05-28 04:57

 

Qt World Summit is back and bigger than ever! We are looking for speakers, collaborators, and industry thought leaders to share their expertise and thoughts at the upcoming Qt World Summit on May 6-7th, 2025 in Munich, Germany. 

*Please note we are looking for live talks only. 

Categories: FLOSS Project Planets

Specbee: How CKEditor 5 is transforming content authoring experience in Drupal 10

Planet Drupal - Tue, 2024-05-28 01:50
When the editing tools are intuitive, content creators can channel their energy into what truly counts: producing great content. User-friendly content management systems help them save time, reduce frustration, and streamline their editing process. Drupal strives to enhance user experience for both technical and non-technical users with every new update. With the latest version, Drupal 10, content authors can now focus on enhancing their productivity and creating better content for their audience. A standout feature in this release is CKEditor 5, now integrated into the core. This means it's available right out of the box!

In this blog, we're diving into the content editing powers of CKEditor 5 in Drupal 10. So buckle up, because we're about to take your content creation game to the next level.

Redefining Content Editing Experience with Drupal 10’s CKEditor 5

CKEditor 5 brings many new features and abilities compared to CKEditor 4, which not only streamline the content editing process but also give you the power to make your content more engaging and appealing. Adding links and media, creating tables, and more is now quicker and easier with CKEditor 5 in Drupal 10. Here's an overview of what the upgraded CKEditor module brings to the table:

  • Revamped WYSIWYG Editor - With a contemporary and user-centric interface, Drupal 10's latest WYSIWYG editor prioritizes intuitiveness, featuring enhanced toolbar options, a more adaptable layout, and a streamlined interface.
  • Streamlined Inline Editing - Edit content directly from the front end of your website without having to navigate to the back end.
  • Collaborative Work Features (Premium Feature) - Multiple users can now collaborate on the same content. With the collaboration features, you can track changes, comment on the content, check content revision history, and more!

Now, let's look in detail at the content editing benefits of the additional features in CKEditor 5.

Modern User Interface

Compared to the somewhat outdated interface of Drupal 7, Drupal 10 offers a sleek and intuitive user interface where you can streamline content editing workflows. While new users may find Drupal 7 overwhelming, Drupal 10 offers a more user-friendly UI, making content creation and editing more accessible and efficient. You now get a refined user experience with improvements to interface colors, icons, toolbar item mechanics, and the theme. You can select among three UI options:

  • Classic - A fixed toolbar that stays out of the way of content editing.
  • Balloon - A floating toolbar that allows you to edit content in any location.
  • Inline - Displays the toolbar when you focus the editor.

Media Widgets and Dedicated Toolbar

With CKEditor 5 in Drupal 10, you get enhanced media management with new media widgets and a dedicated toolbar. These tools provide a streamlined interface for adding and managing media content, making it easier to embed images, videos, and other media directly within content.

New Styles Dropdown

The new styles dropdown in CKEditor 5 allows content editors to apply predefined styles to text and elements seamlessly. This dropdown is integrated into the text editor toolbar, offering a user-friendly way to ensure consistent styling across content without requiring HTML or CSS knowledge.

Easy Tables with Quick Dropdown

CKEditor 5 in Drupal 10 simplifies table creation and management with an easy tables feature. This includes a quick dropdown menu within the text editor, enabling users to insert, customize, and format tables efficiently. This enhancement helps you maintain data organization and presentation quality.

Balloon Panels

Balloon panels are a new addition that provides contextual tooltips and editing options directly within the text editor. These panels appear as floating, interactive elements, making it easier to access relevant tools and information without navigating away from the content. Plus, they're much more intuitive and mobile-friendly.

Insert Links and Special Characters

When you select the “link” button on the toolbar, a balloon panel for inserting links appears, resembling the one used for adding alternative text. This panel features a clean and contemporary design, with a green checkmark to confirm the link entered in the “Link URL” field and a red cross to cancel the action.

The updated special characters dropdown allows content editors to insert various symbols, including special letters, mathematical symbols, currency signs, copyright symbols, trademark symbols, and more.

The Material Icons Module

Google's Material Icons collection offers a range of simple, contemporary icons. The Material Icons module allows you to choose from style families like Baseline, Outlined, Two-Tone, Round, and Sharp. The Baseline style is activated by default, but you can enable other styles in the module settings at Configuration > Content Authoring > Material Icons.

To include the Material Icons button in the CKEditor 5 toolbar, navigate to Configuration > Content authoring > Text formats and editors, select your preferred format, and drag the Material Icons button from “Available” to “Active.” Once added to the toolbar, you can search for icons by name, choose the style family from the dropdown, and apply optional classes. The autocomplete feature and a link to the full icon list simplify the process of finding the desired icons.

The Editor Advanced Link Module

The Editor Advanced Link module enriches the CKEditor 5 link dialog box with additional options for incorporating link attributes. Version 2.1.1 is compatible with CKEditor 5. Following installation, navigate to Configuration > Content authoring > Text formats and editors. Choose the input format and locate the CKEditor 5 plugins list. Activate the "Enhanced links" plugin and designate attributes such as:

  • ARIA label
  • Title
  • CSS classes
  • ID
  • Open in a new window (target attribute)
  • Link relationship

Activate these attributes by ticking the boxes, then save the configuration. These attributes then become accessible during the creation or modification of links within the content editor.

Better Lists Feature

The lists feature improves the creation and management of ordered and unordered lists. Enhanced list formatting options allow for more control over list appearance, making it easier to create structured and visually appealing content.

Autoformatting & Transformations

Autoformatting allows content editing without needing to use toolbar buttons. You can swiftly create lists or format text by using simple typing shortcuts. Transformations enable the automatic creation of symbols using shortcut text, like generating a copyright symbol by typing (C). You can also set up auto-correct rules using this feature.

TypeScript

The new CKEditor will soon offer official TypeScript support for its entire API. This will give developers several benefits, such as producing clean, high-quality, maintainable code and providing code autocompletion and type suggestions for CKEditor APIs.

Final Thoughts

Drupal 10 introduced a range of improvements, like an upgraded content editing experience with CKEditor 5, a more modular architecture, better performance, and scalability. For content creators, Drupal 10's CKEditor 5 is a game-changer. You can whip up killer user experiences quicker and slicker than ever before.

Thinking about migrating to Drupal 10? That's not just a good idea; it's a strategic move! Your audience will thank you for it with faster load times, smoother navigation, and a website packed with more features than a Texas barbecue.

Did you know we're Certified Drupal Migration Partners? Reach out to us today to find out how we can help you.
Categories: FLOSS Project Planets

Malayalam open font design competition announced

Planet KDE - Tue, 2024-05-28 00:52

Rachana Institute of Typography, in association with KaChaTaThaPa Foundation and Sayahna Foundation, is launching a Malayalam font design competition for students, professionals, and amateur designers.

Selected fonts will be published under Open Font License for free use.

It is not necessary to know the details of font technology; skills to design characters would suffice.

Timelines, regulations, prizes and more details are available at the below URLs.

English: https://sayahna.net/fcomp-en
Malayalam: https://sayahna.net/fcomp-ml

Registration

Interested participants may register at https://sayahna.net/fcomp

Last day for registration is 30th June 2024.

Categories: FLOSS Project Planets

Sahil Dhiman: A Late, Late Debconf23 Post

Planet Debian - Mon, 2024-05-27 14:00

After much procrastination, I have gotten around to completing my DebConf23 (DC23), Kochi, blog post. I kind of lost the original etherpad, started before DebConf23 for jotting things down, so I started afresh with whatever I could remember, months after the actual conference ended. Things might only be as accurate as my memory.

DebConf23, the 24th annual Debian Conference, happened in Infopark, Kochi, India, from 10th September to 17th September 2023. It was preceded by DebCamp from 3rd September to 9th September 2023.

The first formal bid to host DebConf in India was made during DebConf18 in Hsinchu, Taiwan by Raju Dev, but it didn’t come our way. At the next DebConf, DebConf19 in Curitiba, Brazil, with help and support from Sruthi, Utkarsh, and the whole team, India got the opportunity to host DebConf22, which eventually became DebConf23 for the reasons you all know.

I initially met the local team on the sidelines of DebConf20, which was also my first DebConf. DC20 introduced me to how things work in Debian. Having recently switched to Debian, the video team’s call-for-volunteers email pulled me in. Things stuck, and I kept hanging out and helping the local Indian DC team with various stuff. We did manage to organize multiple events leading up to DebConf23, including MiniDebConf India 2021 Online, MiniDebConf Palakkad 2022, MiniDebConf Tamil Nadu 2023, and DebUtsav Kochi 2023, which gave us quite a bit of experience and a good workout. Many local organizers from these conferences later joined various DebConf teams during the conference to help out.

For DebConf23, I was originally part of the publicity team, because that was my usual thing, but after a team redistribution exercise, Sruthi and Praveen moved me to the sponsorship team, as we didn’t have to do much publicity anyhow and sponsorship was one of those things I could get involved in remotely. The sponsorship team had to take care of raising funds by reaching out to sponsors, managing invoices, and fulfillment. Praveen joined the sponsorship team as well. We had help from the international sponsorship team, Anisa, Daniel, and various TOs, who took care of reaching out to international orgs, while we took care of reaching out to Indian organizations for sponsorship. It was a really proud moment when my present employer, Unmukti (makers of hopbox), came aboard as a Bronze sponsor. Fundraising did seem hit hard by the tech industry slowdown and layoffs, though; many of our yesteryear sponsors couldn’t sponsor.

We had biweekly local team meetings, which turned weekly as we neared the event. This was in addition to the biweekly global team meeting.

Pathu, DebConf23 mascot

To describe the venue: the conference happened in Infopark, Kochi, with the main conference hall being Athulya Hall, and food, accommodation, and two smaller halls in the Four Points Hotel, right outside Infopark. We got Athulya Hall as part of venue sponsorship from Infopark. The distance between the two was around 300 meters. Halls were named Anamudi, Kuthiran, and Ponmudi, after hills and mountain areas in the host state of Kerala. Other than Anamudi, the main hall, I couldn’t remember the names of the halls; I still can’t. Four Points was big and expensive, and we had, as expected, cost overruns. Due to how DebConf functions, an Indian university wasn’t suitable to host a conference of this scale.

Four Point's Infinity Pool at Night

I landed in Kochi on the first day of DebCamp, 3rd September. As usual, I met Abraham first, and the better part of the next hour was spent on meet and greet. It was my first IRL DebConf, so I met many old friends and new folks. I got a room to myself. Abraham lived nearby and hadn’t taken the accommodation, so I asked him to join; he finally joined from the second day onwards. All through the conference, room 928 became infamous for various reasons, and I had various roommates for company. During the DebCamp days, we would get up for breakfast, go back to sleep, and get active only past lunch, hacking and helping in the hack lab for the day, followed by fun late-night discussions and parties.

Nilesh, Chirag and Apple at DC23

The team even managed to arrange a press conference, and we got an opportunity to go to the Press Club, Ernakulam. Sruthi and Jonathan gave the speech and answered questions from journalists. The event got media coverage as a result.

Ernakulam Press Club

Every night, the team used to have 9 PM meetings for retrospection and planning for the next day, which were always dotted with new problems. Every day, we used to hijack the Silent Hacklab for the meeting and gently ask the only people there at the time to give us space.

DebConf in itself is a well-oiled machine. The network was brought up from scratch. The video team built the recording, audio mixing, live-streaming, editing, and transcoding infrastructure on site. A gaming rig served as router and gateway. We got dual internet connections: a sponsored 1 Gbps leased line from Kerala Vision and a paid 100 Mbps backup connection from a different provider. IPv6 was added through HE’s Tunnelbroker. Overall the network worked fine, and we additionally had the hotel Wi-Fi, so the conference network wasn’t stretched much. I must highlight that DebConf is my only conference where almost everything, and every piece of software, is developed in-house for the conference and modified as needed on the fly. Even the event recording cameras, audio checks, direction, recording, and editing are all done with in-house software by volunteer-attendees (in some cases remote ones as well), all trained on the sidelines of the conference. The core recording and mixing equipment is owned by Debian and travels to each venue. The rest is sourced locally.

Gaming Rig which served as DC23 gateway router

It was fun seeing how almost everything was coordinated over text on IRC. If a talk/event was missing a talkmeister, a director, or a camera person, a quick text on the #debconf channel would be enough for someone to volunteer. The video team had a dedicated support channel for each conference venue for any issues and was quick to respond and fix stuff.

Network information. Screengrab from closing ceremony

It rained for the initial days, which gave us cool weather. The swag team had decided to hand out umbrellas in the swag kit, which turned out to be quite useful. The swag kit was praised for quality and selection - many thanks to Anupa, Sruthi, and others. It was fun wearing different color T-shirts, all designed by Abraham: red for volunteers, light green for the video team, green for the core team i.e. staff, and yellow for conference attendees.

With highvoltage

We were already acclimatized by the time DebConf really started, as we had been talking, hacking, and hanging out for the last 7 days, but the rush really started with the start of DebConf. More people joined on the first and second day of the conference. As has been the tradition, an opening talk was prepared by Sruthi and the local team (I highly recommend getting more insights into that process). DebConf day 1 also saw the job fair, where Canonical and FOSSEE, IIT Bombay had stalls for community interactions, which, judging by the crowd, turned out to be quite a hit.

For me, association with DebConf (and Debian) started with volunteering for the video team, so I was going to continue doing that this conference as well. I usually volunteer for talks/events I'm interested in anyway. Handling the camera, talkmeister-ing, and direction are fun activities, though I didn't do sound this time around. Sound seemed difficult, and I didn't want to spoil someone's stream and recording. Talk attendance varied a lot: for the Bits from the DPL talk the hall was full, while for some talks there were barely enough people to handle the volunteering tasks. That's what usually happens, though. DebConf is more a place to come together and collaborate, so talk attendance is sometimes an afterthought.

Audience in highvoltage's Bits from DPL talk

I didn’t submit any talk proposals this time around, as just being on the orga team was already too much work, and I knew the talk preparation would get delayed to the last moment and I would have to rush through it.

Enrico's talk

From day 2 onward, more sponsor stalls appeared in the hallway area: Hopbox by Unmukti, MostlyHarmless and Deeproot (a joint stall), and FOSSEE. The MostlyHarmless stall had nice mechanical keyboards and other fun gadgets. Whenever I got the time, I would go and do some type racing to enjoy the nice, clicky keyboards.

As DebConf tradition dictates, we had a Cheese and Wine party. Everyone brought cheese and other delicacies from their region. Then there was yummy Sadya, a traditional Malayali vegetarian lunch served on banana leaves. There were loads of different dishes served, the names of most of which I couldn’t pronounce or recollect properly, but everything was super delicious.

Day four was the day trip, and I chose to go to Athirappilly Waterfalls and the jungle safari. Pictures describe the beauty better than words. The journey was a bit long, though.

Athirappilly Falls
Tea Gardens

Late that day, we heard the news that Abraham had gone missing. We lost Abraham. He had worked really hard all through the years for Debian and for making this conference happen. Talks were cancelled for the next day, and Jonathan addressed everyone. We went to Abraham’s home the next day to meet his family; the team had arranged buses to Abraham’s place. It was unfortunate that I only got an opportunity to visit his place after he was gone.

Days went by slowly after that. The last day was marked by a small conference dinner. Some of the people had already left. All through that day and the next, we kept saying goodbye to friends with whom we had spent almost a fortnight.

Group photo with all DebConf T-shirts chronologically

This was my 2nd trip to Kochi. Vistara Airways’ UK886 has become the default flight now. I almost learned how to travel in and around Kochi by metro, water metro, airport shuttle, and auto. Things are quite accessible in Kochi, but the metro is a bit expensive compared to Delhi. I left Kochi on the 19th. My flight out was due to leave around 8 PM, so I had the whole day and nothing to do. A direct option would have taken less than 1 hour, but as I had time, I chose to take the long way to the airport. I took an auto rickshaw to Kakkanad Water Metro station, then the water metro to Vyttila Water Metro station. Vyttila serves as an intermobility hub, connecting the water metro, metro, and buses in one place. I switched to the metro there, riding from Vyttila Metro station to Aluva Metro station. At Aluva, I had lunch and then boarded the airport feeder bus to reach Kochi Airport. All in all, I did auto rickshaw > water metro > metro > feeder bus to reach the airport. It was fun and scenic. I must say, public transport and intermodal integration are quite good, and one can transition seamlessly from one mode to the next.

Kochi Water Metro
Scenes from Kochi Water Metro

DebConf23 served its purpose of getting existing Debian people together, as well as getting new people interested and contributing to Debian. People who came are still contributing to Debian, and that’s amazing.

Streaming video stats. Screengrab from closing ceremony

The conference wasn’t without its fair share of trouble. There were multiple money transfer woes, and being in India didn’t help. Many thanks to the multiple organizations that were proactive in helping out. On top of this, there was conference visa uncertainty and other issues which troubled the visa team a lot.

Kudos to everyone who made this possible. Surely I’m going to miss some names, so thank you, all of you; you know how much you have done to make this event possible.

Now, DebConf24 is scheduled for Busan, South Korea, and work is already in full swing. As usual, I’m helping with the fundraising part and plan to attend too. Let’s see if I can make it or not.

DebConf23 Group Photo. Click to enlarge.
Credit - Aigars Mahinovs

In the end, we kept on saying that no DebConf at this scale would come back to India for the next 10 or 20 years. It’s too much trouble, to be frank. It was probably a peak that we might not reach again. I would be happy to be proven wrong though :)

Categories: FLOSS Project Planets

Talking Drupal: Talking Drupal #452 - Starshot & Experience Builder

Planet Drupal - Mon, 2024-05-27 14:00

Today we are talking about web design and development, from a group of people with one thing in common… We love Drupal. This is episode #452 Starshot & Experience Builder.

For show notes visit: www.talkingDrupal.com/452

Topics
  • What is Starshot
  • What is Experience builder
  • How will Starshot build on Drupal Core
  • Will Experience builder be added to Core
  • Listener thejimbirch:
    • When will people hear about their pledge
  • Listener brook_heaton:
    • Will experience builder be compatible with layout builder
  • Will Experience builder allow people to style content
  • Listener Matthieu Scarset
    • Who is Starshot trying to compete with
  • Listener Andy Blum
    • Does the DA or other major hosting companies plan to set up cheap, easy hosted Drupal
  • Listener Ryan Szarma
    • Who does this initiative serve in the business community
  • How can people get involved
Resources Guests

Lauri Eskola - lauriii

Hosts

Nic Laflin - nLighteneddevelopment.com nicxvan John Picozzi - epam.com johnpicozzi Matthew Grasmick - grasmash

MOTW Correspondent

Martin Anderson-Clutz - mandclu.com mandclu

  • Brief description:
    • Have you ever wanted to have your modules create content when they’re installed? There’s a module for that.
  • Module name/project name:
  • Brief history
    • How old: created in Oct 2015 by prolific contributor Lee Rowlands (larowlan) though the most recent releases are by Sascha Grossenbacher (Berdir), also a maintainer of many popular Drupal modules
    • Versions available: 2.0.0-alpha2, which works with Drupal 9 and 10
  • Maintainership
    • Security coverage: opted in, but needs a stable release
    • Test coverage
    • Documentation
    • Number of open issues: 105 open issues, 29 of which are bugs against the current branch
  • Usage stats:
    • Almost 20,000 sites
  • Module features and usage
    • Provides a way for modules to include default content, in the same way that many modules already include default configuration
    • The module exports content as YAML files, and your module can specify the content that should be exported by listing the UUIDs in the info.yml file
    • It also provides a number of drush commands, to export a single entity, to export an entity and all of its dependencies, or to bulk export all of the content referenced in a module’s .info.yml file
    • There is also a companion project to export default content using an action within a view, which also makes me think it could probably be automated with something like ECA if you needed that
    • Exported content should be kept in a content directory in your module, where it will be imported during install on any site that has the default_content module installed
    • I thought this would be a good module to cover today because Drupal core’s recipe system also includes support for default content, so when you install a recipe it will similarly import any YAML-encoded content in the recipe. In fact, I used this module for the first time exporting taxonomy terms I wanted a recipe to create as default values for a taxonomy it creates. Since Recipes will be a big part of Starshot, I expect default_content to be getting a lot of use in the coming months
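
For instance, here's a sketch of what the declaration might look like in a module's info.yml file, as described above (the module name and UUIDs below are hypothetical):

# my_module.info.yml
name: My Module
type: module
core_version_requirement: ^9 || ^10
default_content:
  node:
    - c9a89616-7057-4971-8337-555e425ed782  # hypothetical UUIDs
  taxonomy_term:
    - 0da26e0a-2563-4c18-9d44-d22d07f674a7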
Categories: FLOSS Project Planets

ADCI Solutions: Field mapping when integrating Drupal with Salesforce

Planet Drupal - Mon, 2024-05-27 10:41
The existing module for Drupal integration with Salesforce was not a good fit for this client's needs. For this integration, we had to set up field mapping: https://www.adcisolutions.com/work/field-mapping
Categories: FLOSS Project Planets

Real Python: How to Create Pivot Tables With pandas

Planet Python - Mon, 2024-05-27 10:00

A pivot table is a data analysis tool that allows you to take columns of raw data from a pandas DataFrame, summarize them, and then analyze the summary data to reveal its insights.

Pivot tables allow you to perform common aggregate statistical calculations such as sums, counts, averages, and so on. Often, the information a pivot table produces reveals trends and other observations your original raw data hides.

Pivot tables were originally implemented in early spreadsheet packages and are still a commonly used feature of the latest ones. They can also be found in modern database applications and in programming languages. In this tutorial, you’ll learn how to implement a pivot table in Python using pandas’ DataFrame.pivot_table() method.

Before you start, you should familiarize yourself with what a pandas DataFrame looks like and how you can create one. Knowing the difference between a DataFrame and a pandas Series will also prove useful.

In addition, you may want to use the data analysis tool Jupyter Notebook as you work through the examples in this tutorial. Alternatively, JupyterLab will give you an enhanced notebook experience, but feel free to use any Python environment you wish.

The other thing you’ll need for this tutorial is, of course, data. You’ll use the Sales Data Presentation - Dashboards data, which is freely available for you to use under the Apache 2.0 License. The data has been made available for you in the sales_data.csv file that you can download by clicking the link below.

Get Your Code: Click here to download the free sample code you’ll use to create a pivot table with pandas.

This table provides an explanation of the data you’ll use throughout this tutorial:

Column Name       Data Type (PyArrow)  Description
order_number      int64                Order number (unique)
employee_id       int64                Employee’s identifier (unique)
employee_name     string               Employee’s full name
job_title         string               Employee’s job title
sales_region      string               Sales region employee works within
order_date        timestamp[ns]        Date order was placed
order_type        string               Type of order (Retail or Wholesale)
customer_type     string               Type of customer (Business or Individual)
customer_name     string               Customer’s full name
customer_state    string               Customer’s state of residence
product_category  string               Category of product (Bath Products, Gift Basket, Olive Oil)
product_number    string               Product identifier (unique)
product_name      string               Name of product
quantity          int64                Quantity ordered
unit_price        double               Selling price of one product
sale_price        double               Total sale price (unit_price × quantity)

As you can see, the table stores data for a fictional set of orders. Each row contains information about a single order. You’ll become more familiar with the data as you work through the tutorial and try to solve the various challenge exercises contained within it.

Throughout this tutorial, you’ll use the pandas library to allow you to work with DataFrames and the newer PyArrow library. The PyArrow library provides pandas with its own optimized data types, which are faster and less memory-intensive than the traditional NumPy types pandas uses by default.

If you’re working at the command line, you can install both pandas and pyarrow using python -m pip install pandas pyarrow, perhaps within a virtual environment to avoid clashing with your existing environment. If you’re working within a Jupyter Notebook, you should use !python -m pip install pandas pyarrow. With the libraries in place, you can then read your data into a DataFrame:

>>> import pandas as pd
>>> sales_data = pd.read_csv(
...     "sales_data.csv",
...     parse_dates=["order_date"],
...     dayfirst=True,
... ).convert_dtypes(dtype_backend="pyarrow")

First of all, you used import pandas to make the library available within your code. To construct the DataFrame and read it into the sales_data variable, you used pandas’ read_csv() function. The first parameter refers to the file being read, while parse_dates highlights that the order_date column’s data is intended to be read as the datetime64[ns] type. But there’s an issue that will prevent this from happening.

In your source file, the order dates are in dd/mm/yyyy format, so to tell read_csv() that the first part of each date represents a day, you also set the dayfirst parameter to True. This allows read_csv() to now read the order dates as datetime64[ns] types.

With order dates successfully read as datetime64[ns] types, the .convert_dtypes() method can then successfully convert them to a timestamp[ns][pyarrow] data type, and not the more general string[pyarrow] type it would have otherwise done. Although this may seem a bit circuitous, your efforts will allow you to analyze data by date should you need to do this.

If you want to take a look at the data, you can run sales_data.head(2). This will let you see the first two rows of your dataframe. When using .head(), it’s preferable to do so in a Jupyter Notebook because all of the columns are shown. Many Python REPLs show only the first and last few columns unless you use pd.set_option("display.max_columns", None) before you run .head().

If you want to verify that PyArrow types are being used, sales_data.dtypes will confirm it for you. As you’ll see, each data type contains [pyarrow] in its name.

Note: If you’re experienced in data analysis, you’re no doubt aware of the need for data cleansing. This is still important as you work with pivot tables, but it’s equally important to make sure your input data is also tidy.

Tidy data is organized as follows:

  • Each row should contain a single record or observation.
  • Each column should contain a single observable or variable.
  • Each cell should contain an atomic value.

If you tidy your data in this way, as part of your data cleansing, you’ll also be able to analyze it better. For example, rather than store address details in a single address field, it’s usually better to split it down into house_number, street_name, city, and country component fields. This allows you to analyze it by individual streets, cities, or countries more easily.

In addition, you’ll also be able to use the data from individual columns more readily in calculations. For example, if you had columns room_length and room_width, they can be multiplied together to give you room area information. If both values are stored together in a single column in a format such as "10 x 5", the calculation becomes more awkward.
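For example, with these hypothetical columns, the derived value becomes a one-liner:

# Assuming a DataFrame named rooms with the two columns described above
rooms["room_area"] = rooms["room_length"] * rooms["room_width"]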

The data within the sales_data.csv file is already in a suitably clean and tidy format for you to use in this tutorial. However, not all raw data you acquire will be.

It’s now time to create your first pandas pivot table with Python. To do this, first you’ll learn the basics of using the DataFrame’s .pivot_table() method.

Get Your Code: Click here to download the free sample code you’ll use to create a pivot table with pandas.

Take the Quiz: Test your knowledge with our interactive “How to Create Pivot Tables With pandas” quiz. You’ll receive a score upon completion to help you track your learning progress:

Interactive Quiz

How to Create Pivot Tables With pandas

This quiz is designed to push your knowledge of pivot tables a little bit further. You won't find all the answers by reading the tutorial, so you'll need to do some investigating on your own. By finding all the answers, you're sure to learn some other interesting things along the way.

How to Create Your First Pivot Table With pandas

Now that your learning journey is underway, it’s time to progress toward your first learning milestone and complete the following task:

Calculate the total sales for each type of order for each region.
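If you'd like a preview of the kind of call involved, here's one possible sketch using the columns described earlier; it isn't necessarily the tutorial's exact solution:

>>> sales_data.pivot_table(
...     values="sale_price",
...     index="sales_region",
...     columns="order_type",
...     aggfunc="sum",
... )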

Read the full article at https://realpython.com/how-to-pandas-pivot-table/ »

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

Maui Report 23

Planet KDE - Mon, 2024-05-27 09:47

Today, we bring you a new report on the Maui Project’s progress after our previous 3.1.0 release, the last one based on Qt5. Here you will find detailed information on the new features, bug fixes, and improvements that have been made to the set of apps, frameworks, and shell environment.

To follow the Maui Project’s development or say hi, you can join us on Telegram @mauiproject.

We are present on Twitter and Mastodon.

Maui4 Apps

The complete set of Maui Applications has been fully ported to Qt6. In the migration process, some features were disabled or removed; now that the initial porting is finalized, the work is focused on fixing typos and newly introduced bugs, and making sure all the features work correctly. More detailed information is covered and listed below.

The previous stable release – 3.1.0 – is the last one based on Qt5 and MauiKit3. Although a new stable release was scheduled for this time, we instead present a beta release of the MauiKit4 apps and frameworks, fully based on Qt6 and KF6. A stable release is planned for August 2024.

The ported versions of all the apps can be found in the qt6 branches, and testing packages will be published as they become available.

Porting & Pending

Even though all of the Maui Apps have now been ported, there is still pending work to verify that all the features work correctly and that no regressions were introduced.

List of notable changes:

  • Vvave lost support for streaming remote files stored in NextCloud and gained mini-mode support.
  • Index application has been fully ported, Pix, Buho, Nota, Station, and all the other apps.
  • Arca has been ported, and the archive manager has been moved into a framework to be shared, for example, in Index and Shelf.
  • Shelf is missing Comic book support coming from MauiKit-Documents, which is being refactored.

Another area of work is the newest set of apps, which aim to reach feature parity with the older ones.

Note! The MauiKit4 apps have not yet been tested under Android, so there are no testing APK builds yet. APK testing packages will be published on the Maui Telegram channel once they start becoming available.

https://x.com/cmhiguita/status/1785465786930184697

MauiKit4 Frameworks Porting & Documentation

The porting of the frameworks has been finalized. All the frameworks are now Qt6-only – Qt5 support has been dropped, and the last stable Qt5 version will remain 3.1.0.

The only frameworks with pending documentation are MauiKit-Documents and the newly introduced MauiKit-Archiver.

The following is a list of fixes and new features introduced:

  • Fixes to the Documents framework for opening locked PDF documents, and initial support for searching text
  • Fine-tuning of all the MauiKit4 controls implementations, and fixes for small bugs all around

Pending

  • TextEditor is pending a port to a more powerful text rendering/layout engine
  • Documents’ comic book support is to be refactored to solve Android crashing issues with multithreading
  • Three new frameworks are still pending a stable release: MauiKit-SDK, MauiKit-Git, and MauiKit-Archiver. Arca, the archive file manager, is to be ported to use MauiKit-Archiver, and Index as well.

Maui Shell

Maui Shell and its accompanying projects, such as CaskServer and Maui-Settings, have long been ported over to Qt6. However, in the porting, a lot of small details broke and need some love and fixing, which brings us to the roadmap plan of making the first stable release before this year ends. So around November, the first stable release should be out.

 

And that’s all for this report.

New release schedule

The post Maui Report 23 appeared first on MauiKit — #UIFramework.

Categories: FLOSS Project Planets

LN Webworks: How to Fix Drupal Issues with Git Patches Using 'git apply patch' Command

Planet Drupal - Mon, 2024-05-27 05:44

Have you ever faced problems on your Drupal website that originate in Drupal core or its contributed modules and themes? Or do you want to enhance their functionality in ways that aren’t possible through your custom modules? Since we can’t change that code directly, what could the solution be? Well, patching the code might just be the solution you’re looking for.

In this blog, we’ll walk through the process of creating and applying patches using the git diff and git apply commands, and we’ll also apply patches through the composer install command. Applying patches lets you meet all of these requirements, though you’ll first need solid coding skills to understand Drupal’s code. A rough sketch of the workflow is shown below.
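As a hedged sketch of what that workflow usually looks like (the file names and paths here are illustrative, and the Composer part assumes the widely used cweagans/composer-patches plugin, which the post itself may or may not rely on):

# Capture your local changes as a patch, verify it, then apply it:
git diff > patches/my_module-fix.patch
git apply --check patches/my_module-fix.patch
git apply patches/my_module-fix.patch

To have Composer reapply the patch on every build, install the plugin with "composer require cweagans/composer-patches" and register the patch in composer.json, for example:

"extra": {
    "patches": {
        "drupal/my_module": {
            "Short description of the fix": "patches/my_module-fix.patch"
        }
    }
}

After that, running composer install applies the patch automatically.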

Categories: FLOSS Project Planets

Python Bytes: #385 RESTing on Postgres

Planet Python - Mon, 2024-05-27 04:00
Topics covered in this episode:

  • PostgresREST (https://github.com/PostgREST/postgrest)
  • How Python Asyncio Works: Recreating it from Scratch (https://jacobpadilla.com/articles/recreating-asyncio)
  • Bend (https://higherorderco.com)
  • The Smartest Way to Learn Python Regular Expressions (https://leanpub.com/regexpython/)
  • Extras
  • Joke

Watch on YouTube: https://www.youtube.com/watch?v=f-tuQBIn1fQ

About the show

Sponsored by Mailtrap: pythonbytes.fm/mailtrap

Connect with the hosts

  • Michael: @mkennedy@fosstodon.org
  • Brian: @brianokken@fosstodon.org
  • Show: @pythonbytes@fosstodon.org

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Tuesdays at 10am PT. Older video versions are available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (https://pythonbytes.fm/friends-of-the-show); we’ll never share it.

Michael #1: PostgresREST (https://github.com/PostgREST/postgrest)

  • PostgREST serves a fully RESTful API from any existing PostgreSQL database. It provides a cleaner, more standards-compliant, faster API than you are likely to write from scratch.
  • Speedy: first, the server is written in Haskell using the Warp HTTP server (aka a compiled language with lightweight threads); next, it delegates as much calculation as possible to the database; finally, it uses the database efficiently with the Hasql library.
  • PostgREST handles authentication (via JSON Web Tokens) and delegates authorization to the role information defined in the database. This ensures there is a single declarative source of truth for security.

Brian #2: How Python Asyncio Works: Recreating it from Scratch (https://jacobpadilla.com/articles/recreating-asyncio)

  • By Jacob Padilla.
  • Cool tutorial walking through how async works, including a generators review, the event loop, sleeping, yield to await, and await with asyncio.
  • Another great async resource: Build your Own Async (https://www.youtube.com/watch?v=Y4Gt3Xjd7G8), a David Beazley talk from 2019.

Michael #3: Bend (https://higherorderco.com)

  • A massively parallel, high-level programming language.
  • With Bend you can write parallel code for multi-core CPUs/GPUs without being a C/CUDA expert with 10 years of experience.
  • It feels just like Python!
  • No need to deal with the complexity of concurrent programming: locks, mutexes, atomics... any work that can be done in parallel will be done in parallel.

Brian #4: The Smartest Way to Learn Python Regular Expressions (https://leanpub.com/regexpython/)

  • By Christian Mayer, Zohaib Riaz, and Lukas Rieger.
  • A self-published ebook on Python regular expressions that combines book-form readings, links to video course sections, and puzzle challenges to complete online.
  • It’s a paid resource, but the minimum price is free.

Extras

Brian:

  • Replay (https://www.jordanmechner.com/en/books/replay) - a graphic memoir by Prince of Persia creator Jordan Mechner, recounting his own family story of war, exile and new beginnings.

Michael:

  • PyCon 2026 (https://en.wikipedia.org/wiki/Python_Conference)

Joke: Shell Scripts
Categories: FLOSS Project Planets

Zato Blog: Web scraping as an API service

Planet Python - Mon, 2024-05-27 04:00
Web scraping as an API service

2024-05-27, by Dariusz Suchojad

Overview

In systems-to-systems integrations, there comes an inevitable time when we have to employ some kind of a web scraping tool to integrate with a particular application. Despite its not being our first choice, it is good to know what to use at such a time - in this article, I provide a gentle introduction to my favorite tool of this kind, called Playwright, followed by sample Python code that integrates it with an API service.

Naturally, in the context of backend integrations, web scraping should be avoided and, generally, it should be considered a last resort. The basic issue here is that while the term UI contains the "interface" part, it is not really the "Application Programming" interface that we would like to have.

It is not that the UI cannot be programmed against. After all, a web browser does just that: it takes a web page and renders it as expected. The same goes for desktop or mobile applications. Also, anyone integrating with mainframe computers will recognize that this is basically what 3270 can be used for too.

Rather, the fundamental issue is that web scraping goes against the principle of separation of layers and roles across frontend, middleware and backend, which in turn means that authors of resources (e.g. HTML pages) do not really expect many people to access them in automated ways.

Perhaps they actually should expect it, and web pages should finally start to resemble genuine knowledge graphs, easy to access both manually and through automation tools. The reality today, however, is that this is not the case and, in comparison with backend systems, the whole web scraping space is relatively brittle, which is why we shun this approach in integrations.

Yet, another part of reality, particularly in enterprise integrations, is that people may be sometimes given access to a frontend application on an internal network and that is it. No API, no REST, no JSON, no POST data, no real data formats, and one is simply supposed to fill out forms as part of a business process.

Typically, such a situation will result in an integration gap. There will be fully automated parts in the business process preceding this gap, with multiple systems coordinated towards a specific goal and there will be subsequent steps in the process, also fully automated.

Or you may be given access only to a specific frontend, and only through VPN via a single remote Windows desktop. Getting access to a REST API may take months or may never be realized because of some high-level licensing issues. This is not uncommon in real life.

Such a gap can be a jarring and sore point, truly ruining the whole, otherwise fluid, integration process. This creates tension, and to resolve the tension we can, should all attempts to find a real API fail, finally resort to web scraping.

It is mostly in this context that I am looking at Playwright below. The tool is good, it has many other uses that go beyond the scope of this text, and it is well worth knowing, for instance for frontend testing of your backend systems. But when we deal with API integrations, we should not overdo web scraping.

Needless to say, if web scraping is what you do primarily, your perspective will be somewhat different. You will not need any explanation of why it is needed or when, and you may only be looking for a way to wrap your web scraping code in API services. This article will explain that too.

Introducing Playwright

The nice part of Playwright is that we can use it to visually prepare a draft of Python code that will scrape a given resource. That is, instead of programming it in Python, we go to an address, fill out a form, click buttons and otherwise use everything as usual, and Playwright generates for us the code that will later be used in integrations.

That code will require a bit of clean-up work, which I will talk about below, but overall it works very nicely and is certainly useful. The result is not one of those do-not-touch auto-generated pieces of code that are better left alone.

While there are better ways to integrate with Jira, I chose that application as an example of Playwright's usage simply because I cannot show you any internal application in a public blog post.

Below, there are two windows. One is Playwright’s, emulating a BlackBerry device to open a resource. I was clicking around, I provided an email address and then I clicked the same email field once more. To the right, based on my actions, we can find the generated Python code, which I consider quite good and readable.

The Playwright Inspector, the tool that gave us the code, will keep recording all of our actions until we click the "Record" button, which then allows us to click the button next to it, "Copy code to clipboard". We can then save the code to a separate file and run it on demand, automatically.

But first, we will need to install Playwright.

Installing and starting Playwright

The tool is written in TypeScript and can be installed using npx, which in turn is part of NodeJS.

Afterwards, the "playwright install" call is needed as well because that will potentially install runtime dependencies, such as Chrome libraries.

Finally, we install Playwright using pip as well, because we want to access it from Python. Note that if you are installing Playwright under Zato, the "/path/to/pip" will typically be "/opt/zato/code/bin/pip".

npx -g --yes playwright install
playwright install
/path/to/pip install playwright

We can now start it as below. I am using BlackBerry as an example of what Playwright is capable of. Also, it is usually more convenient to use a mobile version of a site when the main window and the Inspector are opened side by side, but you may prefer to use Chrome, Firefox or anything else.

playwright codegen https://example.atlassian.net/jira --device "BlackBerry Z30"

That is practically everything, as far as using Playwright to generate code in our context goes. Open the tool, fill out forms, copy the code to a Python module, done.

What is still needed, though, is cleaning up the resulting code and embedding it in an API integration process.

Code clean-up

After you use Playwright for a while with longer forms and pages, you will notice that the generated code tends to accumulate parts that repeat.

For instance, in the module below, which I already cleaned up, the same "[placeholder=\"Enter email\"]" reference to the email field is used twice, even if a programmer developing this code would prefer to introduce a variable for that.
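A minimal sketch of that kind of clean-up, inside the run() function of the module shown further below (page and Config come from that module), would be to hoist the locator into a variable:

# Reuse a single Locator object instead of repeating the selector.
email_field = page.locator('[placeholder="Enter email"]')
email_field.click()
email_field.fill(Config.Email)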

There is no good answer to the question of what to do about it. On the one hand, obviously, being programmers, we would prefer not to repeat that kind of detail. On the other hand, if we clean up the code too much, it may result in too much of a maintenance burden, because we need to keep in mind that we do not really want to invest too much in web scraping and, should there be a need to repeat the whole process, we do not want to end up with Playwright’s code auto-generated from scratch once more, without any of our clean-up.

A good compromise is to at least extract any kind of credentials from the code to environment variables or a similar place, and to remove some of the code comments that Playwright generates. The result below is what the code should look like at the end. Not too much effort, without leaving the whole code as it was originally either.

Save the code below as "play1.py" as this is what the API service below will use.

# -*- coding: utf-8 -*-

# stdlib
import os

# Playwright
from playwright.sync_api import Playwright, sync_playwright

class Config:
    Email = os.environ.get('APP_EMAIL', 'zato@example.com')
    Password = os.environ.get('APP_PASSWORD', '')
    Headless = bool(os.environ.get('APP_HEADLESS', False))

def run(playwright: Playwright) -> None:

    browser = playwright.chromium.launch(headless=Config.Headless) # type: ignore
    context = browser.new_context()

    # Open new page
    page = context.new_page()

    # Open project boards
    page.goto("https://example.atlassian.net/jira/software/projects/ABC/boards/1")
    page.goto("https://id.atlassian.com/login?continue=https%3A%2F%2Fexample.atlassian.net%2Flogin%3FredirectCount%3D1%26dest-url%3D%252Fjira%252Fsoftware%252Fprojects%252FABC%252Fboards%252F1%26application%3Djira&application=jira")

    # Fill out the email
    page.locator("[placeholder=\"Enter email\"]").click()
    page.locator("[placeholder=\"Enter email\"]").fill(Config.Email)

    # Click #login-submit
    page.locator("#login-submit").click()

with sync_playwright() as playwright:
    run(playwright)

Web scraping as a standalone activity

We have the generated code, so the first thing to do with it is to run it from the command line. This will result in a new Chrome window accessing Jira - it is Chrome, not BlackBerry, because that is the default for Playwright.

The window will close soon enough, but this is fine - that code only demonstrates a principle, it is not a full integration task.

python /path/to/play1.py

It is also useful that we can run the same Python module from our IDE, giving us the ability to step through the code line by line, observing what changes when and why.

Web scraping as an API service

Finally, we are ready to invoke the standalone module from an API service, as in the following code, which we are also going to make available as a REST channel.

A couple of notes about the Python service below:

  • We invoke Playwright in a subprocess, as a shell command
  • We accept input through data models although we do not provide any output definition because it is not needed here
  • When we invoke Playwright, we set APP_HEADLESS to True, which ensures that it does not attempt to actually display a Chrome window. After all, we intend for this service to run on Linux servers, in the backend, where such a thing would be unlikely to work.

Other than that, this is a straightforward Zato service - it receives input, carries out its work and a reply is returned to the caller (here, empty).

# -*- coding: utf-8 -*-

# stdlib
from dataclasses import dataclass

# Zato
from zato.server.service import Model, Service

# ###########################################################################

@dataclass(init=False)
class WebScrapingDemoRequest(Model):
    email: str
    password: str

# ###########################################################################

class WebScrapingDemo(Service):
    name = 'demo.web-scraping'

    class SimpleIO:
        input = WebScrapingDemoRequest

    def handle(self):

        # Path to a Python installation that Playwright was installed under
        py_path = '/path/to/python'

        # Path to a Playwright module with code to invoke
        playwright_path = '/path/to/the-playwright-module.py'

        # This is a template script that we will invoke in a subprocess
        command_template = """
        APP_EMAIL={app_email} APP_PASSWORD={app_password} APP_HEADLESS=True {py_path} {playwright_path}
        """

        # This is our input data
        input = self.request.input # type: WebScrapingDemoRequest

        # Extract credentials from the input ..
        email = input.email
        password = input.password

        # .. build the full command, taking all the config into account ..
        command = command_template.format(
            app_email = email,
            app_password = password,
            py_path = py_path,
            playwright_path = playwright_path,
        )

        # .. invoke the command in a subprocess ..
        result = self.commands.invoke(command)

        # .. if it was not a success, log the details received ..
        if not result.is_ok:
            self.logger.info('Exit code -> %s', result.exit_code)
            self.logger.info('Stderr -> %s', result.stderr)
            self.logger.info('Stdout -> %s', result.stdout)

# ###########################################################################

Now, the REST channel:

The last thing to do is to invoke the service - I am using curl from the command line below but it could very well be Postman or a similar option.

curl localhost:17010/demo/web-scraping -d '{"email":"hello@example.com", "password":"abc"}' ; echo

There will be no Chrome window this time around because we run Playwright in headless mode. There will be no output from curl either, because we do not return anything from the service, but in the server logs we will find details such as the ones below.

We can learn from the log that the command took close to 4 seconds to complete, that the exit code was 0 (indicating success), and that there was no stdout or stderr at all.

INFO - Command ` APP_EMAIL=hello@example.com APP_PASSWORD=abc APP_HEADLESS=True /path/to/python /path/to/the-playwright-module.py ` completed in 0:00:03.844157, exit_code -> 0; len-out=0 (0 Bytes); len-err=0 (0 Bytes); cid -> zcmdc5422816b2c6ff9f10742134

We are now ready to continue working on it - for instance, you will notice that the password is visible in the logs, and this should not be allowed.

But all such work is extra in comparison with the main theme - we have Playwright, a tool that allows us to quickly integrate with frontend applications, and we can automate it through API services. Just as expected.

Categories: FLOSS Project Planets

The Drop Times: Closing Chapter: Reflecting on My Time with The DropTimes

Planet Drupal - Mon, 2024-05-27 02:30

Dear Readers,

As I write my final The DropTimes newsletter, I’m filled with a bittersweet blend of gratitude and nostalgia. When I first joined The DropTimes, my understanding of Drupal was minimal, but stepping into this expansive world, I was not only educated but deeply inspired by the robust spirit of our community. Throughout my tenure, I’ve had the unique privilege to connect with many of you – talented individuals from across the globe, each sharing the same passion and dedication.

Over these months, The DropTimes has stood as a never-fading testimony to the vibrant and ever-evolving Drupal world, chronicling its achievements, challenges, and the incredible community that drives its success. Today, I am sharing not just another update, but a personal farewell. May the coming chapters of my life lead me towards new beginnings, filled with personal and professional growth.
As I close this significant chapter at The DropTimes, I want to extend my deepest gratitude to all of you—my colleagues, our readers, and the entire Drupal community—for the support, inspiration, and camaraderie. It has been a profound journey, one that has enriched me beyond words, and I look forward to carrying these memories and lessons with me into my future endeavors.

So, with that said, let me, for the last time, take you through the stories we covered last week.

Kazima Abbas, a sub-editor at TDT, unveils insights from two significant events. Acquia Engage London 2024, which took place from May 21 to 22, marked the first European stop of the 2024 Digital Freedom Tour. It convened digital leaders who shared their expertise, insights, and practical tips on crafting impactful customer experiences. Learn more here. The next event is EvolveDrupal Montreal 2024, organized by Evolving Web following the success of EvolveDrupal Atlanta. This upcoming summit, set for June 14, 2024, marks its return to Montreal, where it debuted in May 2023. Read about this in detail here.

A few other important updates are; Drupal has launched the IXP Fellowship Initiative survey to bolster support for inexperienced developers looking to kickstart their careers in the Drupal ecosystem. By defining core competencies and gathering community input, this initiative aims to bridge the gap between training and practical experience, ultimately nurturing new talent within the community. Participate in shaping the future of Drupal development and read more about the initiative here.

Drupal 11 is set to remove several long-standing modules, such as Actions UI, Book, and Forum, in a bid to streamline its core functionality and focus on innovation. However, users need not fret as these features will still be accessible through contributed modules. This strategic move underscores Drupal's commitment to empowering site builders and ensuring a lean, efficient platform for ambitious digital experiences. Learn more about the changes and their implications here.

Michael Anello, on DrupalEasy, sheds light on the pressing need for fresh talent in the Drupal community, as evidenced by the concerning lack of new developers highlighted at DrupalCon Portland 2024. With only 9.1% of respondents under 30 in the 2024 Drupal Developer Survey, urgent action is needed to attract and retain young developers. Michael proposes several strategic measures, including modernizing Drupal's code and creating educational programs, to address this challenge. Get involved in shaping the future of Drupal development and read more about Michael's insights here.

New dates have been announced for DrupalCon Asia 2024, set to take place in Singapore from December 9th to 11th, 2024. Learn more about the three-day event here. Applications are now open until June 28th, 2024, for grants and scholarships to attend DrupalCon Barcelona 2024. The initiative, led by the Drupal Association in partnership with Kuoni Tumlare Congress, aims to promote diversity and inclusivity within the open-source community.

The Drupal Brisbane meetup is scheduled to resume on June 18, 2024. This event offers both in-person attendance at Brisbane Square Library and the option to participate online, providing an opportunity for individuals to engage in discussions surrounding Drupal and contribute to the community. Interested participants are encouraged to submit their topic suggestions, fostering an inclusive environment for collaborative discourse. A complete list of events for the week is available here.

The Technical Working Group (TWG) has scheduled a final discussion on proposed changes to Drupal's coding standards for June 5, 2024, UTC. The focus of this discussion will be the coding style for PHP Enumerations, inviting community input to refine Drupal's coding practices.

The Drupal Association has appointed Simba Ndemera as its new Chief Financial and Operations Officer, effective April 2024. With nearly three decades of experience in finance and a strong background in nonprofit accounting, Simba brings valuable expertise to his role. His dedication to community service and advancing open-source technology aligns perfectly with the organization's mission, promising a collaborative effort toward progress and inclusivity in the tech industry.

Bluechip Tech Limited, headquartered in the UK, has unveiled a new training course focusing on Drupal responsive design. Geared towards educating participants on crafting responsive and adaptive designs with Drupal and its modules, the course covers essential principles and techniques. Learn more about this new training here. Additionally, Evolving Web is offering a series of in-person Drupal training sessions next month in Montreal, aimed at enhancing digital practices for teams and individuals. These full-day training sessions, scheduled for June 11 to 13, cover crucial aspects of Drupal, providing participants with expert knowledge and practical skills.

Frontkom has announced the imminent release of Drupal Gutenberg 3.0.0, promising enhanced customization options and improved support for content blocks in Drupal. This update, designed to simplify content creation with advanced style controls and user-defined patterns, aims to elevate the user experience within the Drupal ecosystem.

Introducing the Time Machine module for Drupal, crafted by Mandip Singh, offering administrators seamless site restoration capabilities to any desired point in time. With comprehensive rollback features covering content, configuration, and user data, this module ensures robust disaster recovery and facilitates safe experimentation. Read more about this new module here.

The Drupal Association has issued an update on its Global Accessibility Awareness Day (GAAD) Pledge for 2024, reaffirming Drupal's commitment to accessibility standards. Led by Mike Gifford, the Drupal accessibility maintainers are actively working to align with WCAG 2.2 AA standards, aiming for inclusivity across the platform. With ongoing efforts to address accessibility issues and promote community involvement, Drupal continues its mission to ensure accessibility for all users.

We acknowledge that there are more stories to share. However, due to selection constraints, we must pause further exploration for now.

To get timely updates, follow us on LinkedIn, Twitter and Facebook. Also, join us on Drupal Slack at #thedroptimes.

For the Last Time,
Sincerely,
Elma John
Sub-editor, The DropTimes.

Categories: FLOSS Project Planets

Quansight Labs Blog: Dataframe interoperability - what has been achieved, and what comes next?

Planet Python - Sun, 2024-05-26 20:00
An overview of the dataframe landscape, and solution to the "we only support pandas" problem
Categories: FLOSS Project Planets

FOSSASIA 2024: An Unforgettable Experience in Vietnam

Planet KDE - Sun, 2024-05-26 18:53
Journey to Vietnam

Embarking on my journey to FOSSASIA 2024, I felt a mix of excitement and anticipation. As I boarded my flight, the reality of the adventure ahead began to sink in. I arrived early in the morning on April 7th, filled with enthusiasm and a bit of jet lag, ready to dive into the vibrant culture of Vietnam and the exhilarating events planned for the conference.

Arrival in Hanoi

Landing in Hanoi, I was greeted by the warm, humid air and a bustling airport scene. I quickly hopped onto Bus Express 86, which conveniently took me from the airport to the heart of the city. The ride itself was a mini-tour, offering glimpses of Hanoi’s unique blend of traditional and modern architecture. My destination was Hotel LakeSide, where I was warmly welcomed and offered an early check-in at no extra cost—a gesture that felt like a blessing after a long flight. The hotel staff’s generosity allowed me to freshen up and catch a few hours of much-needed sleep before the scheduled Hanoi City Tour organized by FOSSASIA.

A special shout-out goes to Lily’s Travel Agency for arranging an amazing stay throughout the conference days. For transportation, I relied on Grab, a widely used cab application in Vietnam similar to Uber. Despite the language barrier, the local drivers were exceptionally supportive and always willing to go the extra mile, which made commuting around the city a breeze.

Exploring Hanoi

By 2 pm, I was ready for the city tour, excited to explore Hanoi’s rich history and culture. Our itinerary included visits to the One Pillar Pagoda, the Temple of Literature & National University, and the Imperial Citadel of Thang Long. Each site was more breathtaking than the last, steeped in history and surrounded by lush greenery. It was a day filled with fun, laughter, and new friendships. We shared stories, took countless photos, and even indulged in some silly antics that added a touch of whimsy to the day (try to find it in the photos!).

After the tour concluded around 7 pm, we split into groups to find dinner. I joined a few new friends for a delicious meal at a local restaurant, savoring the flavors of authentic Vietnamese cuisine. Later, I met Phu Nguyen, a KDE contributor from Hanoi, to collect a monitor for our booth. Phu, who works in Germany, couldn’t attend the event but was incredibly helpful in providing the display. With the monitor in hand, I returned to my hotel, reflecting on a day well spent and eagerly anticipating the start of the conference the next day.

Conference Kickoff

Each morning began with a hearty breakfast at the hotel, overlooking the serene Giang Vo Lake. Armed with hardware, promotional materials, and stickers, I set off for the Posts and Telecommunications Institute of Technology, where FOSSASIA 2024 was held. The venue buzzed with excitement, as hundreds of students and tech enthusiasts gathered around, curious about the event.

Our Booth: The Focal Point

Our booth, strategically placed next to FSFE, COSCUP, and CalyxOS, quickly became the busiest spot at the venue. With the invaluable help of Aniqa and Paul from the KDE promo team, we had an impressive setup that drew in crowds continuously. Aniqa and Paul were instrumental in organizing the booth, ensuring we had everything we needed, and providing ongoing support throughout the event. Tomaz joined us on the first day, bringing a reMarkable tablet and his personal laptop to enhance our setup. Our booth was a vibrant hub of activity, attracting attendees with emulations of Nintendo games and Tomaz’s impressive origami skills. He exchanged his intricate origami creations for discussions about KDE, engaging many curious students.

Engaging with the Community

The conference was a vibrant hub of activity, featuring organizations such as FreeCAD, AlmaLinux, and fossunited. It was inspiring to see the diversity of projects and the passion driving each community. Our booth quickly became the most popular, with attendees lining up to learn about KDE’s latest projects and initiatives. Despite most attendees being college students with limited funds, their enthusiasm for KDE was overwhelming. Tomaz’s origami talent truly stood out, drawing significant attention and sparking numerous conversations about open-source software and community contributions.

Walking through the venue, I marveled at the variety of booths and the innovative projects on display. The FreeCAD team showcased their latest developments in open-source CAD software, while AlmaLinux representatives engaged attendees with their enterprise-grade Linux distribution. fossunited’s booth was a hive of activity, emphasizing their mission to promote and support open-source projects in India. Each interaction was a learning opportunity, and the camaraderie among the open-source communities was palpable.

Evening Get-Togethers

Evenings were spent exploring the local culture, including a memorable Evening Get Together at Ta Hien Beer and Food Street. The lively atmosphere, coupled with delicious street food and refreshing drinks, made for perfect networking opportunities and deepened the bonds formed during the day. The conference days were a whirlwind of activity, leaving us with lasting impressions and numerous connections.

Sapa Valley Trek and Farewell

After the conference, I ventured to Sapa Valley for a three-day trek. The journey was a stark contrast to the bustling city, offering a tranquil escape into nature. Walking among the rice paddies, often alongside local farmers, and soaking in the serene landscape was a rejuvenating experience. The trek left me with fond memories of the picturesque valley, the warmth of the local people, and the breathtaking beauty of Vietnam’s countryside.

I returned on April 15th, with a heart full of memories and a mind brimming with inspiration from FOSSASIA 2024. The conference not only highlighted the incredible work being done in the open-source community but also showcased the rich culture and hospitality of Vietnam. Reflecting on the experience, I felt a deep sense of gratitude and excitement for the future.

I want to extend my heartfelt thanks to KDE e.V. for sponsoring my attendance at this event. Their support made this enriching experience possible, and I am profoundly grateful for the opportunity to represent the KDE Community at FOSSASIA 2024. Until next time, FOSSASIA!

Gallery

Photo captions: Tomaz; Tomaz and me with the booth!; Attendees at the KDE booth; Even more attendees; Tomaz multitasking with Origami!; reMarkable tablet; The booth bustling with attendees; Tomaz and Origami; Hong Phuc Dang with Konqi!; Train Street!; Train Street! Again!; Ubuntu!; Anuvrat at the Food Street!; Food Street!; Origami Horse by Tomaz!; From the International Lounge at Delhi Airport; An interesting mode of transport; Assembly for the Ha Noi City Tour; Temple of Literature!; Souvenirs!; Artists on work!; The Tomb; The Citadel; One Pillar Pagoda!; Inside the One Pillar Pagoda!; View from the One Pillar Pagoda!; My guide during the trek, Mama Mao!; Sa Pa Valley; Getting down from the mountains; Food at Trek.
Categories: FLOSS Project Planets
