Continuing the trend from Drupalcon Amsterdam, I hosted an informal BoF session at Drupalcon Barcelona, for freelancers to chat among themselves. As a lot more of Drupal's space is being occupied by big players these days, I like to think this helps "single players" carve out a space at the conference.
My notes are publicly available as a g.d.o wiki page, no less, so I won't add to them too much here. However, it's interesting to see that:
This blog post covers the highlights of how Drupal can be used effectively to build a multi-domain, multilingual site for a large multinational company.
It has been nearly a month since I got back from the Randa Meetings this year, and the memories are still fresh in my mind. This was the first KDE event I have attended, and the overall experience was awesome!
This year the event took place from September 6 to 13, and it started off with me and some other fellow participants meeting up at the Zurich airport on the morning of the 6th to collect our train tickets. After a roughly two-hour train journey, which included some great scenic views from the windows, we finally reached the Randa house, where we were warmly greeted by Mario and the others who had already arrived. Lunch had already been arranged, and we enjoyed a great meal of beef steak and spaghetti.
Later that day I met some of the other developers. Until then I had only interacted with them via email/IRC conversations, and finally getting to meet them in person was quite thrilling. In the evening we went star-gazing. Torsten from the Marble group had brought his own home-made telescope and invited us to join him outside. He assembled the whole telescope from its parts, and once it was ready he showed us many stars and constellations and shared many interesting facts. When it finally got too cold for me to even feel my legs, we decided to wrap it up for the day.
We spent the following days mostly in the hacking rooms. There were several rooms allocated to different groups. I joined some of the KDE-Edu developers. I met Andreas Cord-Landwehr, Aleix Pol, and some more developers who helped me a lot with my tasks. I took up porting the remaining Edu apps to KF5/Qt5. I found that Kalzium’s port had already been started by someone, and I decided to continue it from there. By the end of the Randa Meetings I was able to complete most of the port.
We also went on trips every now and then. After long hours spent coding it was refreshing to stretch out your legs and enjoy a walk along the hilly roads. Many evenings we would go for a short trek nearby. One day we all went to Zermatt. Mario had arranged for some cars, and we spent the entire afternoon there. The return trip was some 3-4 hours of trekking, and I for one was completely exhausted. We finally reached the Randa house towards the evening, and soon we had a splendid dinner. One of the awesome things about the Randa Meetings is the food. You are always served delicious dishes that you never get tired of, and there are always fruits and coffee/tea available.
Amidst all these memorable moments the 7 days came to an end (very quickly, it seemed), and it was time for goodbyes. Some people had already left by the evening of the 12th. A few of my friends and I left the next day. This event was truly a memorable one, and I owe Mario a big thanks for organising it so perfectly. A lot of projects were completed by the developers, and the week spent there was very fruitful. I sincerely hope that such events are held again in the future, and I am looking forward to attending them.
No matter what tool you use to create a website, you still need to put time into planning before you actually start designing and building the site. If you rush to start with the design and build process you run the risk of having a project that takes more time and money than desired and generates less of a return on investment. There are key questions you need to answer to ensure that you create a clear and comprehensive website definition document.
Drupal has long been described as a content management system for developers. It’s been criticized for its Drupal-centric jargon and characterized as unfriendly to inexperienced and experienced web site creators alike. In the DrupalCon Barcelona 2007 State of Drupal address, project creator Dries Buytaert stressed the need to focus on Drupal’s usability.
Not long afterward, the first formal usability study took place at the University of Minnesota, just after the release of Drupal 6 in February 2008. Several studies of Drupal 7 were conducted in subsequent years. In June 2015, community members returned to the university for Drupal 8’s first formal evaluation.
These formal usability tests are just one measure of Drupal’s user experience. Anyone who has introduced a new site builder to Drupal, or tried to help a Dreamweaver-savvy friend get started, has a pretty good idea where existing major challenges lie. Drupal.org has methodology suggestions to empower anyone to conduct their own studies, which can take place any time. New features in Drupal 8 are evaluated as they’re introduced, as well. For example, the Drupal User Experience team has conducted more than 70 informal sessions on Drupal 8-specific changes. The formal studies, however, lend a certain gravitas to recommendations for improvements; as we return to Barcelona for DrupalCon 2015, the history from formal evaluations provides a valuable benchmark for reflecting on how far the project has come.
When I was invited to attend Drupal 8’s study, I was eager and hesitant. Eager, because who doesn't want to geek out on eye tracking feedback and all the experience-capturing equipment while spending focused time with key players who are working toward sorely needed improvements? Hesitant, because four years into the development of Drupal 8 seemed like a difficult time in the cycle to introduce meaningful change.
The Software as Craft Philadelphia Meetup generously invited me down to Philly this week and they recorded my March To Triumph As A Mentor talk.
The video is queued at 11:20 when I start speaking. But if you back up to minute 4, Dana Giordano's lightning talk about transitioning into Rails coding mid-career is insightful and worth watching.
Every Wednesday, the Drupal Security Team publishes "Security Advisories" (or SA's) to tell users about security vulnerabilities in Drupal core and contrib modules, with advice on how to solve the issue so that their site is secure.
This is the second in a series of articles about how to better understand all the information in a security advisory, so that you know how to take the appropriate action for your site!
There are several different types of security vulnerabilities, each with a cryptic (and highly technical) name like Cross Site Scripting (XSS) or SQL Injection.
There are plenty of technical articles on the internet explaining what those mean from a coder's perspective, including how to prevent them (by writing better code) or even how to exploit them.
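For the curious, the classic illustration of one of those vulnerability types looks like this (a hedged sketch, not Drupal code: SQL injection demonstrated with Python's sqlite3 module, purely for illustration):

```python
import sqlite3

# A toy in-memory database with one user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "' OR '1'='1"

# Vulnerable: the input is concatenated into the SQL string, so the
# attacker's quote characters change the query's meaning.
vulnerable = "SELECT * FROM users WHERE name = '" + user_input + "'"
rows = conn.execute(vulnerable).fetchall()
assert rows  # the injected OR clause matches every row

# Safe: a placeholder treats the input strictly as data.
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
assert safe_rows == []  # no user is literally named "' OR '1'='1"
```

The fix (parameterized queries) is exactly the kind of "writing better code" those technical articles cover; the rest of this series focuses on what the vulnerabilities mean for non-coders.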
But what do they mean for you, the site builder or site owner?
The most important question for you is: If an attacker exploits your site with a particular vulnerability, what will they be able to do to your site or users?
Of course, you should take action on any security advisory that affects your site as soon as possible (or hire someone else to do it). But what could happen if you didn't?
Some vulnerabilities would allow an attacker to completely take control over your site, whereas others would only allow them to access some non-public data. How can you tell which are which?
Read more to learn how the different vulnerability types could impact your site or users!
So. KDE has landed at Qt World Summit.
You can come and visit our booth and …
- hear about our amazing Free Qt Addons (KDE Frameworks)
- hear stories about our development tools
- meet some of our developers
- talk about KDE in general
- or just say hi!
KDE – 19 years of Qt Experience.
Strictly speaking, this is said not to be a design pattern; a more detailed explanation can be found at . It is similar to the Factory pattern, but refers to a pattern that creates different objects depending on external configuration. Some argue, along with , that it is a bad pattern and that methods like Constructor Injection should be used instead, but in practice you can see this kind of implementation quite often even in well-written open source projects.

Builder

When a constructor takes a troublesome variety of parameters, the Builder pattern is a good solution. The approach: create a Builder object, set the required parameters through setters, and then create the actual object by calling the build() method.

Delegation

When I first saw the Delegation pattern, I couldn't see how it differed from simply implementing an interface. Wikipedia has an explanation too, but it doesn't say when the pattern should be used. I found the reasons at ; to summarize:

- when you want to add processing before and after an existing object's behavior while keeping that behavior intact
- when implementing a Proxy for incompatible interfaces
- when you want to offer a simple interface over call routines that are complex to use directly

Depending on the case it seems it could be combined with subclassing, and it appears to be the pattern that mainly shows up in the Decorator pattern.

See Also
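The Builder pattern described above can be sketched as follows (a minimal sketch in Python; the Pizza/PizzaBuilder names are hypothetical, chosen purely for illustration):

```python
class Pizza:
    def __init__(self, size, toppings):
        self.size = size
        self.toppings = toppings

class PizzaBuilder:
    def __init__(self):
        # sensible defaults; setters override them as needed
        self._size = "medium"
        self._toppings = []

    def set_size(self, size):
        self._size = size
        return self  # returning self lets calls be chained

    def add_topping(self, topping):
        self._toppings.append(topping)
        return self

    def build(self):
        # build() creates the actual object from the collected parameters
        return Pizza(self._size, self._toppings)

pizza = PizzaBuilder().set_size("large").add_topping("cheese").build()
assert pizza.size == "large"
```

The chained-setter style keeps a constructor with many optional parameters readable: only the parameters that matter at the call site appear there.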
This post is loosely based on the first half of my “Finding more bugs with less work” talk for PyCon UK.
You have probably never written a significant piece of correct software.
That’s not a value judgement. It’s certainly not a criticism of your competence. I can say with almost complete confidence that every non-trivial piece of software I have written contains at least one bug. You might have written small libraries that are essentially bug free, but the chances of you having written whole programs which are bug free are tantamount to zero.
I don’t even mean this in some pedantic academic sense. I’m talking about behaviour where if someone spotted it and pointed it out to you you would probably admit that it’s a bug. It might even be a bug that you cared about.
Why is this?
Well, let’s start with why it’s not: It’s not because we don’t know how to write correct software. We’ve known how to write software that is more or less correct (or at least vastly closer to correct than the norm) for a while now. If you look at the NASA development process, they’re pretty much doing it.
Also, if you look at the NASA development process you will pretty much conclude that we can’t do that. It’s orders of magnitude more work than we ever put into software development. It’s process heavy, laborious, and does not adapt well to changing requirements or tight deadlines.
The problem is not that we don’t know how to write correct software. The problem is that correct software is too expensive.
And “too expensive” doesn’t mean “It will knock 10% off our profit margins, we couldn’t possibly do that”. It means “if our software cost this much to make, nobody would be willing to pay a price we could afford to sell it at”. It may also mean “If our software took this long to make then someone else will release a competing product two years earlier than us, everyone will use that, and when ours comes along nobody will be interested in using it”.
(“sell” and “release” here can mean a variety of things. It can mean that terribly unfashionable behaviour where people give you money and you give them a license to your software. It can mean subscriptions. It can mean ad space. It can even mean paid work. I’m just going to keep saying sell and release).
NASA can do it because when they introduce a software bug they potentially lose some combination of billions of dollars, years of work and many lives. When that’s the cost of a bug, spending that much time and money on correctness seems like a great deal. Safety critical industries like medical technology and aviation can do it for similar reasons (buggy medical technology kills people, and you don’t want your engines power cycling themselves midflight).
The rest of us aren’t writing safety critical software, and as a result people aren’t willing to pay for that level of correctness.
So the result is that we write software with bugs in it, and we adopt a much cheaper software testing methodology: We ship it and see what happens. Inevitably some user will find a bug in our software. Probably many users will find many bugs in our software.
And this means that we’re turning our users into our QA department.
Which, to be clear, is fine. Users have stated the price that they’re willing to pay, and that price does not include correctness, so they’re getting software that is not correct. I think we all feel bad about shipping buggy software, so let me emphasise this here: Buggy software is not a moral failing. The option to ship correct software is simply not on the table, so why on earth should we feel bad about not taking it?
But in another sense, turning our users into a QA department is a terrible idea.
Why? Because users are not actually good at QA. QA is a complicated professional skill which very few people can do well. Even skilled developers often don’t know how to write a good bug report. How can we possibly expect our users to?
The result is long and frustrating conversations with users in which you try to determine whether what they’re seeing is actually a bug or a misunderstanding (although treating misunderstandings as bugs is a good idea too), trying to figure out what the actual bug is, etc. It’s a time consuming process which ends up annoying the user and taking up a lot of expensive time from developers and customer support.
And that’s of course if the users tell you at all. Some users will just try your software, decide it doesn’t work, and go away without ever saying anything to you. This is particularly bad for software where you can’t easily tell who is using it.
Also, some of our users are actually adversaries. They’re not only not going to tell you about bugs they find, they’re going to actively try to keep you from finding out because they’re using it to steal money and/or data from you.
So this is the problem with shipping buggy software: Bugs found by users are more expensive than bugs found before a user sees them. Bugs found by users may result in lost users, lost time and theft. These all hurt the bottom line.
At the same time, your users are a lot more effective at finding bugs than you are, due to sheer numbers if nothing else, and as we’ve established it’s basically impossible to ship fully correct software, so we end up choosing some acceptable defect rate in the middle. This is generally determined by the point at which it is more expensive to find the next bug yourself than it is to let your users find it. At any higher or lower defect rate you could adjust your development process and make more money, and companies like making money, so if they’re competently run they will generally do the things that cause them to make it.
This means that there are only two viable ways to improve software quality:
- Make users angrier about bugs
- Make it cheaper to find bugs
I think making users angrier about bugs is a good idea and I wish people cared more about software quality, but as a business plan it’s a bit of a rubbish one. It creates higher quality software by making it more expensive to write software.
Making it cheaper to find bugs though… that’s a good one, because it increases the quality of the software by increasing your profit margins. Literally everyone wins: The developers win, the users win, the business’s owners win.
And so this is the lever we get to pull to change the world: If you want better software, make or find tools that reduce the effort of finding bugs.
Obviously I think Hypothesis is an example of this, but it’s neither the only one nor the only one you need. Better monitoring is another. Code review processes. Static analysis. Improved communication. There are many more.
But one thing that won’t improve your ability to find bugs is feeling bad about yourself and trying really hard to write correct software then feeling guilty when you fail. This seems to be the current standard, and it’s deeply counter-productive. You can’t fix systemic issues with individual action, and the only way to ship better software is to change the economics to make it viable to do so.
Edit to add: In this piece, Itamar points out that another way of making it cheaper to find bugs is to reduce the cost of when your users do find them. I think this is an excellent point which I didn’t adequately cover here, though I don’t think it changes my basic point.
In this Drupalize.Me interview, we talk with Scott Wilkinson, a builder of Drupal sites that solve problems for his freelance clientele. This interview is part of an ongoing series where we talk with a variety of people in the Drupal community about the work they do. Each interview focuses on a particular Drupal role; this one focuses on the site builder role, filled by a person who builds Drupal sites by expertly piecing together and configuring modules, themes, and settings.
This week we welcome Michael Fogleman as our PyDev of the Week! Mr. Foglebird has been helping out Python programmers prodigiously on StackOverflow for quite some time. I know I’ve appreciated some of his answers when I’ve gone to look for help. He has a large list of personal programming projects on his website that are well worth a look through. Let’s take some time to learn some more about him.
Can you tell us a little about yourself (hobbies, education, etc):
I’ve been programming since I was about 8. We had a Commodore 64 and I would copy programs out of a book. Later on, in high school, I wrote games and other (mostly graphical) programs using QBasic. Most of the games were clones: Tetris, Breakout, Galaga, Asteroids, etc. I remember that I wrote the Asteroids one before I even learned about arrays! Can you imagine? asteroid1x, asteroid1y, asteroid2x, asteroid2y. I also used WHILE loops instead of FOR loops. That’s what happens when you’re a kid and have nothing but the QBasic help documentation to learn from.
When I went off to college (NC State University) in 2000, I planned on majoring in mechanical engineering for some reason. As part of that curriculum, I took an intro to C++ class and I loved it. I realized that I should really be majoring in computer science, and that’s when I switched.
To help pay my way through school, I participated in the “cooperative education” program which means some semesters I didn’t take classes but instead worked full time at companies that partnered with the university. I first worked at a company with about 100 employees that dealt with industrial automation. It was a lot of fun because I was the only software developer yet there were a lot of opportunities for software to improve their systems and workflows. Then I worked at IBM for a while. For a poor college student, they paid pretty well. I managed to finish college with almost no debt. It took 5 years even though I only took classes for 3.5 of them, but the experience and income proved invaluable.
After college, I worked at TopCoder. I only stayed for a few years because I was doing more management and documentation than development.
After that, I worked at Advanced Liquid Logic for about five years. They had an interesting microfluidics platform for genomics and diagnostics. It was a “lab on a chip” system that could move discrete droplets of liquid around inside of a cartridge. I wrote some really cool software (in Python) that would plan these droplet movements to carry out the desired protocols. We were acquired by Illumina and I chose to leave shortly thereafter instead of moving to San Diego like they wanted me to.
Now I’m at TransLoc where I work on transit-related software and data.
I’m married and have two young kids that are a lot of fun. I still manage to find time for gardening, running and working on lots of side projects. I also really like space exploration, 80s music and all types of food.
Why did you start using Python?
My first experience with Python was in 2004 for a project in college, but I didn’t get too deep into it at that point. It was just this weird, strangely different language. In 2007 I started learning Python on the side and loved its terse, readable, powerful syntax. In 2008 I switched jobs and started working with Python full time.
What other programming languages do you know and which is your favorite?
Earlier this year I started using Go, so that’s what’s new and exciting for me right now. I found it really easy to pick up. I’ve already written a path tracer (like a ray tracer, but better) and an NES emulator in Go. I’ve also used it for some things at work. With Python and C being my primary tools for a while, Go seems to be a nice middle ground.
I’ve also done some Objective-C on the side and have some apps in the iOS and Mac app stores.
I still use Python the most.
What projects are you working on now?
At work I just finished up a really cool project that automates Demand-Response transit systems. Basically, it dynamically routes vehicles in an optimal way to service on-demand rides quickly and efficiently. Most of the backend is in Python and Flask. I wrote the vehicle scheduling algorithm in Go since it needed to be fast.
At home I’ve been playing with a HandiBot CNC machine. I use Python to generate the GCode programs that the CNC understands.
I’ve also been playing with OpenStreetMap data and shapefiles.
You can read more about my projects:
Which Python libraries are your favorite (core or 3rd party)?
I love Flask. It makes web development really fun. Requests is also really nice. wxPython is my favorite for developing desktop apps. I used it heavily at my previous job. And Shapely is awesome for dealing with 2D geometry.
I wrote my own library for developing simple OpenGL apps. It’s called “pg” and I did some cool stuff with it but haven’t worked on it lately.
Where do you see Python going as a programming language?
I haven’t thought about it much, but I have a suspicion that in 10 years or so, Python will be slowly fading away like Java seems to be today. Technology changes fast. But just like Java, it will have a strong foothold. It’s not going anywhere any time soon.
I still use Python 2. Mostly because all of our codebase at work uses 2. There are some neat things in 3, but it hasn’t been enough to tip the scales for me yet. Frankly it’s not something I worry about.
What is your take on the current market for Python programmers?
There’s definitely some demand and it’s probably still increasing. I think Django and scientific Python are the main drivers.
Is there anything else you’d like to say?
A lot of people ask me how I’m able to work on so many side projects with a family and full time job. Partly, it’s priorities. I don’t watch TV. Also, the kids can be a useful “distraction” in that when I do finally get time at the computer, I tend to spend it more wisely. Plus, they still go to bed pretty early. Other than that, I’m just really passionate about programming, so when I get interested in something, I work hard at it.
Thanks!
This is the second part of the series of improvements in warmup time and memory consumption in the PyPy JIT. This post covers recent work on sharing guard resume data that was recently merged to trunk. It will be a part of the next official PyPy release. To understand what it does, let's start with a simple example loop:

```python
class A(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def call_method(self, z):
        return self.x + self.y + z

def f():
    s = 0
    for i in range(100000):
        a = A(i, 1 + i)
        s += a.call_method(i)
```
At the entrance of the loop, we have the following set of operations:

```
guard(i5 == 4)
guard(p3 is null)
p27 = p2.co_cellvars
p28 = p2.co_freevars
guard_class(p17, 4316866008, descr=<Guard0x104295e08>)
p30 = p17.w_seq
guard_nonnull(p30, descr=<Guard0x104295db0>)
i31 = p17.index
p32 = p30.strategy
guard_class(p32, 4317041344, descr=<Guard0x104295d58>)
p34 = p30.lstorage
i35 = p34.item0
```
The above operations get executed at the entrance, each time we call f(). They ensure all the optimizations done below stay valid; as long as nothing out of the ordinary happens, they simply confirm that the world around us hasn't changed. However, if e.g. someone puts new methods on class A, any of the above guards might fail. Despite the fact that this is a very unlikely case, PyPy needs to track how to recover from such a situation. Each of those points needs to keep the full state of the optimizations performed, so we can safely deoptimize them and reenter the interpreter. This is vastly wasteful since most of those guards never fail, hence some sharing between guards has been performed.
We went a step further - when two guards are next to each other, or the operations in between them don't have side effects, we can safely redo the operations, or to put it simply, resume in the previous guard. That means every now and again we execute a few extra operations, but not storing the extra info saves quite a bit of time and memory. This is similar to the approach that LuaJIT takes, which is called sparse snapshots.
I've done some measurements on the annotating & rtyping translation steps of pypy, which is a pretty memory-hungry program that compiles a fair bit. I measured, respectively:
- total time the translation step took (annotating or rtyping)
- time it took for tracing (the total JIT time excluding backend time) at the end of rtyping
- memory the GC feels responsible for after the step. The real amount of memory consumed will always be larger; the savings coefficient is in the 1.5-2x range
Here is the table:

branch  | time annotation | time rtyping | memory annotation | memory rtyping | tracing time
default | 317s            | 454s         | 707M              | 1349M          | 60s
sharing | 302s            | 430s         | 595M              | 1070M          | 51s
win     | 4.8%            | 5.5%         | 19%               | 26%            | 17%
Obviously pypy translation is an extreme example - the vast majority of the code out there does not have that many lines of code to be jitted. However, it's at the very least a good win for us :-)
We will continue to improve the warmup performance and keep you posted!
Debian is not generally seen as a bleeding-edge distribution, but it offered a perfect combination of stability and up-to-date software in our field when we chose it as the platform for our signature verification project. Having an active Debian Developer on the team also helped ensure that the packages we use were in good shape when the freeze and then the release came, and we can still rely on Jessie images with only a few extra packages to run our software stack.
Not having to worry about the platform, we could concentrate on the core project, and I’m proud to announce that our start-up’s algorithm won this year’s Signature Verification Competition for Online Skilled Forgeries (SigWIComp2015). The more detailed story can already be read in the English business news and on index.hu, a leading Hungarian news site. We are also working on a solution for categorizing users based on cursor/finger movements to better target content, offers, and ads. This is also covered in the articles.
The verification task was not easy. The reference signatures were recorded at very low resolution and frequency, and the forgers did a very good job forging them, creating a true challenge for everyone competing. At first glance it is hard to imagine that there is usable information in such a small amount of recorded data, but our software is already better than me at telling the difference between genuine and forged signatures. It feels like when the chess program beats the programmer again and again.
I would like to thank all of you who helped make Debian an awesome universal operating system, and I hope we can keep making every release better and better!
Alexander Shorin’s work on more configurable unicode generation in Hypothesis has to do some interesting slicing of ranges of unicode categories. Doing both generation and shrinking in particular either required two distinct representations of the data or something clever. Fortunately I’d figured out the details of the sort of data structure that would let you do the clever thing a while ago, and it was just a matter of putting the pieces together.
The result is an interesting purely functional data structure based on Okasaki and Gill’s “Fast Mergeable Integer Maps”. I’m not totally sure we’ll end up using it, but the data structure is still interesting in its own right.
The original data structure, which is the basis of Data.IntMap in Haskell, is essentially a patricia trie treating fixed size machine words as strings of 0s and 1s (effectively a crit-bit trie). It’s used for implementing immutable mappings of integers with fast operations on them (O(log(n)) insert, good expected complexity on union).
With some small twists on the data structure you can do some interesting things with it.
- Ditch the values (i.e. we’re just representing sets)
- Instead of tips being a single key, tips are a range of keys start <= x < end.
- Split nodes are annotated with their size and the smallest interval [start, end) containing them.
When using this to represent sets of unicode letters this is extremely helpful – most of the time we’re just removing one or two categories, or restricting the range, which results in a relatively small number of intervals covering a very large number of codepoints.
Let T be the number of intervals and W the word size. The data structure has the following nice properties:
- Getting the size of a set is O(1) (because everything is size annotated or can have its size calculated with a single arithmetic operation)
- Indexing to an element in sorted order is O(log(T)) because you can use the size annotation of nodes to index directly into it – when indexing a split node, check the size of the left and right subtrees and choose which one to recurse to.
- Tips can automatically be collapsed to intervals in many cases, because a split node is equivalent to an interval if end = start + size, which is a cheap O(1) check
- Boolean operations are generally O(min(W, T)), like with the standard IntSet (except with intervals instead of values)
- Range restriction is O(log(T)).
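The size-annotation tricks behind the first two properties can be illustrated with a much-simplified sketch (Python; a flat sorted interval list stands in for the real patricia trie, with cumulative sizes playing the role of the size annotations on split nodes):

```python
import bisect

class IntervalSet:
    """Sorted, disjoint half-open intervals [start, end) over integers."""

    def __init__(self, intervals):
        self.intervals = intervals
        # Cumulative sizes: the analogue of size-annotating each node.
        self.cumulative = []
        total = 0
        for start, end in intervals:
            total += end - start
            self.cumulative.append(total)

    def __len__(self):
        # O(1): total size is just the last cumulative entry.
        return self.cumulative[-1] if self.cumulative else 0

    def __getitem__(self, i):
        # O(log T): binary search on the size annotations finds the
        # interval containing the i-th element in sorted order.
        if i < 0 or i >= len(self):
            raise IndexError(i)
        k = bisect.bisect_right(self.cumulative, i)
        start, _end = self.intervals[k]
        preceding = self.cumulative[k - 1] if k else 0
        return start + (i - preceding)

s = IntervalSet([(65, 91), (97, 123)])  # ASCII letters A-Z and a-z
assert len(s) == 52
assert s[0] == 65   # 'A'
assert s[26] == 97  # 'a'
```

In the actual trie the same search happens top-down: at each split node, compare the index against the left subtree's size and recurse left or right accordingly.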
Note that it isn’t necessarily the case that a tree with intervals [x, y) and [y, z) in it will compress this into the interval [x, z) because their common parent might be further up the tree.
An extension I have considered but not implemented is that you could potentially store very small subtrees as arrays in order to flatten it out and reduce indirection.
In particular the efficient indexing is very useful for both simplification and generation, and the fact that merging efficiently is possible means that we can keep two representations around: One for each permitted category (which helps give a better distribution when generating) and one for the full range (which makes it much easier to simplify appropriately).
Here is an implementation in Python. It’s not as fast as I’d like, but it’s not unreasonably slow. A C implementation would probably be a nice thing to have and is not too difficult to do (no, really. I’ve actually got a C implementation of something similar lying around), but wouldn’t actually be useful for the use case of inclusion in Hypothesis because I don’t want to add a C dependency to Hypothesis just for this.
We are finally almost there: Drupal 8 RC1 is about to be released. For us, the release candidate means that Drupal has a stable API, a feature freeze, and "should" be free of critical bugs as long as no new ones are found. That counts for Drupal 8 core. We will start with Drupal 8 projects from the release of RC1. The question is only: how do we create estimations for a system that, honestly speaking, we don't yet know with the same depth of detail as we know Drupal 7? There are many pitfalls related to the decision making of Drupal 8 architectures. We usually need many contrib modules in our Drupal applications, and this will not change in Drupal 8, but they are not yet ready and stable enough for a bug-free experience in our development team. So we are all in a difficult situation where we want to start with Drupal 8 right now on the one side, but want to be able to estimate projects reliably on the other. There need to be some trade-offs, which I want to discuss in this blog post - so please feel free to add your thoughts in the comments below.
1) Focus on Drupal core and REST
2) Try to split projects
The bigger the project we want to build with Drupal (not only Drupal 8), the bigger the risk. Since most clients in Germany want fixed prices for their projects, we need fine-grained planning of small feature chunks. Keeping project requirements as small as possible reduces the risk of overlooking details that would wreck your estimate during development. From an agile perspective, small development steps with detailed requirements also reduce the risk of building a product that nobody needs.
3) Give a discount for contributing
We will offer our clients a price up to 10% cheaper if they pay us by the hour and allow us to fix bugs in core and contrib modules. This helps us contribute to and improve the code base of Drupal 8 and contrib modules. As our clients bet their strategy on Drupal 8, both they and the community will benefit from a fast-growing, stable code base.
4) Update early and often
Whereas the update frequency of Drupal 7 is slowing down, Drupal 8 is almost here, and as more and more people use Drupal 8 and its contrib modules, more bugs will be reported and hopefully fixed. For us this means we need to update early and often, so that we see code and feature changes early and can react to them. Leaving modules outdated for a long time, even for non-security updates, adds the risk of a broken site: the bigger the difference between your current code base and the latest release, the bigger the risk that the update will break your site. It's the same rule as working with Git in a team: pull early and often, and fix issues immediately. That's why we integrate Drop Guard into our ongoing development process, not only into the deployment process for critical security updates. With the right setup, we don't need to manage our update workflow manually - Drop Guard does it. You can try Drop Guard for free during our beta period and automate your updates as well.
python-suseapi 0.22 was released last week. The version number suggests nothing special, but one important change has happened: the development repository has moved.
It now lives under the openSUSE project on GitHub, which makes it easier for potential users to find and also makes team maintenance a bit easier than under my personal account.
If you're curious what the module does: it's mostly usable only inside SUSE, providing access to some internal services. One major piece that is usable outside is the Bugzilla interface, which should one day be replaced by python-bugzilla, but for now it provides some features not available there (using web scraping).
Anyway, the code is documented on readthedocs.org, so you can see for yourself what it includes.
SASS and LESS preprocessors make front-end development much easier. To compile them to CSS automatically, you can use Grunt.js. Let's see how to set it up, using Ubuntu as an example.

1. `sudo apt-get install npm` - installs npm, the package manager for Node.js.
2. `sudo npm install -g grunt-cli` - installs grunt-cli so that Grunt can be run from any directory that contains it (this command does not install Grunt itself).
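With the CLI in place, the project itself needs Grunt and a SASS plugin installed locally (for example `npm install grunt grunt-contrib-sass grunt-contrib-watch --save-dev`) plus a Gruntfile. A minimal illustrative configuration might look like this — the file paths are assumptions, not from the original post:

```javascript
// Gruntfile.js — illustrative sketch; adjust source/destination paths to your project.
module.exports = function (grunt) {
  grunt.initConfig({
    sass: {
      dist: {
        files: {
          // destination : source (paths are assumptions)
          'css/style.css': 'scss/style.scss'
        }
      }
    },
    watch: {
      styles: {
        files: ['scss/**/*.scss'],
        tasks: ['sass'] // recompile whenever a .scss file changes
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-sass');
  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.registerTask('default', ['sass', 'watch']);
};
```

Running `grunt` in the project directory then compiles the stylesheets once and keeps watching for changes.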
This blog post is a follow-up to my DrupalCon session Next Generation Graphics: SVG.
DrupalCon Barcelona was a blast for me: I met a lot of good old friends and it recharged my Drupal batteries. Some people have asked me about the slides from my session. Sorry for the delay - I was knocked out by the drupalflu; it was not a myth. Here is some material and some thoughts about the SVG session:
Me, pretending not to be nervous:
See the slides at dcorb.github.io (with animated gifs and page transitions. But non-clickable links)
See the slides at slideshare.net (clickable links, no gifs): Next Generation Graphics: SVG

Drupal core and SVG: you can help
Drupal 8 currently has 79 SVG files in core. Most of them are SVG icons from ry5n's Libricons, used mostly in the toolbar.
Drupal 8 themes by default will look for a "logo.svg" file in the theme folder, instead of "logo.png". See Change record.
This change was introduced at the same time that we were converting the Druplicon logo from PNG to SVG.
There are plenty of graphic assets in Drupal core that could be converted to SVG, starting with the throbber icon. I'm not sure whether converting them will still be possible after Drupal 8 hits RC1, though.
And if you have a great idea for using an SVG sprite technique to avoid 17 individual HTTP requests (!) for admin users, please help here.
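For reference, one common spriting approach (a hand-written illustration, not the actual proposal in the core issue) bundles all icons into a single file of `<symbol>` elements and references each one with `<use>`, so every icon arrives in one request:

```html
<!-- icons.svg: all toolbar icons bundled into one file (icon id and path are made up) -->
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"
     style="display:none">
  <symbol id="icon-menu" viewBox="0 0 16 16">
    <path d="M1 3h14v2H1zM1 7h14v2H1zM1 11h14v2H1z"/>
  </symbol>
  <!-- ...one symbol per icon... -->
</svg>

<!-- Anywhere in the page: the single icons.svg download covers every icon -->
<svg class="toolbar-icon" width="16" height="16">
  <use xlink:href="icons.svg#icon-menu"/>
</svg>
```

The sprite can also be inlined at the top of the page, which sidesteps the cross-file `<use>` quirks that some older browsers have.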
I created a Twitter list of SVG experts, in case you are interested. They share a lot of fresh, relevant information on SVG daily - quirks, tips and demos - that can't be found anywhere else. Some are SVG Working Group members, some are developers working on browser SVG implementations, some are SVG web developers, and some are creative people testing the limits of SVG in artistic ways.