%0 Conference Proceedings %B IFIP Advances in Information and Communication Technology 378 (OSS 2012) %D 2012 %T A Linguistic Analysis on How Contributors Solve Software Problems in a Distributed Context %A Masmoudi, Héla %A Boughzala, Imed %K bug report %K bugzilla %K linguistic %K text mining %X There is little understanding of distributed problem-solving activities in Open Source communities. This study aims to provide some insight in this direction. It was applied to the context of Bugzilla, the bug tracking system of the Mozilla community. This study investigated the organizational aspects of this mediated, complex, and highly distributed context through a linguistic analysis method. The main finding of this research is that the organization of distributed problem-solving activities in Bugzilla is not based only on the hierarchical distribution of work between core and periphery participants but also on their involvement in the interactions. This involvement varies according to each participant's status in the community. That is why we distinguish their roles, as well as the established modes of managing such activity. %B IFIP Advances in Information and Communication Technology 378 (OSS 2012) %I IFIP AICT, Springer %V 378 %P 322-330 %8 09/2012 %0 Conference Paper %B 2010 7th IEEE Working Conference on Mining Software Repositories (MSR 2010) %D 2010 %T Do stack traces help developers fix bugs? %A Schröter, Adrian %A Bettenburg, Nicolas %A Premraj, Rahul %K bug fixing %K bug report %K debugging %K eclipse %K stack trace %X A widely shared belief in the software engineering community is that stack traces are much sought after by developers to support them in debugging. But limited empirical evidence is available to confirm the value of stack traces to developers.
In this paper, we seek to provide such evidence by conducting an empirical study on the usage of stack traces by developers from the ECLIPSE project. Our results provide strong evidence to this effect and also throw light on some of the patterns in bug fixing using stack traces. We expect the findings of our study to further emphasize the importance of adding stack traces to bug reports, and that, in the future, software vendors will provide more support in their products to help general users make such information available when filing bug reports. %B 2010 7th IEEE Working Conference on Mining Software Repositories (MSR 2010) %I IEEE %C Cape Town, South Africa %P 118-121 %@ 978-1-4244-6802-7 %R 10.1109/MSR.2010.5463280 %> https://flosshub.org/sites/flosshub.org/files/118-10-msr.pdf %0 Journal Article %J IEEE Transactions on Software Engineering %D 2010 %T What Makes a Good Bug Report? %A Zimmermann, Thomas %A Premraj, Rahul %A Bettenburg, Nicolas %A Just, Sascha %A Schröter, Adrian %A Weiss, Cathrin %K bug report %K Survey %X In software development, bug reports provide crucial information to developers. However, these reports widely differ in their quality. We conducted a survey among developers and users of APACHE, ECLIPSE, and MOZILLA to find out what makes a good bug report. The analysis of the 466 responses revealed an information mismatch between what developers need and what users supply. Most developers consider steps to reproduce, stack traces, and test cases as helpful, which are at the same time the most difficult for users to provide. Such insight is helpful for designing new bug tracking tools that guide users in collecting and providing more helpful information. Our CUEZILLA prototype is such a tool and measures the quality of new bug reports; it also recommends which elements should be added to improve the quality.
We trained CUEZILLA on a sample of 289 bug reports, rated by developers as part of the survey. In our experiments, CUEZILLA was able to predict the quality of 31–48% of bug reports accurately. %B IEEE Transactions on Software Engineering %I IEEE Computer Society %C Los Alamitos, CA, USA %V 36 %P 618-643 %U http://dl.acm.org/citation.cfm?id=1453146 %R 10.1109/TSE.2010.63 %> https://flosshub.org/sites/flosshub.org/files/bettenburg-fse-2008.pdf %0 Conference Paper %B Proceedings of the 30th international conference on Software engineering %D 2008 %T An approach to detecting duplicate bug reports using natural language and execution information %A Wang, Xiaoyin %A Zhang, Lu %A Xie, Tao %A Anvik, John %A Sun, Jiasu %K bug report %K duplicate bug report %K execution information %K information retrieval %K natural language %X An open source project typically maintains an open bug repository so that bug reports from all over the world can be gathered. When a new bug report is submitted to the repository, a person, called a triager, examines whether it is a duplicate of an existing bug report. If it is, the triager marks it as DUPLICATE and the bug report is removed from consideration for further work. In the literature, there are approaches exploiting only natural language information to detect duplicate bug reports. In this paper, we present a new approach that additionally involves execution information. In our approach, when a new bug report arrives, its natural language information and execution information are compared with those of the existing bug reports. Then, a small number of existing bug reports are suggested to the triager as the most similar bug reports to the new bug report. Finally, the triager examines the suggested bug reports to determine whether the new bug report duplicates an existing bug report.
We calibrated our approach on a subset of the Eclipse bug repository and evaluated it on a subset of the Firefox bug repository. The experimental results show that our approach can detect 67%-93% of duplicate bug reports in the Firefox bug repository, compared to 43%-72% using natural language information alone. %B Proceedings of the 30th international conference on Software engineering %S ICSE '08 %I ACM %C New York, NY, USA %P 461-470 %@ 978-1-60558-079-1 %U http://doi.acm.org/10.1145/1368088.1368151 %R 10.1145/1368088.1368151 %0 Conference Paper %B Proceedings of the 2008 international working conference on Mining software repositories %D 2008 %T Towards a simplification of the bug report form in eclipse %A Herraiz, Israel %A German, Daniel M. %A Gonzalez-Barahona, Jesus M. %A Robles, Gregorio %K bug fixing %K bug report %K bug tracking system %K classification %K eclipse %K msr challenge %K severity %X We believe that the bug report form of Eclipse contains too many fields and that, for some fields, there are too many options. In this MSR challenge report, we focus on the case of the severity field. That field contains seven different levels of severity. Some of them seem very similar, and it is hard to distinguish among them. Users assign severity, and developers give priority to the reports depending on their severity. However, if users cannot distinguish well among the various severity options, developers will probably assign different priorities to bugs that require the same priority. We study the mean time to close bugs reported in Eclipse, and how the severity assigned by users affects this time. The results show that, when classifying by time to close, there are fewer clusters of bugs than levels of severity. We therefore conclude that there is a need for a simpler bug report form.
%B Proceedings of the 2008 international working conference on Mining software repositories %S MSR '08 %I ACM %C New York, NY, USA %P 145-148 %@ 978-1-60558-024-1 %U http://doi.acm.org/10.1145/1370750.1370786 %R 10.1145/1370750.1370786 %0 Conference Paper %B Proceedings of the 28th international conference on Software engineering %D 2006 %T Who should fix this bug? %A Anvik, John %A Hiew, Lyndon %A Murphy, Gail C. %K bug fixing %K bug report %K bug report assignment %K bug triage %K eclipse %K Firefox %K gcc %K issue tracking %K machine learning %K problem tracking %X Open source development projects typically support an open bug repository to which both developers and users can report bugs. Each report that appears in this repository must be triaged to determine whether it requires attention and, if it does, which developer will be assigned responsibility for resolving it. Large open source developments are burdened by the rate at which new bug reports appear in the bug repository. In this paper, we present a semi-automated approach intended to ease one part of this process, the assignment of reports to a developer. Our approach applies a machine learning algorithm to the open bug repository to learn the kinds of reports each developer resolves. When a new report arrives, the classifier produced by the machine learning technique suggests a small number of developers suitable to resolve the report. With this approach, we have reached precision levels of 57% and 64% on the Eclipse and Firefox development projects, respectively. We have also applied our approach to the gcc open source development with less positive results. We describe the conditions under which the approach is applicable and also report on the lessons we learned about applying machine learning to repositories used in open source development.
%B Proceedings of the 28th international conference on Software engineering %S ICSE '06 %I ACM %C New York, NY, USA %P 361-370 %@ 1-59593-375-1 %U http://doi.acm.org/10.1145/1134285.1134336 %R 10.1145/1134285.1134336 %0 Journal Article %J Proceedings of the 2nd ICSE Workshop on Open Source %D 2002 %T High Quality and Open Source Software Practices %A Halloran, T. %A Scherlis, W. %K apache %K bug report %K bug tracker %K bug tracking system %K feature requests %K gcc %K gnome %K kde %K lines of code %K linux %K loc %K mozilla %K netbeans %K perl %K position paper %K python %K sloc %K source code %K Survey %K tomcat %K xfree86 %X Surveys suggest that, according to various metrics, the quality and dependability of today's open source software is roughly on par with commercial and government-developed software. What are the prospects for advancing to much higher levels of quality in open source software? More specifically, what attributes must be possessed by quality-related interventions for them to be feasibly adoptable in open source practice? In order to identify some of these attributes, we conducted a preliminary survey of the quality practices of a number of successful open source projects. We focus, in particular, on attributes related to adoptability by the open source practitioner community. %B Proceedings of the 2nd ICSE Workshop on Open Source %8 2002 %> https://flosshub.org/sites/flosshub.org/files/HalloranScherlis.pdf
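The Wang et al. (2008) entry above describes ranking existing bug reports by similarity to a new report's natural-language text and suggesting the top matches to a triager. A minimal sketch of that natural-language half only, using TF-IDF weighting and cosine similarity; the function names, weighting scheme, and toy reports below are illustrative assumptions, not details from the paper, and the paper's execution-information comparison is omitted entirely:

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase alphanumeric word tokens; a deliberately crude stand-in
    # for real natural-language preprocessing (stemming, stop words, etc.).
    return ''.join(c if c.isalnum() else ' ' for c in text.lower()).split()

def tfidf_vectors(docs):
    # One sparse TF-IDF vector (dict: term -> weight) per document.
    term_counts = [Counter(tokenize(d)) for d in docs]
    n = len(docs)
    df = Counter()
    for tc in term_counts:
        df.update(tc.keys())  # document frequency of each term
    return [{t: (1 + math.log(c)) * math.log(n / df[t]) for t, c in tc.items()}
            for tc in term_counts]

def cosine(a, b):
    # Cosine similarity between two sparse vectors.
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_duplicates(new_report, existing_reports, top_k=3):
    # Suggest indices of the top_k existing reports most similar to the new one.
    vecs = tfidf_vectors(existing_reports + [new_report])
    query = vecs[-1]
    scored = sorted(((cosine(query, v), i) for i, v in enumerate(vecs[:-1])),
                    reverse=True)
    return [i for _, i in scored[:top_k]]
```

Given existing reports such as "crash when opening large file" and "button label truncated in dialog", a new report "app crash on opening a large file" would rank the first one as the most likely duplicate for the triager to inspect.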