Wednesday, September 19, 2007

conclusion questions

- what did I learn today?
- write 2 positive things about the symposium
- write 2 negative/critical things

Papers (session 1)

Creating Predictive Models from Automated Static Analysis Alerts to Identify Vulnerability- and Attack-Prone Components
Michael Gegick p. 1
Empirical Studies of a Flexible Method for Software Effort Estimation by Analogy
Jingzhou Li, Guenther Ruhe p. 9
Understanding the Role of Coordinating Mechanisms in Software Development Teams
Nils Brede Moe p. 17
Proposal of a Review Process of Empirical Studies in Software Engineering
Anna Grimán Padua p. 25
Preliminary Results for a Scale that Measures the Quality of Controlled Experiments in Computing and Health Informatics
Keith Lui p. 52
Agile Processes and Aspects of Innovation in Software-based New Product Development
Tor Erlend Fægri p. 60
Software Engineering Processes under the Influence of Aesthetics and Art Projects
Salah Uddin Ahmed and Anna Trifonova p. 68
Aggregation Process with Multiple Evidence Levels for Experimental Studies in Software Engineering
Enrique Fernández p. 75

Papers (session 2)

Where Top-down Process Improvement Meets the Bottom-up problems?
Eugenia Egorova p. 33
Integrating Quantitative and Qualitative methods in Empirical Software Engineering
María Lázaro Gómez p. 38
Using Visual Metaphors Based on Metrics and Heuristics to Enhance Software Comprehension Activities
Glauco de Figueiredo Carneiro and Manoel Gomes de Mendonça Neto p. 45
Empirical Studies of Test Execution Effort Estimation Based on Test Characteristics and Risk Factors
Eduardo Aranha and Paulo Borba p. 82
Software Defect Prediction Modeling
Burak Turhan p. 90
Decision Support Input and Analysis of Late Architecture Changes
Byron J. Williams p. 96
The effects of Software Design Complexity on Defects in Open Source Systems
Normi Sham Awang Abu Bakar and Clive Boughton p. 104

summary from session 2

some general topics that arose

presentation: number of slides, number of words per slide, eye contact and body language, avoid reading the slides, avoid walking into the projector light, colors, outline
Eugenia, Maria (# slides), Glauco (walks to...), Eduardo (table), Burak (color), Byron (outline)
when you have only 10 minutes to present, you should go straight to the point

relevance of the problem
- Eduardo/testing, Byron/architecture


generalization/threats to validity
- combination of studies (Eugenia, Glauco, Eduardo)

publishing agreements:
- see Eugenia's work and Burak's
- for OSS, see Normi's work
planning and risks in research:
- iterations (see Eugenia's work)
- how many studies are you going to do? (see Glauco's work; see Eduardo's (6 studies?))
- how many hypotheses? (see Glauco's work) From a statistical point of view, you should only use a piece of data once; be careful when formulating hypotheses (see the sketch after this list).
- realistic? Eduardo (which characteristics would you remove?)
- research questions
  Eduardo (avoid binary answers)
  Burak's questions: specific vs. general
- plan (Burak's picture, Byron's picture)
- start and finish
- how to choose the research method? we should focus more on this
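As an aside on the hypotheses point above, here is our own illustrative sketch (not material from any of the presented papers). If you test k hypotheses on the same data, each at significance level alpha, the chance of at least one false positive (the family-wise error rate) grows roughly as 1 - (1 - alpha)^k for independent tests; a Bonferroni correction compensates by testing each hypothesis at alpha/k:

    # Illustrative only: family-wise error rate when the same data feeds
    # k independent hypothesis tests, each at significance level alpha.
    alpha = 0.05
    for k in (1, 5, 10, 20):
        fwer = 1 - (1 - alpha) ** k   # P(at least one false positive)
        bonferroni = alpha / k        # corrected per-test significance level
        print(f"k={k:2d}  FWER ~ {fwer:.2f}  Bonferroni alpha = {bonferroni:.4f}")

At k = 10 the family-wise error rate is already about 0.40, which is why a piece of data should feed only one test unless the significance level is corrected.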

theory/literature:
see Glauco's work; very good in Byron's paper, according to Eugenia

reproducible?
- Eduardo's work

- important to think about language
especially when reviewing hypotheses

- data collection and analysis procedure
see Burak's work
Normi's work/questionnaire
open source is easier (Normi) because there are no confidentiality issues

- metrics
metrics that change over time (for example, with the programming language)
see Burak's work; cyclomatic complexity (see Byron's paper and the sketch below)
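Since cyclomatic complexity recurs in several of these works, here is a minimal sketch of McCabe's metric, V(G) = decision points + 1. It is our own illustration (not code from Burak's or Byron's work), and the choice of which AST node types count as decisions is a simplifying assumption:

    import ast

    # Node types treated as decision points (a deliberately simplified set).
    DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)

    def cyclomatic_complexity(source):
        """Approximate McCabe complexity: V(G) = decision points + 1."""
        tree = ast.parse(source)
        decisions = sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))
        return decisions + 1

    print(cyclomatic_complexity("if x:\n    y = 1\nelse:\n    y = 2\n"))  # -> 2

Such a count depends on the programming language and on which constructs are treated as branches, which is exactly why metrics that change over time or across languages are hard to compare.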

- commonalities
Normi, Eduardo, Burak, Byron
Normi's work is metrics-oriented

- importance of getting feedback/criticism from a community wider than one's supervisor and friends / peer review
interaction between students (via the blog?)

Wednesday, September 12, 2007

New Agenda

For organizational reasons, the durations of some sessions have been changed. Also, a tentative schedule for the presentations had to be provided. Here it is:

9:00 - 9:30 Plenary Introduction


9:30 - 10:30 2 Parallel Sessions

Session 1A
Experts: Marcela Genero and Forrest Shull

Creating Predictive Models from Automated Static Analysis Alerts to Identify Vulnerability- and Attack-Prone Components
Michael Gegick, North Carolina State University, USA

Empirical Studies of a Flexible Method for Software Effort Estimation by Analogy
Jingzhou Li and Guenther Ruhe, University of Calgary, Canada


Session 2A
Experts: Letizia Jaccheri and James Miller

Where Top-down Process Improvement Meets the Bottom-up problems?
Eugenia Egorova, Politecnico di Torino, Italy

Integrating Quantitative and Qualitative methods in Empirical Software Engineering
María Lázaro Gómez, Universidad Rey Juan Carlos, Spain


10:30 - 11:00 Coffee Break


11:00 - 12:30 2 Parallel Sessions

Session 1B
Experts: Marcela Genero and Forrest Shull

Understanding the Role of Coordinating Mechanisms in Software Development Teams
Nils Brede Moe, SINTEF ICT, Norway


Proposal of a Review Process of Empirical Studies in Software Engineering
Anna Grimán Padua, Universidad Simón Bolívar, Venezuela

Preliminary Results for a Scale that Measures the Quality of Controlled Experiments in Computing and Health Informatics
Keith Lui, University of Western Sydney, Australia



Session 2B
Experts: Letizia Jaccheri and James Miller

Using Visual Metaphors Based on Metrics and Heuristics to Enhance Software Comprehension Activities
Glauco de Figueiredo Carneiro and Manoel Gomes de Mendonça Neto, Universidade Salvador, Brazil


Empirical Studies of Test Execution Effort Estimation Based on Test Characteristics and Risk Factors
Eduardo Aranha and Paulo Borba, Federal University of Pernambuco, Brazil


12:30 - 13:30 Lunch

13:30 - 15:00 2 Parallel Sessions

Session 1C
Experts: Marcela Genero and Forrest Shull

Agile Processes and Aspects of Innovation in Software-based New Product Development
Tor Erlend Fægri, SINTEF ICT, Norway


Software Engineering Processes under the Influence of Aesthetics and Art Projects
Salah Uddin Ahmed and Anna Trifonova, NTNU, Norway


Session 2C
Experts: Letizia Jaccheri and James Miller

Software Defect Prediction Modeling
Burak Turhan, Bogazici University, Turkey


Decision Support Input and Analysis of Late Architecture Changes
Byron J. Williams, Mississippi State University, USA



15:00 - 15:30 Coffee break


15:30 - 16:30 2 Parallel Sessions

Session 1D
Experts: Marcela Genero and Forrest Shull

Aggregation Process with Multiple Evidence Levels for Experimental Studies in Software Engineering
Enrique Fernández, Universidad de Buenos Aires, Argentina

Discussion, lessons learned, and conclusions


Session 2D
Experts: Letizia Jaccheri and James Miller

The effects of Software Design Complexity on Defects in Open Source Systems
Normi Sham Awang Abu Bakar and Clive Boughton, Australian National University, Australia

Discussion, lessons learned, and conclusions



16:30 - 17:30 Plenary Conclusions

Tuesday, September 11, 2007

blogging and research

Dear students,
I have been working with blogging and research for the last three years. My idea is that we should use this blog to share information among ourselves.

It is always difficult to be informal in research, as research is a systematic process and there are many formal rules around publishing. At the same time, communication and the exchange of ideas must also happen in informal settings. I invite you to write on this blog; we are open to critical suggestions too.

Looking forward to meeting you all in Madrid, Letizia Jaccheri

Thursday, September 6, 2007

Agenda

Dear All:

Just a few notes on how IDoESE 2007 will be organized. The students participating in the Symposium can use this blog to post their comments on the papers they are required to read.

Sandro

Here is the schedule:

9:00 - 9:30 Plenary introduction

9:30 - 10:30 2 parallel sessions
10:30 - 11:00 Coffee Break
11:00 - 13:00 2 parallel sessions

13:00 - 14:00 Lunch

14:00 - 15:30 2 parallel sessions
15:30 - 16:00 Coffee break

16:00 - 17:00 Plenary session
17:00 - 17:30 Conclusions

Here is how the time reserved for each PhD thesis will be organized.
The author of the thesis will be required to give a concise presentation of his or her work (about 10 minutes). Then, two other PhD students will talk about the paper's proposal, taking two different roles: one will highlight possible critical aspects of the paper, while the other will highlight its positive aspects.
Each student will therefore have three items of homework, to be done before the Symposium:

1) Prepare a presentation for his/her topic
2) Prepare to be an enthusiast for one other submission in the group
3) Prepare a critique of one other submission in the group.

To help in the reviewing work, here are a few guidelines (where applicable):

Key terms – Review the correctness and consistency of the terms and definitions presented in the plan, and their meaning. Internal consistency of the terms, as well as typos, should also be reviewed.

Coverage of the background literature – Review the gaps identified in the literature review and which gaps the research study particularly tackles. Review whether the motivation for the proposed study is grounded in the literature review. Review the main contributions identified in the literature that are relevant to the proposed study.

Scope – Review the scope of the study in terms of its applicability to the identified gap in the literature.

Research questions – Review the proposed research questions and how they were derived.

Research design – Review the proposed research design (including methods) and its relevance to the research question being studied.

How well are the objectives and research questions addressed by the empirical work in the plan?

Do the empirical study plans and arrangements correspond to the best practices in empirical research?

How are validity threats addressed?

Novelty of contribution – Review the novelty of the study's contributions. How is the generalizability of the results handled?

What is missing from the paper, and why?

What needs to be expanded?

What could be deleted, or minimized?

Could someone replicate the study or reproduce the results?

Are the measurements technically or statistically valid? What biases exist? Can they be counteracted? What confounding variables or features exist?