IOI'2003 Competition Survey

Please fill this out at your convenience and return it to the box at the IOI Information Center.
Your feedback will help improve future IOI competitions.
  1. Are you a:

          0.0%(0) Contestant     or     100.0%(29) Delegation Leader?       0.0%(0) [blank]

  2. Environment:

    Check all combinations of operating systems, languages, and editors/debuggers that you used in either competition round:
    Windows editors
                         C          C++        Pascal
      rhide              10.3%(3)   13.8%(4)   6.9%(2)
      emacs              0.0%(0)    0.0%(0)    0.0%(0)
      Free Pascal IDE                          13.8%(4)
      other              3.4%(1)    3.4%(1)    6.9%(2)

    Linux editors
                         C          C++        Pascal
      rhide              10.3%(3)   13.8%(4)   6.9%(2)
      emacs              3.4%(1)    3.4%(1)    3.4%(1)
      xemacs             3.4%(1)    3.4%(1)    3.4%(1)
      vi                 10.3%(3)   6.9%(2)    3.4%(1)
      joe                3.4%(1)    3.4%(1)    3.4%(1)
      other              6.9%(2)    6.9%(2)    6.9%(2)

    Windows debuggers
                         C          C++        Pascal
      rhide              6.9%(2)    13.8%(4)   6.9%(2)
      gdb                0.0%(0)    0.0%(0)    0.0%(0)
      Free Pascal IDE                          13.8%(4)

    Linux debuggers
                         C          C++        Pascal
      rhide              6.9%(2)    10.3%(3)   6.9%(2)
      gdb                6.9%(2)    13.8%(4)   6.9%(2)
      ddd                3.4%(1)    3.4%(1)    3.4%(1)
    Do you think it is useful to allow contestants to bring in:
                      Yes         No          [blank]
      dictionaries    75.9%(22)   17.2%(5)    6.9%(2)
      keyboards       75.9%(22)   17.2%(5)    6.9%(2)
      mice            55.2%(16)   34.5%(10)   10.3%(3)

    Did you bring in:
                      Yes         No          [blank]
      dictionaries    3.4%(1)     48.3%(14)   48.3%(14)
      keyboards       6.9%(2)     44.8%(13)   48.3%(14)
      mice            6.9%(2)     44.8%(13)   48.3%(14)
    One idea for the future is to run only Linux in the competition, and to provide a
    bootable CD-ROM with documentation for contestants to use for training.
      Yes     No     [blank]  
        Would you be satisfied using Linux if rhide and the Free Pascal IDE are provided?  48.3%(14)   24.1%(7)   27.6%(8) 
        Would you be satisfied using Linux with standard Linux editors (and no IDEs)?  37.9%(11)   34.5%(10)   27.6%(8) 

    Another idea for the future is to allow submissions in Java.
      Yes     No     [blank]  
        Would you use Java if it were provided?  27.6%(8)   48.3%(14)   24.1%(7) 
        Would you need a Java IDE?  17.2%(5)   58.6%(17)   24.1%(7) 
        Even if you did not use Java, would you support making it available to others?  72.4%(21)   10.3%(3)   17.2%(5) 

    Was it useful to have the following available on the competition server web page:
      Yes     No     [blank]  
        IOI documents  72.4%(21)   0.0%(0)   27.6%(8) 
        Manuals on tools (e.g. rhide)  75.9%(22)   0.0%(0)   24.1%(7) 
        Programming language references  75.9%(22)   0.0%(0)   24.1%(7) 

    What other tools would you like to have available?

    • A list of known rhide bugs and how to work around them.
    • Far Manager (www.rarsoft.com)
    • Please install common GUI programmers' editors, e.g., Kate. Also Kdbg.
    • Win: Far Manager or similar
    • Full texts of the tests (after the competition).
    • Test jigs for interactive questions.
    • vi was missing on the Windows systems. In Korea, vi is installed on Windows systems as well.

    Please give any other feedback about the environment.

    • The environment should be released long before the actual date so that we can set it up for contestants to practice.
    • We used many other Windows/Linux debuggers and editors in the first round, which is held at the students' own schools with whatever environment is available. We used Kate as the Linux editor and Kdbg as the Linux debugger.

  3. Tasks:

    Each task was rated on three five-point scales: Understandability
    (Easy ... Hard), Difficulty (Easy ... Hard), and Enjoyable?
    (Loved ... Hated). The last column counts blank responses.

    Path Maintenance
      Understandability  27.6%(8)    20.7%(6)    27.6%(8)    17.2%(5)    3.4%(1)    3.4%(1)
      Difficulty         27.6%(8)    41.4%(12)   17.2%(5)    10.3%(3)    0.0%(0)    3.4%(1)
      Enjoyable?         37.9%(11)   31.0%(9)    20.7%(6)    3.4%(1)     0.0%(0)    6.9%(2)

    Comparing Code
      Understandability  34.5%(10)   17.2%(5)    24.1%(7)    13.8%(4)    6.9%(2)    3.4%(1)
      Difficulty         0.0%(0)     6.9%(2)     10.3%(3)    48.3%(14)   31.0%(9)   3.4%(1)
      Enjoyable?         10.3%(3)    13.8%(4)    37.9%(11)   20.7%(6)    10.3%(3)   6.9%(2)

    Reverse
      Understandability  37.9%(11)   20.7%(6)    31.0%(9)    0.0%(0)     6.9%(2)    3.4%(1)
      Difficulty         6.9%(2)     10.3%(3)    41.4%(12)   24.1%(7)    13.8%(4)   3.4%(1)
      Enjoyable?         41.4%(12)   27.6%(8)    17.2%(5)    6.9%(2)     0.0%(0)    6.9%(2)

    Guess Which Cow
      Understandability  24.1%(7)    41.4%(12)   13.8%(4)    6.9%(2)     6.9%(2)    6.9%(2)
      Difficulty         13.8%(4)    20.7%(6)    41.4%(12)   13.8%(4)    3.4%(1)    6.9%(2)
      Enjoyable?         20.7%(6)    55.2%(16)   17.2%(5)    0.0%(0)     0.0%(0)    6.9%(2)

    Amazing Robots
      Understandability  20.7%(6)    17.2%(5)    41.4%(12)   6.9%(2)     10.3%(3)   3.4%(1)
      Difficulty         0.0%(0)     13.8%(4)    31.0%(9)    24.1%(7)    24.1%(7)   6.9%(2)
      Enjoyable?         13.8%(4)    20.7%(6)    24.1%(7)    17.2%(5)    17.2%(5)   6.9%(2)

    Seeing the Boundary
      Understandability  34.5%(10)   24.1%(7)    20.7%(6)    13.8%(4)    3.4%(1)    3.4%(1)
      Difficulty         3.4%(1)     20.7%(6)    34.5%(10)   24.1%(7)    13.8%(4)   3.4%(1)
      Enjoyable?         3.4%(1)     37.9%(11)   24.1%(7)    17.2%(5)    10.3%(3)   6.9%(2)

    Which task did you like most?

      maintain    24.1%(7)
      code        3.4%(1)
      reverse     34.5%(10)
      guess       17.2%(5)
      robots      17.2%(5)
      boundary    0.0%(0)
      [blank]     3.4%(1)

    Which task did you like least?

      maintain    6.9%(2)
      code        17.2%(5)
      reverse     6.9%(2)
      guess       0.0%(0)
      robots      37.9%(11)
      boundary    20.7%(6)
      [blank]     10.3%(3)

  4. Grading System:

    Each component was rated on three five-point scales: Usability
    (Easy ... Hard), Functionality (Good ... Bad), and Responsiveness
    (Fast ... Slow). The last column counts blank responses.

    Submission
      Usability       34.5%(10)   0.0%(0)    3.4%(1)   0.0%(0)   0.0%(0)   62.1%(18)
      Functionality   27.6%(8)    3.4%(1)    6.9%(2)   0.0%(0)   0.0%(0)   62.1%(18)
      Responsiveness  20.7%(6)    10.3%(3)   0.0%(0)   0.0%(0)   0.0%(0)   69.0%(20)

    Test Runs
      Usability       31.0%(9)    0.0%(0)    3.4%(1)   3.4%(1)   0.0%(0)   62.1%(18)
      Functionality   24.1%(7)    3.4%(1)    6.9%(2)   0.0%(0)   3.4%(1)   62.1%(18)
      Responsiveness  20.7%(6)    3.4%(1)    3.4%(1)   3.4%(1)   0.0%(0)   69.0%(20)

    Print/Backup
      Usability       31.0%(9)    3.4%(1)    3.4%(1)   0.0%(0)   0.0%(0)   62.1%(18)
      Functionality   24.1%(7)    6.9%(2)    6.9%(2)   0.0%(0)   0.0%(0)   62.1%(18)
      Responsiveness  17.2%(5)    10.3%(3)   0.0%(0)   0.0%(0)   3.4%(1)   69.0%(20)

    Analysis Mode
      Usability       24.1%(7)    13.8%(4)   3.4%(1)   0.0%(0)   0.0%(0)   58.6%(17)
      Functionality   17.2%(5)    13.8%(4)   6.9%(2)   3.4%(1)   0.0%(0)   58.6%(17)
      Responsiveness  20.7%(6)    6.9%(2)    3.4%(1)   3.4%(1)   0.0%(0)   65.5%(19)


    Each report was rated on two five-point scales, Presentation
    (Good ... Bad) and Content (Good ... Bad). The last column counts
    blank responses.

    Printed score sheets
      Presentation   58.6%(17)   6.9%(2)    6.9%(2)    0.0%(0)   3.4%(1)   24.1%(7)
      Content        58.6%(17)   10.3%(3)   6.9%(2)    0.0%(0)   0.0%(0)   24.1%(7)

    Online grading results
      Presentation   34.5%(10)   10.3%(3)   10.3%(3)   0.0%(0)   0.0%(0)   44.8%(13)
      Content        31.0%(9)    10.3%(3)   13.8%(4)   0.0%(0)   0.0%(0)   44.8%(13)

      Yes     No     [blank]  
    Did you use analysis mode?  41.4%(12)   24.1%(7)   34.5%(10) 
    Was analysis mode helpful to you?      48.3%(14)   0.0%(0)   51.7%(15) 

    Should analysis mode have additional features? (If so, what?)

    • Analysis mode was much more useful with the additions after the second day.
    • Analysis test runs should not stop on the first error. An option to run test cases one by one would be a good way to solve the problem. Other than that, we're happy.
    • Show results on all test cases; run through all cases without stopping at first error.
    • Show all the results for all the test cases.
    • When a contestant's program is resubmitted, it is only tested up to the first case where it fails. We would like it to be tested against all test cases, regardless of correctness.
    • Full texts of the tests.

    Please give any other feedback about the grading system.

    • I'm a leader; I would've used Linux.
    • I propose returning to weighting test scores by their difficulty. Also, I propose not including tests whose answers can be guessed without solving the task (this happened in "code" and in "maintain").
    • A countdown clock that updates without refreshing.

  5. Please give any other feedback you have about the IOI 2003 Competition.

    • A badly organized IOI, especially regarding the security of the problem set. Some of the medal recipients are not worthy winners, because cheating has definitely happened. You could say that you are trusting that all will be honest, but you are tempting people to commit sin, and that is no good. The responsibility lies with the organizer to ensure a fair contest. In this respect, you failed totally.
    • All was conducted very well.
    • Asian food should be provided. :O)
    • During the task translation it was very easy for the contestants to contact their leaders or the guests. I think this wasn't good. The network drive was available for the students!!! These things were much better organized at the last IOIs. There was no communication room to meet the leaders of the other countries, no possibility to go to a pub. The trip to Chicago was very nice.
    • The English versions of the tasks were prepared quite badly: lots of formatting and stylistic issues. Our suggestion for the ISC is to spend much more time improving task statements. However, it was a really good idea to start translation earlier.
    • I'm not sure if the following comment comes within the scope of this survey, but I'll make it anyway. I think that lately too much emphasis is being placed on having the GA meetings proceed "efficiently" and finish quickly. I think this has brought us, on several occasions, dangerously close to actually stifling useful discussions. I personally wouldn't mind if the GA meetings lasted longer, if this gave the delegates more time for discussion. As things stand now, the GA seems to be moving in the direction of becoming a body that expresses itself by acclamation - by approving whatever has been placed before it by bodies such as the IC and ISC - rather than being a body that actually conducts meaningful debate and nontrivial voting.
    • Lack of security: separation between students and leader was insufficient. Very helpful staff.
    • Poor survey form - many questions are not applicable to non-students. Perhaps separate surveys for leaders/students.
    • The practice session was too short. Also, having each competitor in a different room made it difficult to coordinate their experiences, transfer observations/questions from one student to another, etc.
    • The practice session was way too short. In effect it lasted only an hour and ten minutes. I would've liked 2 full hours for the students to get used to the environment and an additional 30 minutes for the analysis mode.
    • V.G.
    • Why did you have the leaders fill this out? The questions are inappropriate. Interactive problems should use stdin/stdout.
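    Regarding the last comment: an interactive task over stdin/stdout means the contestant's program exchanges plain text with the grader through its standard streams instead of linking against a grader library, flushing output after every query. Below is a minimal, hypothetical C sketch of such a protocol; the GUESS command, the "<"/"="/">" replies, and the 1..1000 range are invented for illustration and are not from any IOI 2003 task.

      /* Hypothetical interactive solution speaking only over stdin/stdout.
       * Assumed protocol: print "GUESS x"; the grader replies "<" if the
       * secret is smaller than x, ">" if larger, "=" if equal. */
      #include <stdio.h>

      int main(void) {
          int lo = 1, hi = 1000;          /* assumed secret range */
          char reply[8];

          while (lo <= hi) {
              int mid = lo + (hi - lo) / 2;
              printf("GUESS %d\n", mid);  /* one query per line */
              fflush(stdout);             /* flush, or both sides deadlock */
              if (scanf("%7s", reply) != 1)
                  return 1;               /* protocol failure */
              if (reply[0] == '=')
                  return 0;               /* found the secret */
              if (reply[0] == '<')
                  hi = mid - 1;           /* secret is below mid */
              else
                  lo = mid + 1;           /* secret is above mid */
          }
          return 0;
      }

    The point of the suggestion is that such a program needs no special libraries or linking steps: the same binary can be exercised by hand or by a simple test jig that pipes its stdin/stdout, which also speaks to the earlier request for test jigs for interactive questions.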