IOI'2003 Competition Survey

Please fill this out at your convenience and return it to the box at the IOI Information Center.
Your feedback will help improve future IOI competitions.
  1. Are you a:

          100.0%(134) Contestant     or     0.0%(0) Delegation Leader?       0.0%(0) [blank]

          (All percentages in this report are out of the 134 responses received.)

  2. Environment:

    Check all combinations of operating systems, languages, and editors/debuggers that you used in either competition round:
    Windows editors
      C     C++   Pascal
    rhide  7.5%(10)   20.1%(27)   0.7%(1)
    emacs  0.0%(0)   3.0%(4)   0.7%(1)
    Free Pascal IDE  -   -   32.8%(44)
    other  1.5%(2)   3.7%(5)   3.0%(4)

    Linux editors
      C     C++   Pascal
    rhide  7.5%(10)   9.0%(12)   1.5%(2)
    emacs  0.7%(1)   6.7%(9)   1.5%(2)
    xemacs  1.5%(2)   5.2%(7)   0.0%(0)
    vi  9.0%(12)   7.5%(10)   2.2%(3)
    joe  0.7%(1)   0.0%(0)   0.0%(0)
    other  0.0%(0)   2.2%(3)   0.7%(1)

    Windows debuggers
      C     C++   Pascal
    rhide  4.5%(6)   16.4%(22)   0.0%(0)
    gdb  0.0%(0)   1.5%(2)   0.7%(1)
    Free Pascal IDE  -   -   29.1%(39)

    Linux debuggers
      C     C++   Pascal
    rhide  6.7%(9)   6.0%(8)   1.5%(2)
    gdb  6.7%(9)   9.0%(12)   0.0%(0)
    ddd  0.7%(1)   3.0%(4)   0.0%(0)
    Do you think it is useful to allow contestants to bring in:
      Yes     No     [blank]  
    dictionaries  63.4%(85)   34.3%(46)   2.2%(3) 
    keyboards  70.9%(95)   28.4%(38)   0.7%(1) 
    mice  37.3%(50)   59.0%(79)   3.7%(5) 
    Did you bring in:
      Yes     No     [blank]  
    dictionaries  6.7%(9)   91.0%(122)   2.2%(3) 
    keyboards  5.2%(7)   91.8%(123)   3.0%(4) 
    mice  0.0%(0)   96.3%(129)   3.7%(5) 
    One idea for the future is to run only Linux in the competition and to provide a
    bootable CD-ROM with documentation for contestants to use for training.
      Yes     No     [blank]  
        Would you be satisfied using Linux if rhide and the Free Pascal IDE were provided?  56.7%(76)   32.8%(44)   10.4%(14) 
        Would you be satisfied using Linux with standard Linux editors (and no IDEs)?  31.3%(42)   60.4%(81)   8.2%(11) 

    Another idea for the future is to allow submissions in Java.
      Yes     No     [blank]  
        Would you use Java if it were provided?  14.2%(19)   84.3%(113)   1.5%(2) 
        Would you need a Java IDE?  16.4%(22)   75.4%(101)   8.2%(11) 
        Even if you did not use Java, would you support making it available to others?  70.1%(94)   26.9%(36)   3.0%(4) 

    Was it useful to have the following available on the competition server web page:
      Yes     No     [blank]  
        IOI documents  85.1%(114)   14.2%(19)   0.7%(1) 
        Manuals on tools (e.g. rhide)  73.1%(98)   26.1%(35)   0.7%(1) 
        Programming language references  82.8%(111)   17.2%(23)   0.0%(0) 

    What other tools would you like to have available?

    • => Ultra Edit <=
    • A tool that shows the exact process time and memory use of a program.
    • All in all satisfying - no complaints. Nice noise-reduction keyboards.
    • Allow downloading of sample data instead of requiring contestants to enter the sample data.
    • Ask clarifications through server web page.
    • Better Windows text editors - RHIDE + Emacs are buggy and very unstable, and they lack features like mouse-wheel scrolling. Suggestions: Edit Plus 2, Write Source, Bloodshed Dev C/C++.
    • Delphi
    • Far Manager
    • File Manager (Total Commander)
    • It would be beneficial to include MS Visual Studio, as all of the Windows environments run in DOS emulation and are therefore buggy.
    • Kate, Kdebugger, why not just install KDE?
    • Kate editor
    • Microsoft Visual C++
    • More test data; Delphi
    • Perl for generating test data.
    • Rhide on Windows worked really badly.
    • SciTE (Linux editor)
    • The IOI problems and source codes for every problem.
    • To have the files of tests after the end of the contest. A GUI IDE (not text-based).
    • Turbo Pascal
    • Visual Studio (less bugs than Rhide)
    • Well... the Rhide sure is buggy - thanks for the tips (in Addendum 2 or whatever)...
    • Windows (Total) Commander; standard I/O test program during competition.
    • Windows Commander, or any usable file manager instead of Windows Explorer.
    • Windows File Managers (like "Far Manager")
    • a Windows IDE for C++
    • a stable IDE for Windows
    • different programs for writing code such as Edit Plus 2 (it's freeware)
    • eclipse
    • energy drinks (Red Bull, ...)
    • files with full texts of tests after competition
    • files with full texts of tests and correct answers
    • files with new texts of tests after competition
    • indexed help of programming language commands
    • love Microsoft Visual Studio
    • music
    • problems, sample test cases.
    • profilers
    • the Bloodshed Dev-C++ IDE for Windows (www.bloodshed.net)
    • vim in windows, visual studio, visual C++, Borland C++ Builder
    • visual C++
    • visual studio
    • visual studio (ver. 6 or 7)
    • visual studio.net, Visual C++ 6.0
    • winamp/xmms, games
    • winamp (with some mp3)

    Please give any other feedback about the environment.

    • Don't be so picky about missing CR/LFs...
    • Eclipse is a very good Java IDE. I think it also has some C/C++ support.
    • Everything was good.
    • FP IDE is unstable
    • Good idea to use Rembo.
    • I didn't like working with Pascal in Linux.
    • I do not like rhide; I was accustomed to using Kate (KDE Advanced Text Editor), so it's better for me to have KDE installed. I hate GNOME.
    • I think contestants should be given root access to the machines... I know this is risky, but it would be really good and would give you the comfort of having full access to the machine.
    • I think that rhide and Free Pascal cause a lot of problems; it is also necessary that all contestants can feel sure that the tools will work well.
    • I thought it was very good - I had no problems with it.
    • I used notepad as a Windows editor.
    • I would prefer a graphic IDE (not text-mode).
    • If you allow Java, you should allow all .NET languages too (C#, VB.NET).
    • Auto-indentation was not turned on by default in vim; I had to enable it manually.
    • Install KDE!
    • Linux worked great.
    • No vim under Windows this year!
    • Please don't use RHIDE. Try Dev-C++ or something else.
    • Really bad, it has a lot of bugs.
    • Rhide and Linux could run better (no crashing). The web submission system could be improved.
    • The rhide debugger crashed really often; I had to use gdb.
    • Rhide was terrible under Windows (crashes, etc.) and pretty bad under Linux (couldn't compile math.h functions).
    • Unstable FP IDE (forgets to recompile, forgets to disable breakpoints, indicates that the program ended while it is still running, incorrect float type handling, ...). An old version of cygwin.dll was provided with FP.
    • What about a graphical editor? (for Windows)
    • good
    • headphones
    • other is OK
    • rhide could not compile <math.h> functions
    • The server showed lag (rather severe).
    • used notepad as Windows editor

  3. Tasks:

    Each task was rated on three scales: Understandability (Easy ... Hard),
    Difficulty (Easy ... Hard), and Enjoyable? (Loved ... Hated). Each row below
    gives the five ratings in that order, followed by [blank].

    Path Maintenance
      Understandability   44.8%(60)   29.1%(39)   11.9%(16)   6.7%(9)   3.7%(5)   3.7%(5)
      Difficulty          35.8%(48)   38.1%(51)   13.4%(18)   10.4%(14)   0.7%(1)   1.5%(2)
      Enjoyable?          22.4%(30)   27.6%(37)   33.6%(45)   5.2%(7)   8.2%(11)   3.0%(4)
    Comparing Code
      Understandability   29.1%(39)   36.6%(49)   12.7%(17)   11.9%(16)   6.7%(9)   3.0%(4)
      Difficulty          1.5%(2)   4.5%(6)   7.5%(10)   43.3%(58)   41.0%(55)   2.2%(3)
      Enjoyable?          6.7%(9)   7.5%(10)   29.1%(39)   25.4%(34)   26.9%(36)   4.5%(6)
    Reverse
      Understandability   32.1%(43)   31.3%(42)   19.4%(26)   11.9%(16)   2.2%(3)   3.0%(4)
      Difficulty          3.0%(4)   11.2%(15)   47.0%(63)   24.6%(33)   12.7%(17)   1.5%(2)
      Enjoyable?          14.2%(19)   24.6%(33)   41.0%(55)   10.4%(14)   6.7%(9)   3.0%(4)
    Guess Which Cow
      Understandability   58.2%(78)   22.4%(30)   14.2%(19)   2.2%(3)   0.0%(0)   3.0%(4)
      Difficulty          23.9%(32)   39.6%(53)   24.6%(33)   6.7%(9)   3.0%(4)   2.2%(3)
      Enjoyable?          30.6%(41)   35.8%(48)   20.9%(28)   4.5%(6)   5.2%(7)   3.0%(4)
    Amazing Robots
      Understandability   24.6%(33)   26.9%(36)   28.4%(38)   11.9%(16)   5.2%(7)   3.0%(4)
      Difficulty          6.0%(8)   6.0%(8)   28.4%(38)   38.8%(52)   19.4%(26)   1.5%(2)
      Enjoyable?          11.2%(15)   14.9%(20)   29.9%(40)   18.7%(25)   20.9%(28)   4.5%(6)
    Seeing the Boundary
      Understandability   40.3%(54)   29.1%(39)   18.7%(25)   3.7%(5)   5.2%(7)   3.0%(4)
      Difficulty          6.7%(9)   12.7%(17)   33.6%(45)   29.9%(40)   14.2%(19)   3.0%(4)
      Enjoyable?          8.2%(11)   13.4%(18)   28.4%(38)   16.4%(22)   29.1%(39)   4.5%(6)

    Which task did you like most?

     26.1%(35)   maintain
      8.2%(11)   code
     14.9%(20)   reverse
     28.4%(38)   guess
     11.9%(16)   robots
      7.5%(10)   boundary
      3.0%(4)    [blank]

    Which task did you like least?

      3.0%(4)    maintain
     31.3%(42)   code
      9.7%(13)   reverse
      2.2%(3)    guess
     20.1%(27)   robots
     27.6%(37)   boundary
      6.0%(8)    [blank]

  4. Grading System:

    Each component was rated on three scales: Usability (Easy ... Hard),
    Functionality (Good ... Bad), and Responsiveness (Fast ... Slow). Each row
    below gives the five ratings in that order, followed by [blank].

    Submission
      Usability        75.4%(101)   14.9%(20)   7.5%(10)   2.2%(3)   0.0%(0)   0.0%(0)
      Functionality    64.9%(87)   20.9%(28)   7.5%(10)   5.2%(7)   0.0%(0)   1.5%(2)
      Responsiveness   49.3%(66)   28.4%(38)   14.2%(19)   6.0%(8)   2.2%(3)   0.0%(0)
    Test Runs
      Usability        55.2%(74)   14.2%(19)   16.4%(22)   3.0%(4)   1.5%(2)   9.7%(13)
      Functionality    40.3%(54)   20.1%(27)   20.1%(27)   3.7%(5)   3.7%(5)   11.9%(16)
      Responsiveness   43.3%(58)   24.6%(33)   14.2%(19)   6.0%(8)   2.2%(3)   9.7%(13)
    Print/Backup
      Usability        61.2%(82)   15.7%(21)   6.7%(9)   0.7%(1)   0.0%(0)   15.7%(21)
      Functionality    53.7%(72)   14.9%(20)   7.5%(10)   3.7%(5)   2.2%(3)   17.9%(24)
      Responsiveness   38.1%(51)   17.2%(23)   21.6%(29)   3.7%(5)   3.7%(5)   15.7%(21)
    Analysis Mode
      Usability        43.3%(58)   14.9%(20)   14.9%(20)   3.0%(4)   7.5%(10)   16.4%(22)
      Functionality    34.3%(46)   10.4%(14)   24.6%(33)   6.7%(9)   6.7%(9)   17.2%(23)
      Responsiveness   42.5%(57)   17.9%(24)   16.4%(22)   0.7%(1)   6.0%(8)   16.4%(22)


    Score reporting was rated on two scales: Presentation (Good ... Bad) and
    Content (Good ... Bad). Each row below gives the five ratings in that order,
    followed by [blank].

    Printed score sheets
      Presentation   50.7%(68)   26.1%(35)   12.7%(17)   0.7%(1)   0.7%(1)   9.0%(12)
      Content        48.5%(65)   23.9%(32)   15.7%(21)   0.7%(1)   1.5%(2)   9.7%(13)
    Online grading results
      Presentation   36.6%(49)   26.9%(36)   14.9%(20)   1.5%(2)   2.2%(3)   17.9%(24)
      Content        37.3%(50)   25.4%(34)   13.4%(18)   2.2%(3)   3.0%(4)   18.7%(25)

      Yes     No     [blank]  
    Did you use analysis mode?  47.8%(64)   40.3%(54)   11.9%(16) 
    Was analysis mode helpful to you?      36.6%(49)   42.5%(57)   20.9%(28) 

    Should analysis mode have additional features? (If so, what?)

    • All failed tests should be shown.
    • Allow any of the available data sets to be run against the submitted program instead of halting execution at the first incorrect test case.
    • Analysis mode should not stop on first incorrect case, it should print all results.
    • Complete list of test cases, program output, and correct answers.
    • Correct output for unsolved input files. Test mode shouldn't stop at the first mistake.
    • Display all results, allow testing of reactive tasks. Display correct answers, allow output to be submitted.
    • Don't stop if one case fails.
    • Don't stop on the first wrong test case, timeout, or memory error. Test all the cases anyway.
    • For problems on optimality, show the best submission.
    • I couldn't see what my program did on harder inputs if it crashed on an earlier one (the grader stopped at my first incorrect output in analysis mode).
    • I didn't find the HELP, and there was no 'arccot' function... also, I would like to have pens instead of pencils, and grid paper. Online question submission/answering.
    • I didn't use it, but I saw it. It stops testing after a single test case fails. It would be more useful if it tested with all test cases.
    • I don't even notice any analysis mode.
    • I don't know what analysis mode is.
    • I don't know what it is.
    • I don't know.
    • I haven't used analysis mode yet.
    • In analysis mode you should grade all cases even if one times out. I would like runtimes as well in online grading results.
    • It would be possible to use a non-web interface, which would be faster and more usable.
    • It should not be USACO-like (testing stops at the first incorrectly solved input); the program should be run against the whole test set and a summary displayed.
    • More run info, i.e., the time/memory used by the program on each test.
    • Run all the test data even if some earlier test data failed.
    • See interactive input answers.
    • Should be able to run all test cases (it stopped after 1st incorrect output).
    • Should be able to see all the test cases.
    • Should be able to see the results on all the test cases, not just the first one wrong.
    • Should not stop testing when it fails a test case.
    • Show info of all input files.
    • Show results on all test cases.
    • Test specific test cases.
    • The ability to choose which test case to run, not just start from #1. Also to be able to see a case, but not run it.
    • The user should be able to choose which tests should be run.
    • To analyze all the inputs, not only stop after first error.
    • What is analysis mode?
    • When a source is submitted, grade on all tests (not only until first failed test).
    • Yes, include full text of tests.
    • You should be able to get the score your submitted program would have gotten on tasks that give credit for optimality (e.g. robots, guess, reverse); you could not get the total score.
    • You should be able to skip some incorrect test data (it stopped on the first bad output, but I wanted to see other data).
    • easier submission for problems like reverse
    • full tests' texts
    • full texts of tests
    • not stop if wrong
    • run program on specific test
    • run separated tests
    • test cases more easily accessible
    • test individual test cases

    Please give any other feedback about the grading system.

    • A lot of feedback on silly mistakes like wrong output format and the like. Nice.
    • Add the possibility to upload multiple files at once (for output-only tasks).
    • Don't show only the first failed test case in analysis mode.
    • The dynamic program should be explained more.
    • Everything was good.
    • Everything went great, except for printing responsiveness.
    • Give grid paper instead of lined.
    • Haven't seen the online grading results yet.
    • I propose returning to weighting the point values of tests by their difficulty.
    • It's okay.
    • It's quite good and fast. I think the results should be out much earlier than 5 o'clock - say 3 or something like that.
    • It is really good. It can be used as it is next year.
    • It should be possible to submit programs that don't pass the sample input.
    • It was easy to use.
    • It worked well.
    • It would be better to have on-server test runs for interactive tasks.
    • Online (instant) updating of competition clock.
    • Passwords should be easy to understand.
    • Please allow submission of multiple files (e.g. as a zip). When I viewed the online grading results, Mozilla just displayed some HTML code.
    • Printing was a little slow but altogether it was great.
    • Problem statements are too long.
    • Should not get stuck on whitespace on output problems.
    • Show full details of all failed test cases.
    • Submission of multiple files (for output-only) could be better.
    • Test runs should also work for reactive tasks.
    • The printed sheet should print out my answer and the correct one if mine is incorrect.
    • Very good
    • Why was running programs on custom data sets disabled in analysis mode?
    • It would be useful to have the ability to do custom test runs for standard input/output questions.
    • You should make test runs available for reactive tasks (either the user writes the responses (path maint.) or a program does (guess)).
    • the ability to submit a zip file with multiple output or source code files
    • generally very good.
    • good
    • normal
    • worked well

  5. Please give any other feedback you have about the IOI 2003 Competition.

    • About tasks: the texts get longer every year, giving too much useless info; for instance, Path Maintenance could be defined in one sentence (output the length of the current MST). This shouldn't be a reading-comprehension contest! Some info is also hard to spot because of the "garbage", and there is too much translating to do.
    • All the other contests I attended that provided printing word-wrapped the source code. I couldn't see the end of my long lines of source.
    • All was good.
    • Better windows editors please. Putting $20 on our swipe cards was genius!
    • Competition environment was really good. Problems were also not that hard.
    • The competition room environment was much, much better. The placings were really good. We could work under less pressure than at past IOIs.
    • Everything has been great and very organized. The food has been excellent and the free-time activities have been imaginative and amusing. The trip to Chicago was nice, as DK was allowed to walk around by themselves. But otherwise (on that trip only) we were treated a bit too much like kids... All in all a great IOI; almost nothing we wished for was missing, and there was no time when we were bored. Great work.
    • Excellent. Problems were interesting in difficulty, but still approachable.
    • Give us good tasks. It's not good that someone who spends a lot of time finding a good algorithm gets fewer points than someone who just makes a fake program. I think it would be good to have only Linux in the competition. I'm working with Windows and I had a lot of problems with it. I'm too lazy to learn Linux, but if it were the only option then I would learn it.
    • Great experience, extremely well organized. But please install KDE!
    • Great fun. I didn't like the playfair.
    • I enjoyed it.
    • I liked all of the tasks :)
    • I liked it!
    • I think that this year's IOI was one of the competitions that didn't fulfill the expectations it raised when the IC chose the USA for this event; the organization was bad, the food was bad, and my score was bad too, so...
    • I thought it was great.
    • I would like it if you could save the .emacs file when reinstalling the system; I was quite tired of reconfiguring it every day. BTW: Thanks for the arrangements. I have really enjoyed it even though the competition went badly for me.
    • I would like to use another environment like Borland or Visual C.
    • It'd be better if the translations were done late at night, for more security. Also, scoring should be more balanced; for example, a silly brute force shouldn't pass about 50% of the test cases.
    • It all worked very well. The problems were nice and varied, and the environment was superb. Everything came together smoothly.
    • It was OK.
    • It was a good competition after all. The tasks could (and should) be a little harder. There are so many contestants who swear that changing 3 bytes of their code would make their program score 100 or 90, while they have 10 or 20. So I think the people at the top of the rankings will not necessarily be the best programmers, but rather the ones with the fewest implementation errors. Many people solved the tasks but did not score well because of bugs, while middle-class programmers easily overtook them. That's why I think the problems should be just a bit harder.
    • It was pretty cool, although I didn't score that well. :O) Thank you!
    • Keep it going!
    • More guides, and a sweater.
    • More security precautions should be taken to prevent leaking of the questions.
    • No more computational geometry.
    • Not more than one reactive task.
    • Online grading results don't exist yet.
    • Problem scoring not balanced at all. Small difference between easy and super-hard solutions.
    • Really nice, well organized, enough program and fun.
    • Rhide debugger mode in Win XP is very, very slow. The others are good.
    • Security is very lax. Problems were in countries' letter boxes before the competition, and contestants had free access to them. Also, team leaders were with team members during the quarantine periods. Although contestants were not supposed to be in the computer lab during quarantine, there were many contestants there throughout the night.
    • Thank you for all the efforts you put into the competition to make it for us as enjoyable as possible.
    • The den is great (free pool, foosball, and bowling!!!). Also liked the cinema (Animatrix + free popcorn and drink). Liked the availability of doing different sports. Would be happy to see a match of baseball or football (since it is a typical USA sport).
    • The problems should be easier to code and debug and harder to solve.
    • The tasks were OK except for the language used. The language should be simpler, so that contestants from most countries understand the English version on the first reading. The Linux environment should be more stable. Rhide should run at its best both under the console and X. Submission for output-only tasks should be made easier. Test runs of some sort should be available for reactive tasks. Tasks CODE and BOUNDARY should have had more examples. Submission should accept a file even if the example is not solved. The server was down for about 4 minutes; this could be addressed by compiling without "-static" and enforcing a tighter time bound for solving the example.
    • Too much emphasis on cows. :O) There should have been more casino nights.
    • We got along all right without a guide but this would be difficult in a non-English speaking country. Having a guide for each country is definitely a good idea.
    • You should provide custom test runs for interactive problems such as "Maintain" where the user could easily supply a data file containing all useful information. Also, give users the option of submitting a program even if it doesn't pass the sample test case.
    • good accommodation and food
    • good entertainment and good food
    • great.