Max number of wu-s per host ?

Posted: Tue Feb 08, 2011 9:08 pm
by Snakeman
Hi,

Is there a max number of WUs per host given out at a time? The problem is that at any given time my host gets only 16 WUs, which leaves 8 cores out of work in my case, and it also creates a lot of overhead as the host is constantly uploading/downloading WUs, even though I have specified that it should download enough for 5 days. I like running this project because it has short WU completion times and I can compete with others, since this project does not (yet) support either CUDA or Stream, but as I only get enough WUs for 2/3 of my host, I must run other projects too.

P.S. If it takes a lot of effort to change the number of WUs given out, don't bother, as this host will only be computing for a week or so. Then it will go off and do its designated job :)
Edit: Running the 64-bit Linux client; if this is a configuration mishap, I would gladly reconfigure my client. Although in SETI@home it gets WUs for all 24 cores.

Re: Max number of wu-s per host ?

Posted: Tue Feb 08, 2011 9:16 pm
by chill
We had set the max WUs in progress to 16 because the largest computers we had seen had only 8 cores. Now that you have 24 cores, I have increased this number to 48. The work units we make depend on the current state of our simulations, so we cannot generate many days of work at once. We only want to give hosts enough work units to keep them 100% busy, which should be 2x the number of cores.

Let me know if this change has properly gone into effect and if everything is running well.
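For reference, BOINC's server software exposes this kind of per-core cap as a project config.xml option; a sketch of what such a setting could look like (whether eOn uses this exact mechanism is an assumption, not something stated in the thread):

```xml
<!-- In the project's config.xml: cap in-progress jobs at 2 per CPU,
     so a 24-core host can hold 2 * 24 = 48 work units at once. -->
<max_wus_in_progress>2</max_wus_in_progress>
```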

Re: Max number of wu-s per host ?

Posted: Wed Feb 09, 2011 12:06 am
by Snakeman
Hi, on your top 100 computer list there are actually many hosts with more than 8 cores, tbh :) For example:
http://eon.ices.utexas.edu/eon2/show_ho ... hostid=170 - 16 cores (and there are many more with 16), and this comp is in first place in the stats :)
http://eon.ices.utexas.edu/eon2/show_ho ... ostid=6544 - 32 cores :O, plus 3 more from the University of Houston with a similar configuration.
http://eon.ices.utexas.edu/eon2/show_ho ... stid=10423 - 24 cores, almost like my setup.

Will check my rig in 7 hours, then I'm back at work :) Will let you know if I can now drop the other projects :P

Re: Max number of wu-s per host ?

Posted: Wed Feb 09, 2011 12:15 am
by Ananas
Assuming it's a twin hex-core Xeon with HT, it's worth trying to reduce "On multiprocessors, use at most xx processors" by one, so that one thread is dedicated to the operating system. I get better overall throughput with an L5520 if I don't allow 16 concurrent tasks but set it to run only 15.

I'm not sure whether this applies to Linux as well (I run XP x64), and it might make a difference which projects you run on that monster.

Re: Max number of wu-s per host ?

Posted: Wed Feb 09, 2011 12:03 pm
by Snakeman
You assume correctly - this is a twin hex-core Xeon with HT. As it is currently running only a clean install of 64-bit Ubuntu 10.04 LTS Server and BOINC, I don't think reserving one core for the OS would give much gain.
And after yesterday's change it now gets enough WUs for all its cores.

If I were a developer of the eOn software, I would put the limit in the client so it would always fetch 2x the CPU count (or even 3x?) in WUs, since that would keep the software running even through a network outage of more than 30 minutes. Right now there are moments when all tasks are in the "Ready to report" state, because the host crunches through these small WUs in about 5 minutes, and if there is some kind of comm problem it runs out of work.
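The arithmetic behind that worry can be sketched quickly (hypothetical numbers taken from the thread: 24 cores, ~5-minute WUs, and the goal of surviving a 30-minute outage without idling):

```python
# Rough buffer sizing for short work units. All numbers here are
# illustrative values from the forum discussion, not project settings.
cores = 24
wu_minutes = 5       # observed time to crunch one WU
outage_minutes = 30  # outage we would like to ride out

# WUs consumed per core during the outage, plus one extra batch
# so every core still has something running when the outage starts.
buffer_wus = cores * (outage_minutes // wu_minutes + 1)
print(buffer_wus)  # 24 * 7 = 168, far above the 2x-cores (48) cap
```

So with 5-minute WUs, a 2x-cores buffer only covers about 10 minutes of downtime, which is why the host keeps running dry.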

Re: Max number of wu-s per host ?

Posted: Fri Feb 11, 2011 6:55 pm
by Finnen78
I have noticed from earlier experiments that if you allocate, say, all but one core to BOINCing, your computer speeds up dramatically in handling. Unfortunately I don't have an uber setup of more than 6 cores on a single-CPU platform, but even so I saw a significant increase in speed and handling. Most BOINC researchers/developers release WUs of around 1 hour of BOINC time, and I've done a few monsters of 100+ hours, so these cute 5-15 minute WUs are rather fun :-) Still, I think not so much if you're not connected to the net 25/7 like this comp of mine is.

I used a small program called Core Affinity and put BOINC on cores 1-4, and since I'm a gamer :P I put system resources and games on cores 5-6. Frankly speaking, I haven't seen many programs yet that are multicore-dependent, and only a very few games, even among the 64-bit ones.

Before I did this with the cores, every project took around 10-15 minutes longer, since all resources went to BOINC... not system handling.
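On Linux, the same core-reservation idea doesn't need a separate tool; here is a sketch using only Python's standard library (the Linux-only `os.sched_setaffinity`, applied to the current process purely as a demonstration, not to BOINC itself):

```python
import os

# Snapshot the CPU affinity of this process (pid 0 = the caller).
original = os.sched_getaffinity(0)

# Illustrative split: leave the highest-numbered CPU free for the OS
# and pin this process to all the others.
all_cpus = sorted(original)
work_cpus = set(all_cpus[:-1]) or {all_cpus[0]}  # keep at least one CPU

os.sched_setaffinity(0, work_cpus)
print(sorted(os.sched_getaffinity(0)))

# Restore the original mask so the demo has no lasting effect.
os.sched_setaffinity(0, original)
```

The shell tool `taskset` does the same job from the command line if you would rather pin the BOINC client itself.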

Snakeman, perhaps you should release one of your cores from BOINCing to system handling for better performance.

*P.S. Can I steal your comp? ;-) lol

Re: Max number of wu-s per host ?

Posted: Sat Feb 12, 2011 12:09 am
by Snakeman
Finnen78 wrote: Before I did this with the cores, every project took around 10-15 minutes longer, since all resources went to BOINC... not system handling.
Snakeman, perhaps you should release one of your cores from BOINCing to system handling for better performance.
If I were using an M$ OS like you are, I would. But as this system is currently running just a very basic Linux server, with really only the SSH server package and boinc-client installed, I still don't think there would be any gain in reserving one core.
*ps. can i steal ur comp? ;-) lol
Sure :) For about 9K EUR :P Tbh it's nothing to brag about - just an ordinary Dell R510 server with 2x Intel Xeon X5670 processors. Each processor has 6 physical cores and each core has HT - 2x6x2 = 24. In many other projects, where ATI Stream or Nvidia CUDA is implemented, just one graphics card beats all my 24 cores. So the future of distributed computing lies in GPU power. I joined this project because of only 2 factors:
1) No GPU support - I can at least compete a little :)
2) Short WUs - I have had to reinstall that comp for tweaking multiple times (and still do) - with such short WUs I can just tell it not to fetch new WUs and wait 5 minutes. With some other projects I would just waste 2 days of work.

For the oldtimers - don't worry, I won't be using this comp for this project for long, so I won't beat all of you :) Right now it just felt like a waste of a lot of power to let that thing sit in a box for another few weeks.