NFS@Home

Started by Aillas, 05 September 2009 at 09:02


cedricdd

I'm gonna be honest with you, the challenge wasn't even announced here; the team is automatically added by BOINCStat to all the challenges. December is really one of the worst times of the year for us to do challenges, we are more or less all crunching to win the FB http://formula-boinc.org/, and if some cores were added to NFS@home it was to win some places on NFS@home for the FB.

Kill all my demons, and my angels might die too.

bernardP

Well now, he certainly has a lot to say. Must have been holding it in for a while...
:heink: :marcp:

Carlos Pinho

Quote from: cedricdd on 14 December 2012 at 18:26
I'm gonna be honest with you, the challenge wasn't even announced here; the team is automatically added by BOINCStat to all the challenges. December is really one of the worst times of the year for us to do challenges, we are more or less all crunching to win the FB http://formula-boinc.org/, and if some cores were added to NFS@home it was to win some places on NFS@home for the FB.

I'm a member of team SETI.USA and we don't care about http://formula-boinc.org/. I think our position is a consequence of the random crunch we do.

Carlos

cedricdd

Quote from: Carlos Pinho on 14 December 2012 at 18:53
I'm a member of team SETI.USA and we don't care about http://formula-boinc.org/. I think our position is a consequence of the random crunch we do.

Carlos

Well, we care.
Kill all my demons, and my angels might die too.

Carlos Pinho

Quote from: cedricdd on 14 December 2012 at 18:55
Well, we care.

Good luck for the final days. Hope you manage to stay in first place.
Back to the topic: if you have any doubts about NFS@Home, please ask and I'll try to answer.

JeromeC

Anyway thanks for all the information, it'll take some time to translate and digest everything :)
What's the point of taking life seriously, since we won't get out of it alive anyway? (Alphonse Allais)


Carlos Pinho

Quote from: JeromeC on 15 December 2012 at 18:11
Anyway thanks for all the information, it'll take some time to translate and digest everything :)

If you have any doubts about the translation, I can help a bit.
Keep the cores running!

mcroger

Thanks Carlos !

One question though: can't large factorization calculations be parallelized for GPGPU?

This is what is currently done with GPGPU key-cracking machines, so why doesn't NFS work that way?

Pretty sure this question has already been raised, but I'd be glad to hear your opinion! :)


Carlos Pinho

Polynomial stage is already done via GPU for GNFS integers.
For sieving, the quadratic sieve, you can find a lot of papers on attempts to run it on a GPU. You can read this master's thesis: http://www.cs.bath.ac.uk/~mdv/courses/CM30082/projects.bho/2009-10/Archer-C-dissertation-2009-10.pdf
For lattice sieving, the one used at NFS@Home, I think there are memory issues I can't really judge because I don't know CUDA or OpenCL coding. A thread was started a few years ago here: http://www.mersenneforum.org/showthread.php?t=12566.

I personally think the code is so hard that no one has managed, or has had the time, to port it to a GPU application. I will dig more into this issue and post progress here.

Carlos

Carlos Pinho

Last 16e Lattice Sieve application wu received is at ~q=782M.
Last 16e Lattice Sieve V5 application wu received is at ~q=959M (second chunk restarted at 950M going to 1,000M: meaning going backwards from 1,000M until it meets 16e in the middle).
Last 16e Lattice Sieve V5 application already sent all wu's from 1,000M to 1,400M (first chunk).

Overall Q range sieve of 2,1037- goes from 20M to 1,400M.

Q range situation is this:
20M-782M (sent through 16e application, remaining wu's close to be done)
782M-950M (unsent)
950M-959M (sent, remaining wu's close to be done)
959M-1000M (unsent, will be sent through 16e V5 application)
1000M-1400M (sent through 16e V5 application, remaining wu's close to be done)
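
For readers less used to these status lines, here is a minimal sketch (plain Python, not project code) that simply totals the Q-range bookkeeping listed above; the ranges and statuses are copied from the list, and M means millions of special-q values:

```python
# Q-range bookkeeping for the 2,1037- sieve, copied from the list above.
# Each entry is (start, end, status), in millions of special-q values.
ranges = [
    (20, 782, "sent (16e)"),
    (782, 950, "unsent"),
    (950, 959, "sent (16e V5)"),
    (959, 1000, "unsent"),
    (1000, 1400, "sent (16e V5)"),
]

total = sum(end - start for start, end, _ in ranges)
unsent = sum(end - start for start, end, status in ranges if status == "unsent")

print(f"total span  : {total}M of q values")               # 1380M
print(f"still unsent: {unsent}M ({unsent / total:.0%})")   # 209M, about 15%
```

So roughly 209M of q values, about 15% of the full 20M-1,400M span, were still unsent at this point.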

Second part of the challenge will start tomorrow, and as for the 2,1037- sieve, we still have ~209k wu's left to be crunched, ~113k already created, ~46k in progress. If people can leave their machines on for the second part of the challenge, that would be great, because I think at the current rate all 2,1037- wu's will be sent out to people to crunch.

Carlos Pinho

Second part of the challenge is underway at http://boincstats.com/en/stats/challenge/team/chat/285.

2,1037- figures:

Last 16e Lattice Sieve application wu received is at ~q=801M.
Last 16e Lattice Sieve V5 application wu received is at ~q=970M (second chunk restarted at 950M going to 1,000M: meaning going backwards from 1,000M until it meets 16e in the middle).
Last 16e Lattice Sieve V5 application already sent all wu's from 1,000M to 1,400M (first chunk).

Overall Q range sieve of 2,1037- goes from 20M to 1,400M.

Q range situation is this:
20M-801M (sent through 16e application, remaining wu's close to be done)
801M-950M (unsent)
950M-970M (sent, remaining wu's close to be done)
970M-1000M (unsent, will be sent through 16e V5 application)
1000M-1400M (sent through 16e V5 application, remaining wu's close to be done)

In terms of work still to be done we are talking about ~179k wu's left to be crunched, ~82k already created, ~46k in progress.

Considering my machine (a Core i5 750 with cache set to 200, doing 180 wu's daily), the leading edge of the undone wu's is about 11M away from the finished ones, without taking into account the duplicate wu's that are probably mixed in there.
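
To put the remaining volume into perspective, here is a back-of-the-envelope calculation using only the figures quoted in this post (an illustration, not project data):

```python
# Rough scale check using the figures above (illustrative only).
remaining_wus = 179_000      # wu's left to be crunched for 2,1037-
per_machine_daily = 180      # throughput of the Core i5 750 mentioned above

machine_days = remaining_wus / per_machine_daily
print(f"about {machine_days:.0f} machine-days of sieving left")   # ~994
```

A single machine at that rate would need close to three years, which is why challenge participation makes such a difference.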

Carlos Pinho

Thank you for the cores, keep them busy!

Carlos Pinho

Carlos Pinho

Last 16e Lattice Sieve application wu received is at ~q=837M.
Last 16e Lattice Sieve V5 application wu received is at ~q=991M (second chunk restarted at 950M going to 1,000M: meaning going backwards from 1,000M until it meets 16e in the middle).
Last 16e Lattice Sieve V5 application already sent all wu's from 1,000M to 1,400M (first chunk).

Overall Q range sieve of 2,1037- goes from 20M to 1,400M.

Q range situation is this:
20M-837M (sent through 16e application, remaining wu's close to be done)
837M-950M (unsent)
950M-991M (sent, remaining wu's close to be done)
991M-1000M (unsent, will be sent through 16e V5 application)
1000M-1400M (sent through 16e V5 application, remaining wu's close to be done)

Carlos Pinho

Last 16e Lattice Sieve application wu received is at ~q=849M.
Last 16e Lattice Sieve V5 application wu received is at ~q=932M (third chunk restarted at 930M going to 950M: meaning going backwards from 1,000M until it meets 16e in the middle).
Last 16e Lattice Sieve V5 application already sent all wu's from 950M to 1,400M.

Overall Q range sieve of 2,1037- goes from 20M to 1,400M.

Q range situation is this:
20M-849M (sent through 16e application, remaining wu's close to be done)
849M-930M (unsent)
930M-932M (sent, remaining wu's close to be done)
932M-950M (unsent)
950M-1400M (sent through 16e V5 application, remaining wu's close to be done)

Lots of aborted wu's are being done in the range below 800M.

If you are only running the lasieve5f application, please consider also checking the lasievef application. Linux users can run both; Windows users only lasievef.
NFS@Home is reaching the point where the two q regions sieved separately by 16e and 16e V5 are going to collide, so it is better for Linux users to check both applications. I already did, partly because it lets a lot of the aborted wu's below 800M get done. So I am going to run a mix of the two.

Carlos Pinho

Last 16e Lattice Sieve application wu received is at ~q=921M.
16e Lattice Sieve V5 application already sent all wu's from 930M to 1,400M.

Overall Q range sieve of 2,1037- goes from 20M to 1,400M.

Q range situation is this:
20M-921M (sent through 16e application, remaining wu's close to be done)
921M-930M (unsent through 16e application)
930M-1400M (sent through 16e V5 application, remaining wu's close to be done)

16e V5 application started another number (2,1049+) from q=1000M because all wu's for 2,1037- have been distributed; only the leftover wu's are missing. Also, I told my CPU to do only the 16e Lattice Sieve wu's instead of both the 16e and 16e V5 applications.

In conclusion, 2,1037- will be completely sieved by the end of the year.

Carlos Pinho

JeromeC

Can't say we're short of information about NFS! :)

Thanks Carlos :jap:
What's the point of taking life seriously, since we won't get out of it alive anyway? (Alphonse Allais)


Dilandau

The AF finishes 2nd again in December's challenge #2 ;)
GPU: 1 * nVidia GTX 1070 8GB
CPU: 3 * Intel Xeon E3-1225 v2 + 1 * Intel Xeon E3-1230 v5

Carlos Pinho

Status of 2,1037-:

20948 wu's left to be crunched by applications 16e and 16e V5 to finally close the 2,1037- sieve.

Status of 2,1049+:


Last 16e Lattice Sieve application wu received is at ~q=23M (goal to 1000M)
Last 16e Lattice Sieve V5 application wu received is at ~q=1046M (started at 1000M, goal unknown, then backwards from 1000M until it meets 16e in the middle)

Carlos Pinho

Soon I'll post the status of 2,1037-. The post-processing phase has started.

About 2,1049+ NFS@Home sieve:

Last 16e Lattice Sieve application wu received is at ~q=129M (goal to 1000M)
Last 16e Lattice Sieve V5 application wu received is at ~q=1095M (goal unknown, then backwards from 1000M until it meets 16e in the middle)

From now on I'll update weekly.

Carlos

Carlos Pinho

For 2,1037-:

The LA (linear algebra) has started.

matrix is 67008555 x 67008732 (29495.3 MB) with weight 8405533452 (125.44/col)

linear algebra at 0.1%, ETA 447h56m as of 10/01/2013, on 576 cores of Trestles at the San Diego Supercomputer Center.
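
For anyone who wants to check those figures, the 125.44/col value is just the total weight divided by the number of columns, and the ETA converts to a little under three weeks of wall time; a quick arithmetic check, assuming nothing beyond the numbers quoted above:

```python
# Sanity check of the matrix figures quoted above (arithmetic only).
rows, cols = 67_008_555, 67_008_732     # matrix dimensions quoted above
weight = 8_405_533_452                  # total nonzero entries ("weight")

print(f"matrix: {rows} x {cols}")
print(f"average weight per column: {weight / cols:.2f}")   # ~125.44, as reported

eta_hours = 447 + 56 / 60               # the "447h56m" ETA
print(f"remaining LA time: {eta_hours / 24:.1f} days")      # ~18.7 days on 576 cores
```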

For 2,1049+:

Last 16e Lattice Sieve application wu received is at ~q=323M (goal to 1000M)
Last 16e Lattice Sieve V5 application wu received is at ~q=1233M (goal unknown, then backwards from 1000M until it meets 16e in the middle)

If you guys could move your clients for a while from 16e Lattice Sieve V5 to 16e Lattice Sieve, that would be great. We need to crunch more 16e wu's instead of the V5 ones. (Message to [AF>Libristes] Dilandau... lol... you're crunching 16e V5 very fast.)

Carlos Pinho

Dilandau

Ok ^^ Now each server can crunch 16e + 16e V5 ;)
GPU: 1 * nVidia GTX 1070 8GB
CPU: 3 * Intel Xeon E3-1225 v2 + 1 * Intel Xeon E3-1230 v5

Carlos Pinho

Quote from: Dilandau on 25 January 2013 at 12:39
Ok ^^ Now each server can crunch 16e + 16e V5 ;)

Can you move to crunch only 16e? 16e must go through q=323M to q=1000M. Lots of wu's to be crunched there: 677,000 wu's.
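
A quick check of that figure (an inference from the numbers in this thread, not an official project parameter):

```python
# The remaining 16e range and the wu count quoted above.
q_span_m = 1000 - 323          # 677M special-q values still to sieve with 16e
wus = 677_000                  # wu count quoted above

print(q_span_m * 1_000_000 // wus)   # -> 1000, i.e. ~1,000 q values per wu
```

So each wu appears to cover roughly 1,000 special-q values, which also matches the earlier status posts where ~209M of unsent q corresponded to ~209k wu's left.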

Thank you.

Carlos Pinho

[AF>Libristes>Jip] Elgrande71

Well done Dilandau, keep it up.  :kookoo: :jap:
Debian - the reference GNU/Linux distribution
Parabola GNU/Linux - a libre GNU/Linux distribution
MX Linux
Emmabuntüs

Jabber elgrande71@chapril.org

Carlos Pinho

Soon SETI.USA will be rolling to help.

For 2,1049+:

Last 16e Lattice Sieve application wu received is at ~q=356M (goal to 1000M)
Last 16e Lattice Sieve V5 application wu received is at ~q=1243M (goal unknown, then backwards from 1000M until it meets 16e in the middle)

Carlos Pinho

2,1037- is done. Log file in the attachment.