Conversation
Ok: this PR is doing score recalculation in 2m 49s vs. 27m 8s on …
The numbers look really promising! I've given the crate only a cursory read-through; it seems like, if the goal is to use this for Servo's WPT dashboard, the pending items are:
Some up-to-date numbers: this PR is now calculating in 4m 18s vs. 35m 47s for the existing JS version.
This is now implemented. Unfortunately it is a little slower, but it's not too bad.
This is implemented as an array of areas to support. Any exact prefix of segments (separated by a …
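The segment-prefix matching described above could be sketched roughly as follows. This is a hedged, illustrative version, not the PR's actual code: the `/` separator, the function name, and the area strings are all assumptions.

```rust
// Sketch: a test is in scope if any configured "area" is an exact
// per-segment prefix of the test path. Separator and names are assumed.
fn matches_area(test_path: &str, areas: &[&str]) -> bool {
    let segments: Vec<&str> = test_path.split('/').filter(|s| !s.is_empty()).collect();
    areas.iter().any(|area| {
        let area_segments: Vec<&str> =
            area.split('/').filter(|s| !s.is_empty()).collect();
        segments.len() >= area_segments.len()
            && segments[..area_segments.len()] == area_segments[..]
    })
}

fn main() {
    let areas = ["css/css-flexbox", "css/css-grid"];
    assert!(matches_area("/css/css-flexbox/align-items.html", &areas));
    // Matching is per whole segment, not per character, so a
    // sibling directory sharing a string prefix must not match.
    assert!(!matches_area("/css/css-flexbox-extra/foo.html", &areas));
}
```

The per-segment comparison (rather than plain `str::starts_with`) is the important design point: it prevents `css/css-flexbox` from accidentally matching `css/css-flexbox-extra/…`.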
I have it generating identical output to #67 modulo float precision (differences in the last 2 decimal places of numbers with ~8 decimal places, so I expect this is fine?). This PR is generating a …
Signed-off-by: Nico Burns <nico@nicoburns.com>
This is getting more important over time. The timings are now 5m (this PR) vs. 50m (main). Will try to finish this up soon.
Actual score calculation is implemented in the wptreport crate:
This PR adds a CI job to run the CLI from the crate on GitHub CI here, with the purpose of seeing how fast the code runs on GitHub Actions.
Todo:
scores-last-run.json (or remove it): the current implementation uses a Rust struct to store "run_info", which leads to the key sort order changing; we should probably use an IndexMap?
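The idea behind the IndexMap suggestion is that keys serialize in insertion order, so repeated runs produce byte-identical JSON. A minimal dependency-free sketch of that property (a `Vec` of pairs stands in for `indexmap::IndexMap` here; the helper and field names are illustrative, not from the PR):

```rust
// Sketch: emit JSON object keys in insertion order, instead of in a
// struct's field-declaration order, so output is deterministic.
fn to_json_object(entries: &[(String, f64)]) -> String {
    let fields: Vec<String> = entries
        .iter()
        .map(|(key, value)| format!("\"{}\":{}", key, value))
        .collect();
    format!("{{{}}}", fields.join(","))
}

fn main() {
    // Keys come out exactly as inserted, run after run.
    let run_info = vec![
        ("browser_name".to_string(), 1.0),
        ("os".to_string(), 2.0),
    ];
    assert_eq!(to_json_object(&run_info), "{\"browser_name\":1,\"os\":2}");
}
```

In the real crate this would presumably mean swapping the `run_info` struct (or `HashMap`) for `indexmap::IndexMap` and enabling the corresponding serde support, so serialization preserves whatever key order the input runs had.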