[help wanted] deps: update V8 to 13.6 #57753
@joyeecheung (or maybe someone else from @nodejs/cpp-reviewers) Can you help finalize 05eab64?
Force-pushed from e1ed931 to 7774826.
I will run a build on illumos/SmartOS. If you fixed the […]
I used some brute force to clear the high 16 bits in the userland address space so V8 can do its thing. It stopped failing where it used to and started failing somewhere else, but still in […].

In non-debug/stock: […]

In debug (using […]): […]

I noticed on #57114 that @bnoordhuis had a suggestion too that might ease the requirement for me to do something drastic (at least in the short term?). If V8 documents "pointer requirements" somewhere, that'd help me a lot. I'm visiting the v8-dev Google group now, so they may give me the clues I need.

EDIT/UPDATE: I didn't look carefully before at @targos's update; I'm encountering the same issue.
#57753 (comment) basically requires a revert of #30909.
/cc @addaleax
Now it's failing embedtest: […]
@nodejs/platform-ppc @nodejs/platform-s390 V8 gn build fails: https://ci.nodejs.org/job/node-test-commit-v8-linux/6492/nodes=rhel8-ppc64le,v8test=v8test/console
@targos try updating […]
@richardlau are you able to do that?
Yes, I'll look at it after the TSC meeting.
illumos (on SmartOS) update: I can build this branch on the hacked-userlimit illumos system. I've yet to try it on the stock-userlimit ([…]) one.
Updated in nodejs/build#4066. https://ci.nodejs.org/job/node-test-commit-v8-linux/6493/ ✅ is with the updated […]
There's another issue with macOS: https://ci.nodejs.org/job/node-test-commit-osx/64642/nodes=osx13-x64/testReport/junit/(root)/parallel/test_worker_memory/
a213815 compiled successfully. I'll try to open a V8 CL.

Edit: https://chromium-review.googlesource.com/c/v8/v8/+/6469692
We can see on https://ci.nodejs.org/job/node-test-binary-armv7l/16772/ that the brotli issue is still here.
I see that the following test became flaky: tools/test.py test/parallel/test-fs-stat-bigint.js --repeat=100
I'm curious about how to proceed with the VA48/full-4-level fixes for illumos. I could probably shrink it down to illumos-only, but that might not help AIX with BIG VA spaces, never mind any other 64-bit architectures that might grow into the top 16 bits of VA space, which would kneecap V8. (There STILL may be some V8 abuses of the top 16 bits I've missed, but the ones I have fixes for are immediately helpful to your overarching import-V8 changes here.) Also, if/when illumos brings in something akin to Oracle Solaris's mapfile (i.e. […])
@danmcd If https://chromium-review.googlesource.com/c/v8/v8/+/6320599 is accepted by the V8 team, the best would be to submit a follow-up patch to them for illumos.
In the meantime, would it be possible to apply the workaround you found to our CI hosts? How can we do it?
For this armv7l error: https://ci.nodejs.org/job/node-test-commit-arm/58141/nodes=ubuntu2204-armv7l/

Reproduction on the CI host is: […]
This can be "fixed" by reverting https://chromium-review.googlesource.com/c/v8/v8/+/6297948. Should we do that and come back to it for the 13.7 update? Note that the flag no longer exists in V8 13.7, so we won't be able to disable it anymore.
/cc @omerktz FYI.
I can reproduce locally, but it's rare (2% failures). By looking at the code in https://github.com/nodejs/node/blob/main/src/node_platform.cc, it seems possible to have […]
@joyeecheung is it related to this? […]
I'll see if it's as simple as scribbling a value into the kernel tunable. I'll ping here when I've done it so a build can spin. You'll have to use the MNX-hosted agents, however. Unfortunately, this isn't something you can just tell Node users to Just Apply. I don't know how much pressure we can put on V8. There's also (with illumos) the fact that if your illumos is past April 2022, you don't need the […]
I'd have to revisit the whole of how the current MNX agents work. That work was done by Brie Bennett, who is no longer associated with Triton, SmartOS, or illumos.
I've followed up on the IBM-contributed Gerrit CR. We'll see if they pay attention or not. I'm happy to supply something for Node directly that can be backed out once V8 gets their act together.
FYI, an illumos-only commit (which, if not compiled for illumos, doesn't affect any non-illumos code, so no precursor IBM one): […]
(I cannot reproduce the flake locally before the patches, so cannot verify locally whether it actually makes it go away.)

Started a stress test with https://github.com/joyeecheung/node/tree/isolate-free : https://ci.nodejs.org/job/node-stress-single-test/562/ ✅
Original commit message:

[api] add Isolate::Free() and IsolateDisposeFlags::kDontFree

This allows embedders to mirror the isolate disposal routine with an initialization routine that uses Isolate::Allocate().

```
v8::Isolate* isolate = v8::Isolate::Allocate();
// Use the isolate address as a key.
v8::Isolate::Initialize(isolate, params);
isolate->Dispose(v8::Isolate::IsolateDisposeFlags::kDontFree);
// Remove the entry keyed by isolate address.
v8::Isolate::Free(isolate);
```

Previously, the only way to dispose of an isolate, v8::Isolate::Dispose(), bundled the de-initialization and the freeing of the address together. This is inadequate for embedders like Node.js that use the isolate address as a key to manage the task runner associated with it: another thread may get an isolate allocated at the aligned address before this thread finishes cleanup for the isolate previously allocated at the same address, and locking around the entire disposal can be too risky since it may post GC tasks that in turn require using the isolate address to locate the task runner. It's a lot simpler to handle the issue if the disposal process of the isolate mirrors its initialization and is split into two routines.

Refs: nodejs#57753 (comment)
Refs: nodejs#30850
Refs: v8/v8@f4107cf
Original commit message:

[api] add Isolate::Deinitialize() and Isolate::Free()

This allows embedders to mirror the isolate disposal routine with an initialization routine that uses Isolate::Allocate().

```
v8::Isolate* isolate = v8::Isolate::Allocate();
// Use the isolate address as a key.
v8::Isolate::Initialize(isolate, params);
isolate->Deinitialize();
// Remove the entry keyed by isolate address.
v8::Isolate::Free(isolate);
```

Previously, the only way to dispose of an isolate, v8::Isolate::Dispose(), bundled the de-initialization and the freeing of the address together. This is inadequate for embedders like Node.js that use the isolate address as a key to manage the task runner associated with it: another thread may get an isolate allocated at the aligned address before this thread finishes cleanup for the isolate previously allocated at the same address, and locking around the entire disposal can be too risky since it may post GC tasks that in turn require using the isolate address to locate the task runner. It's a lot simpler to handle the issue if the disposal process of the isolate mirrors its initialization and is split into two routines.

Refs: nodejs#57753 (comment)
Refs: nodejs#30850
Refs: v8/v8@f4107cf
Upstreaming the V8 API needed to make the flake go away: https://chromium-review.googlesource.com/c/v8/v8/+/6480071
A MUCH smoother V8 patch for illumos-only fixes (i.e. use the full VA48 address space) is here: […]
I hope it can replace #57114
Notable changes:
@nodejs/v8-update @nodejs/tsc