authorJeremy Evans <[email protected]>2023-04-02 11:06:13 -0700
committerJeremy Evans <[email protected]>2023-04-25 08:06:16 -0700
commit583e9d24d419023bc1123190768297a468113613 (patch)
treec585901a2b7fef9726398d6175a1c5e00eb4eee7 /spec/ruby/core/proc/shared
parent9b4bf02aa89fa9a9a568b7be045ab1df8053f0e6 (diff)
Optimize symproc calls
Similar to the bmethod/send optimization, this avoids using CALLER_ARG_SPLAT if not necessary. As long as the receiver argument can be shifted off, the other arguments are passed through as-is. This optimizes the following types of calls:

* symproc.(recv) ~5%
* symproc.(recv, *args) ~65% for args.length == 200
* symproc.(recv, *args, **kw) ~45% for args.length == 200
* symproc.(recv, **kw) ~30%
* symproc.(recv, kw: 1) ~100%

Note that empty argument splats do get slower with this approach, by about 2-3%. This is probably because iseq argument setup is slower for empty argument splats than CALLER_SETUP_ARG is. Other than empty argument splats, argument splats are faster, with the speedup depending on the number of arguments.

The following types of calls are not optimized:

* symproc.(*args)
* symproc.(*args, **kw)

This is because you cannot shift the receiver argument off without first splatting the args.
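For context, a sketch of the call shapes the commit message distinguishes. A "symproc" here is a Proc built with Symbol#to_proc, whose first argument is the receiver of the method call; the distinction is whether that receiver is a separate leading argument (optimizable) or buried inside a splat (not optimizable). The specific procs below are illustrative, not taken from the commit:

```ruby
add = :+.to_proc          # symproc: calls #+ on its first argument

# Optimized shapes: the receiver is a distinct leading argument,
# so it can be shifted off and the rest passed through as-is.
add.(1, 2)                # symproc.(recv, ...)  => 3

rest = [2]
add.(1, *rest)            # symproc.(recv, *args)  => 3

# Not optimized: the receiver is hidden inside the splat, so the
# array must be expanded before the receiver can be shifted off.
all = [1, 2]
add.(*all)                # symproc.(*args)  => 3
```

All three forms return the same result; the optimization only changes how the arguments are set up internally.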
Notes
Notes: Merged: https://github.com/ruby/ruby/pull/7522
Diffstat (limited to 'spec/ruby/core/proc/shared')
0 files changed, 0 insertions, 0 deletions