Mobrief

Fix flaky DTensor sharding prop cache logging test


2-Minute Brief
  • According to PyTorch Releases: test_sharding_prop_cache_logging was flaky because the C++ DTensor dispatch path caches whether debug logging is enabled in a thread_local bool, initialized once on first dispatch. When another test triggered DTensor dispatch first, the cached flag stayed false and the C++ HIT logs were silently dropped, causing the assertion on exact log output to fail. The behavior was introduced in #173775, which added cache hit/miss logging to the C++ fast path. The logging-enabled check was cached for performance but never refreshed after that first initialization, so a flag captured before the test enabled logging stayed stale for the lifetime of the thread.

