Effectiveness of Using Additional HIV Self-Test Kits as an Incentive to Increase HIV Testing Within Assisted Partner Services.

Background: Incentives have shown mixed results in increasing HIV testing rates in low-resource settings. We investigated the effectiveness of offering additional HIV self-tests (HIVSTs) as an incentive to increase testing among partners receiving assisted partner services (APS). Setting: Western Kenya. Methods: We conducted a single-crossover study nested within a cluster-randomized controlled trial. Twenty-four facilities were randomized 1:1 to (1) control: provider-delivered testing or (2) intervention: a choice of 1 HIVST or provider-delivered testing for 6 months (pre-implementation), then a switch to offering 2 HIVSTs for 6 months (post-implementation). A difference-in-differences approach using generalized linear mixed models, accounting for facility clustering and adjusting for age, sex, and income, was used to estimate the effect of the incentive on HIV testing and first-time testing among partners in APS. Results: From March 2021 to June 2022, 1127 index clients received APS and named 8155 partners, among whom 2333 reported a prior HIV diagnosis and were excluded from analyses, leaving 5822 partners: 3646 (62.6%) in the pre-implementation period and 2176 (37.4%) in the post-implementation period. Overall, 944/2176 partners (43%) were offered a second HIVST during post-implementation; 34.3% of those offered picked up 2 kits, and 71.7% of those who did reported that the second kit encouraged HIV testing. Comparing partners offered 1 vs. 2 HIVSTs showed no difference in HIV testing (relative risk: 1.01, 95% confidence interval: 0.951 to 1.07) or first-time HIV testing (relative risk: 1.23, 95% confidence interval: 0.671 to 2.24). Conclusions: Offering a second HIVST as an incentive within APS did not significantly affect HIV testing or first-time testing, although partners who opted for 2 kits reported that the second kit encouraged them to test.
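
The interaction between study period and arm is the difference-in-differences quantity of interest here. Below is a minimal sketch of such an analysis on synthetic data; the variable names (tested, post, intervention, facility) are invented, and a Poisson GEE with a log link in Python's statsmodels stands in for the generalized linear mixed models the study actually fitted, as a common way to obtain cluster-robust relative risks.

    # Difference-in-differences sketch on synthetic data (assumed variable
    # names). A Poisson GEE with log link approximates relative risks for a
    # binary outcome while accounting for facility clustering; the study used
    # generalized linear mixed models, so this is an illustration, not its code.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 2000
    df = pd.DataFrame({
        "facility": rng.integers(0, 24, n),    # 24 facilities, as in the trial
        "post": rng.integers(0, 2, n),         # 0 = pre-, 1 = post-implementation
        "intervention": rng.integers(0, 2, n), # 0 = control, 1 = HIVST arm
        "age": rng.normal(35, 10, n),
        "sex": rng.integers(0, 2, n),
        "income": rng.normal(100, 25, n),
        "tested": rng.binomial(1, 0.6, n),     # synthetic outcome, no built-in effect
    })

    model = smf.gee(
        "tested ~ post * intervention + age + sex + income",
        groups="facility",                         # cluster on facility
        data=df,
        family=sm.families.Poisson(),              # log link: exponentiated
        cov_struct=sm.cov_struct.Exchangeable(),   # coefficients are relative risks
    )
    result = model.fit()
    print(np.exp(result.params["post:intervention"]))  # DiD relative risk

Exponentiating the post:intervention coefficient gives the difference-in-differences estimate on the relative-risk scale, analogous to the relative risks reported above.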

De novo variants in ATXN7L3 lead to developmental delay, hypotonia and distinctive facial features.

Deubiquitination is crucial for the proper functioning of numerous biological pathways, such as DNA repair, cell cycle progression, transcription, signal transduction and autophagy. Accordingly, pathogenic variants in deubiquitinating enzymes (DUBs) have been implicated in neurodevelopmental disorders and congenital abnormalities. ATXN7L3 is a component of the DUB module of the Spt-Ada-Gcn5 acetyltransferase (SAGA) complex and of two other related DUB modules, and it serves as an obligate adaptor protein for three ubiquitin-specific proteases (USP22, USP27X or USP51). Through exome sequencing and by using GeneMatcher, we identified nine individuals with heterozygous variants in ATXN7L3. The core phenotype included global motor and language developmental delay, hypotonia and distinctive facial characteristics, including hypertelorism, epicanthal folds, blepharoptosis, a small nose and mouth, and low-set, posteriorly rotated ears. To assess pathogenicity, we investigated the effects of a recurrent nonsense variant [c.340C>T; p.(Arg114Ter)] in fibroblasts of an affected individual. ATXN7L3 protein levels were reduced, and deubiquitylation was impaired, as indicated by an increase in histone H2Bub1 levels. This is consistent with the previous observation of increased H2Bub1 levels in Atxn7l3-null mouse embryos, which show developmental delay and embryonic lethality. In conclusion, we present clinical information and biochemical characterization supporting a role for ATXN7L3 variants in the pathogenesis of a rare syndromic neurodevelopmental disorder.
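
The variant nomenclature encodes a small consistency check: coding position 340 is the first base of codon 114 (340 = 3 x 113 + 1), and among the arginine codons beginning with C (CGA, CGC, CGG, CGT), only CGA becomes a stop codon (TGA) after a C>T change at that base. A short sketch of this check follows; the CGA reference codon is inferred from the p.(Arg114Ter) annotation rather than stated in the abstract.

    # Interpret c.340C>T; p.(Arg114Ter) from HGVS-style coordinates.
    # The CGA reference codon is an assumption inferred from the protein
    # annotation: only CGA -> TGA yields a stop after C>T at the first base.
    cds_position = 340                          # 1-based coding-sequence position
    codon_number = (cds_position - 1) // 3 + 1  # 114, matching Arg114
    offset = (cds_position - 1) % 3             # 0: first base of the codon

    codon = list("CGA")                         # assumed reference codon
    codon[offset] = "T"                         # apply the C>T substitution
    mutant = "".join(codon)                     # "TGA"

    stop_codons = {"TAA", "TAG", "TGA"}
    print(codon_number, mutant, mutant in stop_codons)  # 114 TGA True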

Neural Network Layer Algebra: A Framework to Measure Capacity and Compression in Deep Learning.

We present a new framework to measure the intrinsic properties of (deep) neural networks. While we focus on convolutional networks, our framework can be extrapolated to any network architecture. In particular, we evaluate two network properties: capacity, which is related to expressivity, and compression, which is related to learnability. Both properties depend only on the network structure and are independent of the network parameters. To this end, we propose two metrics: the first, called layer complexity, captures the architectural complexity of any network layer; the second, called layer intrinsic power, encodes how data are compressed along the network. The metrics are based on the concept of layer algebra, which is also introduced in this article. The underlying idea is that global properties depend on the network topology, and that the leaf nodes of any neural network can be approximated using local transfer functions, allowing simple computation of the global metrics. We show that our global complexity metric can be calculated and represented more conveniently than the widely used Vapnik-Chervonenkis (VC) dimension. We also compare various state-of-the-art architectures using our metrics and use the properties to analyze their accuracy on benchmark image classification datasets.
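
Because both metrics depend only on network structure, the inputs they consume can be collected without reading any learned weights. The sketch below is a hypothetical illustration of such structure-only extraction in PyTorch, not the paper's layer-complexity or intrinsic-power computation, whose definitions via layer algebra are given in the article itself.

    # Collect parameter-independent structural descriptors per layer.
    # These are the kinds of inputs a structure-only metric could consume;
    # the paper's actual metric definitions are not reproduced here.
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(32, 10),
    )

    descriptors = []
    for layer in model.modules():
        if isinstance(layer, nn.Conv2d):
            descriptors.append({"type": "conv",
                                "in": layer.in_channels,
                                "out": layer.out_channels,
                                "kernel": layer.kernel_size,
                                "stride": layer.stride})
        elif isinstance(layer, nn.Linear):
            descriptors.append({"type": "linear",
                                "in": layer.in_features,
                                "out": layer.out_features})

    for d in descriptors:
        print(d)  # no weights are read: descriptors depend only on topology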
