The development of large-scale integration of optoelectronic neuromorphic devices with ultralow power consumption and broadband response is essential for high-performance bionic vision systems. In this work, we developed a strategy to construct a large-scale (40 × 30) array of enhancement-mode carbon nanotube optoelectronic synaptic transistors with ultralow power consumption (33.9 aJ per pulse) and broadband response (from 365 to 620 nm) by using low-work-function yttrium (Y) gate electrodes and a mixture of eco-friendly photosensitive Ag₂S quantum dots (QDs) and ionic liquids (ILs) cross-linked with poly(4-vinylphenol) (PVP) (ILs-c-PVP) as the dielectric layer. The solution-processable carbon nanotube thin-film transistors (TFTs) showed enhancement-mode characteristics with a wide, controllable threshold-voltage window (-1 V to 0 V) owing to the use of the low-work-function Y gate electrodes. Notably, when the eco-friendly Ag₂S QDs were introduced into the dielectric layer, the carbon nanotube optoelectronic synaptic transistors exhibited high on/off ratios (>10⁶), small hysteresis, low operating voltages (≤2 V), and enhancement-mode behavior even under illumination by ultraviolet (UV, 365 nm), blue (450 nm), green (550 nm), and red (620 nm) light pulses, demonstrating strong tolerance to the threshold-voltage drift caused by various manufacturing scenarios. Furthermore, important bionic functions were demonstrated, including a high paired-pulse facilitation index (PPF index, up to 290%), learning and memory behavior with long retention (200 s), and rapid recovery (2 s). Pavlov's dog experiment (retention time up to 20 min) and visual memory-forgetting experiments (high-current duration of 180 s) were also demonstrated. Significantly, the optoelectronic synaptic transistors can be used to simulate the adaptation of vision to varying light conditions, and we demonstrated the dynamic transition from light adaptation to dark adaptation based on light-induced conditioned behavior. This work provides valuable insights for the future development of artificial vision systems.
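For reference, the two headline figures of merit quoted above follow conventions commonly used in the synaptic-transistor literature; the exact expressions used by the authors are not given in this section, so the standard definitions are assumed here. The PPF index compares the amplitudes of two successive postsynaptic current responses,

\[
\text{PPF index} = \frac{A_2}{A_1} \times 100\%,
\]

where \(A_1\) and \(A_2\) are the postsynaptic currents triggered by the first and second pulses, respectively, and the per-pulse energy consumption is typically estimated as

\[
E = V_{\text{spike}} \times I_{\text{peak}} \times t_{\text{spike}},
\]

with \(V_{\text{spike}}\) the applied spike voltage, \(I_{\text{peak}}\) the peak postsynaptic current, and \(t_{\text{spike}}\) the pulse duration.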