Abstract

The HELIOS (High Efficiency Light Intensity Observation System) project addresses the problem of excessive brightness from LEO satellite constellations. The system aims to build a reflectance model of a satellite before launch, enabling validation of compliance with astronomers' guidelines, which recommend an apparent magnitude of 7 or fainter. This ensures that the constellation is not visible to the naked eye and does not excessively interfere with astronomical observations. Additionally, the reflectance model can be used for attitude determination by comparing model-derived light curves with those obtained from observations, an application that extends to Space Traffic Management, debris reentry prediction, and active debris removal. The system uses a light source consisting of 14 high-power LEDs of different wavelengths that replicate the solar spectrum. The light source is mounted on an arc and can slide along a rail, allowing sampling at various elevation angles; the arc itself can rotate about its axis, so the light source can be placed at any point on the hemisphere. At the center of the hemisphere sits a scaled 3D-printed model of the satellite under study, with surface finishes resembling those of the actual satellite. Building models with varying levels of detail makes it possible to assess their impact on the reflectance model. Finally, the Arm4NDO robotic arm (Arm For Near Distance Observations) moves a high-efficiency quantum camera mounted at its end across the entire surface of the hemisphere for data acquisition. The camera records a video, stopping at points of interest to capture frames. To balance continuous data acquisition against reasonable testing times, both the light-source positions and the capture points are spaced every 15° in elevation and azimuth. The initial test campaign aimed to determine the optimal camera settings (exposure, aperture, and ISO) that yield good model visibility against a dark background. Preliminary tests focused on the Starlink chassis and the most significant observation configurations. Frames of interest were extracted from the videos, and their luminous flux (the sum of pixel intensity values) was derived. From these data, curves normalized to the maximum value were plotted, allowing flux variations to be compared across configurations.
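
As a minimal illustrative sketch (not part of the paper's published code), the following Python snippet shows the two quantitative steps described in the abstract under stated assumptions: building the 15° elevation/azimuth sampling grid, and computing a per-frame luminous flux as the sum of pixel intensity values, normalized to the maximum over the sequence. The frames here are synthetic stand-ins for frames extracted from the recorded video, and all function and variable names are hypothetical.

```python
import numpy as np

# 15-degree sampling grid in elevation and azimuth over the hemisphere
# (assumed ranges: 0-90 deg elevation, 0-345 deg azimuth).
elevations = np.arange(0, 90 + 1, 15)   # degrees
azimuths = np.arange(0, 360, 15)        # degrees
grid = [(el, az) for el in elevations for az in azimuths]

def luminous_flux(frame: np.ndarray) -> float:
    """Flux proxy used in the abstract: sum of pixel intensity values."""
    return float(frame.astype(np.float64).sum())

def normalized_curve(frames: list[np.ndarray]) -> np.ndarray:
    """Per-frame flux normalized to the maximum over the sequence."""
    flux = np.array([luminous_flux(f) for f in frames])
    return flux / flux.max()

# Synthetic 8-bit grayscale frames standing in for extracted video frames.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
          for _ in range(len(grid))]

curve = normalized_curve(frames)
print(f"{len(grid)} capture points; normalized flux in "
      f"[{curve.min():.3f}, {curve.max():.3f}]")
```

In practice the frames would come from the camera's video at each grid point rather than being generated, but the flux summation and peak normalization are the same.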
