To test the potentially disruptive effect of Artificial Intelligence (AI) transformers (e.g., ChatGPT) and their associated Large Language Models on the time allocation process, both in proposal reviewing and grading, an experiment was set up at ESO for the P112 Call for Proposals. The experiment aims at raising awareness in the ESO community and at building valuable knowledge by identifying what future steps ESO and other observatories might need to take to stay up to date with current technologies. We present here the results of the experiment, which may further be used to inform decision-makers regarding the use of AI in the proposal review process. We find that ChatGPT-adjusted proposals tend to receive lower grades than the original proposals. Moreover, ChatGPT 3.5 generally cannot be trusted to provide correct scientific references, while the most recent version does a better, though far from perfect, job. We also studied how ChatGPT deals with assessing proposals. It does an apparently remarkable job of summarising ESO proposals, although it is less successful at identifying their weaknesses. When evaluating proposals, however, ChatGPT systematically gives higher marks than human reviewers, and tends to prefer proposals written by itself.
In recent years, ESO has undertaken the redefinition of all its front-end interfaces, from the preparation and submission of observing proposals up to their final scientific review by the ESO Observing Programmes Committee (OPC). Because of the overall complexity, ESO decided on a staged approach, which on the one hand allowed us to offer the new proposal interface as soon as it was available, but on the other hand has challenged us with the operational handling of two simultaneous systems, the old and the new. In 2019 we successfully deployed the new web-based Phase 1 user interface for proposal preparation and submission (p1ui), and in 2021 we were able to offer for the first time a web-based proposal grading system that supported the proposal review and Expert Panel discussions, albeit online due to the COVID-19 pandemic. In this presentation, we describe the overall project, focusing on the successful deployments of the new Phase 1 User Interface (for proposal submission) and of the Proposal Evaluation Interface (for the proposal review process), and on their subsequent improvements, made possible by closing the loop very effectively with our users' community. We then conclude by presenting ESO's future plans in the Phase 1 area, including new proposal submission channels and new review schemes.
Monitoring and prediction of astronomical observing conditions are essential for planning and optimizing observations. For this purpose, ESO developed in the 1990s the concept of an Astronomical Site Monitor (ASM), as a facility fully integrated into the operations of the VLT observatory [1]. Identical systems were installed at Paranal and La Silla, providing comprehensive local weather information. Several developments have since given us very good reasons for a major upgrade:
• The need to introduce new features to satisfy the requirements of observing with the Adaptive Optics Facility and to benefit other Adaptive Optics systems.
• Managing hardware and software obsolescence.
• Making the system more maintainable and expandable by integrating off-the-shelf hardware solutions.
The new ASM integrates:
• A new Differential Image Motion Monitor (DIMM) paired with a Multi Aperture Scintillation Sensor (MASS) to measure the vertical distribution of turbulence in the high atmosphere and its characteristic velocity.
• A new SLOpe Detection And Ranging (SLODAR) telescope, for measuring the altitude and intensity of turbulent layers in the low atmosphere.
• A water vapour radiometer to monitor the water vapour content of the atmosphere.
• The old weather tower, which is being refurbished with new sensors.

The telescopes and the devices integrated are commercial products, and we have used the vendors' control systems as far as possible. The existing external interfaces, based on the VLT standards, have been maintained for full backward compatibility. All data produced by the system are fed directly, in real time, into a relational database. A completely new web-based display replaces the obsolete plots based on HP-UX RTAP. We analyse here the architectural and technological choices and discuss the motivations and trade-offs.
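The real-time feed of sensor readings into a relational database can be pictured with a minimal sketch. The actual ASM schema and interfaces are not described in the abstract, so the table layout, column names, and sensor labels below are purely illustrative (SQLite stands in for whatever database engine the system uses):

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical schema for time-stamped site-monitor readings; all names
# (asm_readings, quantity labels, sensor names) are assumptions, not the
# actual ASM design.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE asm_readings (
           ts TEXT NOT NULL,       -- UTC timestamp of the measurement
           sensor TEXT NOT NULL,   -- e.g. 'DIMM', 'MASS', 'SLODAR'
           quantity TEXT NOT NULL, -- e.g. 'seeing_arcsec'
           value REAL NOT NULL
       )"""
)

def record(sensor: str, quantity: str, value: float) -> None:
    """Append one time-stamped reading, as a real-time feed would."""
    conn.execute(
        "INSERT INTO asm_readings VALUES (?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), sensor, quantity, value),
    )

record("DIMM", "seeing_arcsec", 0.82)
record("MASS", "free_atm_seeing_arcsec", 0.41)

# A web-based display would periodically query recent readings:
rows = conn.execute(
    "SELECT sensor, quantity, value FROM asm_readings"
).fetchall()
print(rows)
```

A display layer polling such a table is one simple way to decouple the sensors from the visualisation, which is the property the new web-based display relies on.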
Throughout many years of observations at the VLT, the Phase 2 software applications supporting the specification, execution and reporting of observations have been continuously improved and refined. In particular, the introduction of astronomical surveys propelled the creation of new tools to express more sophisticated, longer-term observing strategies, often consisting of several hundred observations. During the execution phase, such survey programs compete with other service- and visitor-mode observations, and a number of constraints have to be considered. To maximize telescope utilization and execute all programs fairly, new algorithms have been developed that prioritize observable OBs, taking into account both current and future constraints (e.g. OB time constraints, technical telescope time), and suggest the next OB to be executed. As a side effect, the higher degree of observation automation enables operators to run telescopes largely autonomously, with little supervision by a support astronomer. We describe the new tools that have been deployed and the iterative, incremental software development process applied to develop them. We present the key software technologies used so far and discuss potential future evolution in terms of both features and software technologies.
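The idea of ranking observable OBs against current and future constraints and suggesting the next one to execute can be sketched as follows. The abstract does not specify the actual scoring function, so the two score terms used here (urgency from a closing time window, and scientific priority) are illustrative stand-ins for the real constraint model:

```python
from dataclasses import dataclass

# Hedged sketch of short-term OB ranking; field names and weights are
# assumptions, not the deployed ESO algorithm.

@dataclass
class OB:
    name: str
    priority: int                     # 1 = highest scientific priority
    hours_until_window_closes: float  # remaining observability window

def score(ob: OB) -> float:
    """Higher score = should be executed sooner.

    An OB whose time window is about to close becomes urgent; among
    equally urgent OBs, higher scientific priority wins.
    """
    urgency = 1.0 / max(ob.hours_until_window_closes, 0.1)
    return urgency + 1.0 / ob.priority

def next_ob(observable: list[OB]) -> OB:
    """Suggest the next OB among those currently observable."""
    return max(observable, key=score)

obs = [
    OB("survey-042", priority=2, hours_until_window_closes=30.0),
    OB("ToO-007", priority=1, hours_until_window_closes=1.5),
]
print(next_ob(obs).name)  # → ToO-007
```

The point of such a ranking is that the operator only ever needs to confirm a single suggestion, which is what makes largely autonomous telescope operation feasible.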
The start of operations of the VISTA survey telescope will not only offer a new facility to the ESO community, but also
a new way of observing. Survey observation programs typically observe large areas of the sky and might span several
years, corresponding to the execution of hundreds of observing blocks (OBs) in service mode. However, the execution
time of an individual survey OB will often be rather short. We expect that up to twelve OBs may be executed per hour,
as opposed to about one OB per hour on ESO's Very Large Telescope (VLT). OBs of different programs are competing
for observation time and must be executed with adequate priority. For these reasons, the scheduling of survey OBs is
required to be almost fully automated. Two new key concepts are introduced to address these challenges: ESO's phase 2
proposal preparation tool P2PP allows PIs of survey programs to express advanced mid-term observing strategies using
scheduling containers of OBs (groups, timelinks, concatenations). Telescope operators are provided with effective short-term
decision support based on ranking observable OBs. The ranking takes into account both empirical probability
distributions of various constraints and the observing strategy described by the scheduling containers. We introduce the
three scheduling container types and describe how survey OBs are ranked. We demonstrate how the new concepts are
implemented in the preparation and observing tools and give an overview of the end-to-end workflow.
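The three scheduling container types can be modelled as simple data structures. The semantics shown here (a concatenation's OBs run back-to-back; a timelink imposes a minimum delay between consecutive OBs) paraphrase the behaviour described above, but the class and field names are illustrative assumptions, not the P2PP data model:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class OB:
    name: str
    duration_min: float

@dataclass
class Group:
    """Loose collection of OBs that belong to one observing strategy."""
    obs: List[OB]

@dataclass
class Concatenation:
    """Member OBs must be executed back-to-back in a single sequence."""
    obs: List[OB]

    def total_duration_min(self) -> float:
        # Scheduling must reserve time for the whole sequence at once.
        return sum(ob.duration_min for ob in self.obs)

@dataclass
class TimeLink:
    """Consecutive OBs must be separated by at least min_gap_days."""
    obs: List[OB]
    min_gap_days: float

    def next_allowed_day(self, last_executed_day: float) -> float:
        return last_executed_day + self.min_gap_days

con = Concatenation([OB("target", 8.0), OB("sky", 4.0)])
link = TimeLink([OB("epoch1", 10.0), OB("epoch2", 10.0)], min_gap_days=30.0)
print(con.total_duration_min(), link.next_allowed_day(100.0))  # → 12.0 130.0
```

A ranking tool can then treat each container type differently: a concatenation is only observable if its full duration fits, while a timelink member is only observable once its minimum gap has elapsed.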
ESO introduced a User Portal for its scientific services in November 2007. Registered users have a central entry point for
the Observatory's offerings, the extent of which depends on the users' roles - see [1]. The project faced and overcame a
number of challenging hurdles between inception and deployment, and ESO learned a number of useful lessons along the
way. The most significant challenges were not only technical in nature; organizational and coordination issues took a
significant toll as well. We also outline the project's roadmap for the future.
All ESO Science Operations teams operate on Observing Runs, loosely defined as blocks of observing time on a specific instrument. Observing Runs are submitted as part of an Observing Proposal and executed in Service or Visitor Mode. As an Observing Run progresses through its life-cycle, more and more information becomes associated with it: referee reports, feasibility and technical evaluations, constraints, pre-observation data, science and calibration frames, etc. The Manager of Observing Runs (Moor) project will develop a system to collect operational information in a database, offer integrated access to information stored in several independent databases, and allow HTML-based navigation over the whole information set. Some Moor services are also offered as extensions to, or complemented by, existing desktop applications.
The VLT Data Flow System (DFS) has been developed to maximize the scientific output from the operation of the ESO observatory facilities. From its original conception in the mid-90s to the system now in production at Paranal, at La Silla, at the ESO HQ, and externally at astronomers' home institutes, extensive effort, iteration and retrofitting have been invested in the DFS to maintain a good level of performance and to keep it up to date. The result is a robust, efficient and reliable 'science support engine', without which it would be difficult, if not impossible, to operate the VLT as efficiently and successfully as is the case today. Ultimately, the symbiosis between the VLT Control System (VCS) and the DFS, together with the hard work of dedicated development and operational staff, is what made the success of the VLT possible. Although the basic framework of the DFS can be considered 'complete', and the DFS has now been in operation for approximately three years, the implementation of improvements and enhancements is an ongoing process, driven mostly by the appearance of new requirements. This article describes the origin of such new requirements and discusses the challenges faced in adapting the DFS to an ever-changing operational environment. Examples are given of recent new concepts designed and implemented to make the base part of the DFS more generic and flexible. We also describe the general adaptation of the DFS at the system level to reduce maintenance costs, increase robustness and reliability, and, to some extent, keep it in line with industry standards. Finally, the general infrastructure needed to cope with a changing system is discussed in depth.
The operational applications needed to quantitatively assess VLT calibration and science data are provided by the VLT Quality Control system (QC). In the Data Flow observation life-cycle, QC relates data pipeline processing and observation preparation. It allows the ESO Quality Control Scientists of the Data Flow Operations group to populate and maintain the pipeline calibration database, to measure and verify the quality of observations, and to follow instrument trends. The QC system also includes models allowing users to predict instrument performance, and the Exposure Time Calculators are probably the QC applications most visible to the astronomical community. The Quality Control system is designed to cope with the large data volumes of the VLT, the geographical distribution of data handling, and the parallelism of observations executed on the different unit telescopes and instruments.