1. INTRODUCTION

The capabilities of nanosatellites have been growing rapidly since the first CubeSats were launched in 2000. Over one thousand launches later,1,2 CubeSats have proven themselves to be an extremely successful satellite platform. Their popularity is driven by their compact, standardised form factor, which is based on units of 10 cm × 10 cm × 10 cm (i.e. 1U), weighing ~1 kg/U.3 Larger CubeSats can be built measuring in multiples of 1U, typically up to 6U. Their small-scale, standardised design opens the possibility of using Commercial Off-The-Shelf (COTS) components and reduces typical barriers to entry, such as the high costs and lengthy development times associated with conventional space missions. Until recently, CubeSats were viewed primarily as tools for education, technology demonstration, COTS qualification, communications and Earth observation.4 However, their capabilities for certain scientific applications in astrophysics and planetary science are now becoming more widely considered,5,6 with major space agencies implementing CubeSats as elements within their programmes.
For example, in 2018, as part of NASA’s InSight Mars lander mission, two 6U CubeSats were used to provide a continuous communication link to the lander during its entry and descent to the Martian surface.7 As a result of these capabilities, as well as their accessibility and growing popularity, more nations are using CubeSats as a method of increasing their profile within the space sector.6 However, a factor still impeding the wider utilisation of CubeSats is their failure rate: in 2018, ~25% of CubeSats launched failed to meet their primary objectives due to early loss of the mission.4 While this figure has improved significantly over the last two decades (the failure rate was 50% in 20084), the development methodology around CubeSats, where fewer resources are invested to allow for increased accessibility, naturally leads to higher levels of risk and lower levels of reliability (i.e. likelihood of mission success). As CubeSat projects are often developed by university teams, the limited time, resources and experience available in this setting also likely impacts mission success rates.8 Overcoming this issue for the Educational Irish Research Satellite, known as EIRSAT-1, is a key driver behind this work. EIRSAT-1 is a 2U CubeSat (Fig. 1) being developed by a student-led team at University College Dublin (UCD),9 and is set to be Ireland’s first satellite. The design, build, test, and launch of the satellite are supported by the Education Office of the European Space Agency (ESA), under the 2nd round of the Fly Your Satellite! (FYS!) programme. As part of this opportunity, teams obtain access to state-of-the-art test facilities, receive guidance from ESA experts and, if certain milestones are achieved, are awarded a launch opportunity. EIRSAT-1 was one of six university-class missions selected in May 2017 to participate in this round of the FYS!
programme, following the launch of 3 CubeSats in 2016 as part of round 1.10 As the first Irish satellite, the primary objectives of the EIRSAT-1 project are educational and focus on improving the capabilities of the national higher education sector in space science and engineering. In addition to educational aims, the mission will also perform the first in-flight test of three experiments. The ENBIO Module (EMOD) is a materials science payload hosting a thermal control experiment,11,12 Wave Based Control (WBC) is a software-based payload testing a novel attitude control algorithm,13 and the Gamma-ray Module (GMOD) is a miniaturised γ-ray detector14 that will observe high-energy radiation from the most luminous electromagnetic explosions in the Universe, known as Gamma-ray Bursts15 (GRBs). The in-orbit demonstration of, and science performed with, these payloads aims to further highlight the capabilities of CubeSats, and the potential of CubeSat constellations, for conducting scientific research. To improve the likelihood of achieving these aims, the EIRSAT-1 team are developing robust testing techniques by assessing and building on the testing commonly used by other CubeSat teams to validate that the full satellite system, including the hardware and software of the space- and ground-segments, can perform the mission prior to launch. These testing techniques aim to mitigate the high levels of risk typically associated with CubeSats, as well as the risks associated in particular with aspects of the EIRSAT-1 project (i.e. student-led, no in-house experience from previous missions, multiple novel and complex payloads). This paper will present an overview of the testing performed by CubeSat teams, with a focus on Mission Testing. In Mission Tests, the in-flight operations, including nominal scenarios and contingency procedures for non-nominal scenarios, are simulated in a mission representative manner and evaluated.
As part of this discussion, key reasons for the different testing decisions taken by CubeSat teams will be considered. The apparent impact of these decisions on the on-orbit experience will also be reviewed. This work will draw on information gathered from literature, as well as from a survey disseminated to CubeSat teams with launch and on-orbit experience. The main details of a Mission Test plan that has been developed by the EIRSAT-1 team in light of this information will be presented. Future plans to assess the findings of this study following launch (in late ~2021/22) and operation of EIRSAT-1 will be identified. The aim of this work is to provide a resource for CubeSat teams, to assist the development of their Mission Testing and to improve the predicted reliability of CubeSats.

2. CUBESAT TESTING

Testing is a key and intuitive way to ensure that a system is capable of achieving the mission objectives. This can also be referred to as Verification & Validation (V&V), where testing is used as a means to verify that the mission requirements (a formal set of requirements set out at the onset of a mission that define the criteria for mission success) can be satisfied and to validate that the whole system can perform the intended mission, at the core of which are the requirements.16 The International Organisation for Standardisation (ISO) states that ‘a certain set of tests is necessary to ensure the mission success of small spacecraft. Applying the same test requirements and methods as those applied to traditional large/medium satellites, however, will nullify the low-cost and fast-delivery advantages possessed by small spacecraft’. The standard ISO 19683:2017(E),17 Design qualification and acceptance tests of small spacecraft and units, was put in place specifically to address the reliability of small spacecraft through testing, to curb their failure rate while maintaining their appeal as alternatives to conventional space missions.
This approach is adopted by many CubeSat teams (e.g. Endurosat18) and programmes (e.g. ESA’s FYS!), where spacecraft testing and testing standards are followed but tailored for CubeSat purposes. Similar to larger satellite projects, initial testing occurs during the development phase of a CubeSat. This can include unit-level software tests, where individual components of source code are tested, through to integration and hardware-in-the-loop tests. More formal testing, and a significant part of the V&V process, however, then occurs at full system level, where all satellite parts are included in the configuration. This typically occurs first at a table-top ‘FlatSat’ level (demonstrated in Fig. 2), with some V&V of the system prior to ‘stack’ (Fig. 2) integration and testing.19 A range of tests can then be carried out on the full system, first as part of Functional Testing, where different, specific functions of the satellite and its subsystems (e.g. battery charging/discharging, RF transmission/reception, on-board data storage, etc.) are tested, and then as part of Mission Testing, where the performance of the full system is tested in a mission simulation. Additional testing performed by CubeSat teams includes vibration and thermal-vacuum (TVAC) testing, where parts of and/or the full satellite are subjected to some of the physical extremes experienced during spaceflight.19,20 All of the above-mentioned tests are important for the V&V process. In particular, Functional Testing is often used by teams as the main method for verifying the bulk of the requirements. As Functional Testing can be divided into parts, it is also commonly repeated, at least in part, throughout the development and testing of a CubeSat. However, only Mission Testing requires that flight representative conditions are simulated during the test.
This aspect of Mission Testing has the potential to yield a surprising number of mission critical issues which other testing easily misses, and additionally offers a number of benefits beyond V&V of requirements, such as training of the Mission Control Team, where some of the complexities of on-orbit operations are experienced through the simulation. Therefore, Sec. 3 of this work discusses Mission Testing for CubeSat projects in more depth to assess its impacts on mission reliability.

3. MISSION TESTING

According to the European Cooperation for Space Standardisation (ECSS) standard ECSS-E-ST-10-03C,21 Space engineering: testing, a Mission Test should:
Expanding on the V&V carried out via Functional Testing, Mission Testing primarily aims to validate the mission and to address potential issues related to the actual on-orbit operation of the satellite, such as issues related to the order and duration of operations. Testing with these aims is sometimes referred to as the ‘test as you fly’ approach. Another term with a similar definition that is commonly found in literature is ‘Day in the Life’ (DITL) testing. However, as the name suggests, and in contrast to the ECSS definition where the ‘entire mission profile’ should be simulated, in some definitions of DITL (e.g. in Ref. 22), the primary aim is to demonstrate to launch authorities that the mission will meet requirements around deployables, timers, power inhibits, etc. during the initial stages (i.e. the initial ~24 hours) of the mission. A less frequently implemented approach is ‘Week in the Life’ (WITL) testing,23 which is closer to the ECSS definition of Mission Testing. The high failure rate of CubeSats is a consequence of so-called Dead On Arrival (DOA) cases, in which ~20%2,24 of CubeSats are never contacted following launch, and of early mission loss. Early mission loss contributes to the high failure rate in particular over the first ~100 days of operations, during which the predicted reliability of a CubeSat (calculated from data on mission failures over time) drops by ~20%.24 The need for more testing is a common conclusion of studies assessing CubeSat reliability.8,23,24 While Functional Testing is of obvious importance to improving reliability, by ensuring that the satellite can at a minimum function as required for mission success, Mission Testing is also considered by some (e.g. Ref. 8) to be an essential step, as it allows issues related to the early operational phase of a satellite (i.e. during those first ~100 days) to be uncovered and addressed prior to launch.
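As an illustration of how such a predicted reliability curve can be derived from failure data, the following sketch computes the surviving fraction of a hypothetical fleet over mission-elapsed time. The fleet size and failure days below are invented for illustration only and are not real CubeSat statistics:

```python
# Illustrative sketch: empirical reliability as the fraction of a fleet
# still operating at mission-elapsed day t. DOA craft are recorded as
# failing at day 0. All numbers are invented for illustration.

def reliability(failure_days, fleet_size, t):
    """Fraction of the fleet still operating at day t."""
    failed = sum(1 for d in failure_days if d <= t)
    return 1.0 - failed / fleet_size

fleet_size = 20
failure_days = [0, 0, 0, 0, 5, 12, 30, 45, 80, 95, 200]  # day of loss per failed craft

print(reliability(failure_days, fleet_size, 0))    # → 0.8 (DOA losses counted immediately)
print(reliability(failure_days, fleet_size, 100))  # → 0.5 (further drop over the first ~100 days)
```

With these invented numbers, reliability falls from 0.8 at launch to 0.5 by day 100, mimicking the steep early-mission drop described above.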
However, a search of the available literature on CubeSat testing does not reflect this perspective, with far fewer references to Mission Testing compared to other CubeSat testing techniques. Indeed, some adaptations of satellite testing standards for CubeSats, such as ESA’s Tailored ECSS Engineering Standards for In-Orbit Demonstration CubeSat Projects,25 actually exclude Mission Testing as a requirement, instead suggesting that some testing, e.g. of ‘operational mode transitions and safe mode recovery’ scenarios, should be covered through Functional Tests. Such an approach relates to the low-cost, fast-delivery advantages of CubeSats, as Ref. 8 states: ‘it is one thing to identify [Mission Tests] as essential; it is another to carve out schedule and budget to carry out such tests’. These considerations lead to the main question to be addressed by this study, viz. could a lack of Mission Testing be a factor that significantly impacts the reliability of CubeSat missions? Reviewing the literature in which CubeSat Mission Testing26 is discussed, one notable feature is the short time-frame defined for the test (i.e. a duration of 8 hours26 is short when considering a Mission Test that should cover simulated situations occurring over the entire mission profile). However, Ref. 26 does state that their tests included both nominal and non-nominal situations that went beyond the first day of operations, which is in keeping with the ECSS Mission Test definition, even though a highly accelerated mission life-cycle must have been simulated to suit the test duration. The information provided by Ref. 26 raises questions about the norms, if any, used by CubeSat teams when designing their Mission Tests, such as: what test durations are typically used? Are tests repeated? How representative do the tests aim to be? Building on the first question posed above in this section (i.e.
could a lack of Mission Testing be impacting the reliability of CubeSats), this study further considers what Mission Testing entails when it is performed, with the aim of identifying which characteristics of Mission Testing CubeSat teams consider key to improving mission reliability. While only limited literature on Mission Testing exists, this does not necessarily mean that such testing is not being carried out by many CubeSat teams. As this testing can be quite mission-specific, some teams may not see the need to publish details. Additionally, writing up publications requires additional effort and resources that may not be available within a team. Therefore, Sec. 3.1 of this work presents a survey, disseminated to CubeSat teams with launched missions, that was developed to better understand the extent to which Mission Testing is performed by CubeSat teams, and to assess whether the testing decisions made have an impact on mission reliability. By focusing in particular on the potential importance of Mission Testing, this work builds on other survey-style studies that collect information from those with on-orbit experience to investigate the leading reasons for CubeSat failures and improve their reliability. For example, Ref. 23 interviewed teams to build a set of recommendations for new projects, Ref. 24 combined publicly available data with survey results to assess known and believed reasons for on-orbit failures as well as the time-dependence of these failures, and Ref. 27 used surveys to identify design decisions that impact the likelihood of mission success.

3.1 Mission Testing Survey

The survey, titled Mission Testing vs. On-Orbit Mission Operations, was developed to test the hypothesis that extensive Mission Testing prior to launch is an important step towards improving the reliability and performance of a CubeSat mission.
Participants were required to have good general knowledge of the pre-launch testing and (if launched successfully) the post-launch operations of the satellite. The survey was composed of three sections:
The final page of this survey also asked participants to consider the main lesson learned that they would apply to future test campaigns. In addition to questions with defined single and multiple choice answers, participants were also given the opportunity to elaborate on answers via More Information boxes following each question. The full survey, including all questions and the question logic applied based on the participants’ responses, is presented in Appendix A. This survey was conducted using the website JotForm and can be accessed at https://eu.jotform.com/form/201262644487053. Over the course of ~6 months, 8 participants anonymously took part in this study.

3.1.1 Results

The main findings of the survey are as follows:
Table 2. Number of Mission Tests carried out by the CubeSat teams and the test durations used.
Key results from this section of the survey are summarised in Fig. 4.
3.1.2 Analysis & Discussion

An initial observation from the results presented above is that, in contrast to what the lack of literature on Mission Testing might suggest, the survey responses indicate that Mission Testing, in at least some capacity, may be performed by many CubeSat teams (75% of the participants indicated that Mission Testing was performed for their mission). The responses further show that Mission Testing is a testing technique that is considered valuable by CubeSat teams regardless of the project setting (i.e. not just a testing technique favoured by industry-led projects, which typically have more resources), as 50% of the CubeSats for which Mission Testing was carried out were student-led. The remaining projects were industry-based CubeSat projects. While 75% of the survey participants indicated that the ECSS definition of Mission Testing was conducted by their teams, and although the main characteristics of the test are defined in ECSS and other (e.g. ISO) standards, another clear finding from the responses is that Mission Test execution varies significantly between different CubeSat teams. This can be noted throughout the results presented above but is particularly obvious for the durations of the Mission Tests carried out, as well as for the number of times each team repeated Mission Testing (see Tab. 2). Tab. 4 further emphasises this point by showing the total time spent Mission Testing by each team, which spans a wide range from ~hours to ~weeks. The differences in Mission Testing noted in this survey may partly be explained by the fact that no participant indicated that ‘a recommended test duration was used’, which was one of a number of multiple-choice answers to the question ‘Why was this duration chosen for the Mission Test(s)?’. This suggests that each team independently defined their own Mission Test characteristics, such as the test duration, without strong instruction from standards, requirements, etc.
This all suggests that few, if any, norms around Mission Testing are shared and used by CubeSat teams when designing their tests. Is this due to the lack of literature, the lack of CubeSat-specific testing standards or, as the responses of this survey allude to, is it primarily driven by the limited amount of time and resources typically invested into a CubeSat project? Table 4. Total time each team spent Mission Testing (developed from Tab. 2).
Although, by definition, less time and fewer resources are invested into CubeSat projects compared to conventional spacecraft development projects, this survey highlights that some areas of satellite development, such as comprehensive system-level testing, including Mission Tests, are needed to improve the likelihood of mission success. The negative impacts of not investing time and resources into such tests are most clearly observed for the CubeSat team that spent only ~hours in total Mission Testing (i.e. the first team listed in Tab. 4). The participant associated with this CubeSat mission indicated that schedule constraints (i.e. ‘a launch deadline’) led to this test duration being used. While one of the main issues with a short Mission Test duration is that risks associated with age-related failures are not well mitigated, this participant identified another issue in that the team were highly limited in what could and should be tested within the time frame (e.g. the participant indicated that it was not possible to simulate any non-nominal situations in this time). Communication with this CubeSat was never confirmed following launch. The participant goes on to suggest that improved Mission Testing could potentially have prevented the loss of their mission. This survey response clearly demonstrates how the quality and characteristics of the Mission Testing performed by teams can impact the likelihood of mission success and, more specifically, shows that a Mission Test duration of a couple of hours is not sufficient to improve mission reliability. This response also highlights that testing decisions, such as the amount of time allocated to a test, must be driven primarily by the test objectives and the realistic timeline in which these objectives can be achieved, rather than by schedule demands, for the test to have a positive impact on the reliability of the mission. As the second team listed in Tab.
4 experienced launch failure, the remaining 4 entries show the testing times for the teams who performed Mission Testing and achieved their primary mission aims. The participants from all 4 teams indicated that they had ‘mostly nominal’ on-orbit experiences and only ‘rarely’ or ‘occasionally’ encountered issues not considered for testing. Therefore, no further comparisons between the type of testing carried out by these teams and their on-orbit successes/failures can be drawn from the survey responses. However, it is worth noting that all of these teams performed, in total, at least ~days worth of Mission Testing. This test duration seems more consistent with what would be the expected duration required to achieve the objectives of the ECSS Mission Test, where both nominal and non-nominal situations expected to occur over the entire mission profile are simulated. Participants’ responses to the lessons learned question emphasised the importance of simulating on-orbit operations as realistically as possible during Mission Tests. This is consistent with the main goal that Mission Testing aims to achieve for a mission beyond other testing where, as stated above, one of the primary aims is to address potential issues related to the actual on-orbit operation of the satellite. This feedback is also largely supported by further results from the survey, presented in Tab. 3, showing that key aspects of real on-orbit operations were simulated by many of the CubeSat teams who performed Mission Testing. One noteworthy and conflicting result from Tab. 3, however, is that only 2 out of the 6 teams who performed Mission Tests incorporated limited 2-way communications (i.e. short communication ‘windows’ separated by long durations that result from the CubeSat’s orbit) between the space and ground segments into their testing. 
The most obvious and likely reasons for not simulating limited communications as part of a Mission Test are related to time, as simulating realistic communication constraints (only ~tens of minutes of interaction with the spacecraft per day) will increase the required test duration significantly. However, when the aim is to perform a test that is as representative as possible, a strong argument can be made that this aspect of real on-orbit operations (or, at least, a reduced version of this aspect) is as essential as any of the other aspects listed when considering the risks that are mitigated and the uncertainties that are addressed - e.g. is the spacecraft capable of performing the mission with only ~minutes of contact with the ground station per day? How robust is the spacecraft to surviving anomalies without immediate intervention from the Mission Control Team? Can enough of the data generated onboard be downlinked during the short communication windows to meet the mission requirements? Furthermore, the decision not to simulate realistic communication constraints can actually impact how representative other aspects of the Mission Test can be. For example, as the spacecraft will be transmitting data more frequently without long breaks between communication windows, the spacecraft’s power consumption will not mimic the on-orbit behaviour of the system. Additionally, if there is no constraint on communication, the Mission Control Team and protocols will not experience the many stresses related to fixed and limited communication times. This may explain why 3 out of the 4 participants from teams who performed Mission Testing and later operated their spacecraft on-orbit indicated that Mission Testing was only ‘somewhat’ representative of real operations, and may also explain the main reason behind the lessons learned from each. 
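The downlink question raised above can be made concrete with a rough budget calculation. The figures in the sketch below (a 9600 bps UHF link, ~3 usable passes per day of ~200 s each, ~50% protocol efficiency) are illustrative assumptions, not values from any mission discussed here:

```python
# Rough daily downlink budget for a LEO CubeSat with limited
# communication windows. All parameter values are illustrative.

def daily_downlink_bytes(passes_per_day: int,
                         usable_seconds_per_pass: float,
                         link_rate_bps: float,
                         protocol_efficiency: float) -> float:
    """Upper bound on payload data (bytes) retrievable per day."""
    usable_seconds = passes_per_day * usable_seconds_per_pass
    return usable_seconds * link_rate_bps * protocol_efficiency / 8.0

# Example: 3 passes/day, ~200 s of clean link each, 9600 bps, 50% efficiency
budget = daily_downlink_bytes(3, 200.0, 9600.0, 0.5)
print(f"~{budget / 1024:.0f} KiB/day")  # ~352 KiB/day under these assumptions
```

Comparing such a figure against the expected daily data volume from the payloads is a quick way to check, before a Mission Test even begins, whether the mission requirements on data return are plausible under realistic communication constraints.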
As only 2 participants indicated that Mission Testing was not performed for their mission, and as only one of these participants indicated that their CubeSat had been launched at the time of the response, the impacts, negative or otherwise, of not performing Mission Testing on mission reliability cannot be shown from this work. This result would require more survey respondents (see Sec. 5). However, a clear finding from the current set of responses is that CubeSat teams who perform Mission Testing rate it highly as a tool to improve the likelihood of mission success (all participants for whom the primary aims of the mission were achieved marked Mission Testing as a ‘significant’ contributor to their success and, for the case where the aims were not achieved, the participant believed that improved Mission Testing could have prevented the loss of their mission). Therefore, the findings of this survey support the earlier hypothesis that Mission Testing is an important step to improving the predicted reliability and performance of a mission prior to launch.

3.2 EIRSAT-1 Mission Testing

EIRSAT-1 is set to be Ireland’s first satellite. As such, recommendations in the form of standards, requirements and lessons learned are extremely valuable to the EIRSAT-1 team, as in-house experience and expertise in satellite projects are only now being fostered. For this reason, first-time projects like EIRSAT-1 also benefit in particular from being involved in a structured programme such as ESA’s FYS!, where teams must follow the programme’s requirements, which have been set out by CubeSat experts, and further recommendations are readily available. Some of the main recommendations related to Mission Testing that the EIRSAT-1 team have received as part of FYS! are given in Sec. 3.2.1.
Support from those with expertise is especially important for testing, and in particular for Mission Testing, as the team try to mitigate failures and simulate scenarios that have not yet been experienced first-hand. Therefore, in addition to the support received as part of the FYS! programme, the EIRSAT-1 team have also aimed to build knowledge from other CubeSat teams with different development, test, launch and operations experiences. This is a leading driver behind the survey presented in Sec. 3.1. The impact of the survey results to date on the team’s test plans is mentioned throughout the next section, where the main characteristics of the EIRSAT-1 Mission Test campaign and the rationale behind these characteristics are discussed.

3.2.1 Test Plan

Table 5. Main characteristics of the EIRSAT-1 Mission Test campaign.
Mission Testing is a requirement for CubeSat teams participating in the FYS! programme, as FYS! consider it to be an essential test to perform for mission validation prior to launch. The programme recommends that Mission Tests should last a minimum duration of 1-2 weeks, uninterrupted. During this time, it is expected that both nominal and non-nominal situations are simulated, starting from launch through to the ‘normal’ mission operations phase. Similar to the feedback from the survey, FYS! stress the importance of simulating realistic on-orbit conditions as best as possible during Mission Testing - ‘test as you fly!’. Use of the mission’s real operational procedures, communication over a radio link, use of the actual ground segment set-up, etc. are all encouraged. In light of these recommendations, a Mission Test duration of ~3 weeks was chosen by the EIRSAT-1 team, as this was considered sufficient to perform the test described in this section. In an effort to be as representative as possible, the EIRSAT-1 Mission Tests will include some realistic aspects of on-orbit operations, which are listed in Tab. 5. The aspects in this table include those provided as multiple-choice options to the survey participants, but additionally include the simulation of the physical state of the spacecraft at different points in the mission, as well as the simulation of realistic power constraints. For the former, “launch” and “deployment” into orbit will be conducted in a representative manner with respect to the spacecraft’s power systems, where deployment switches will prevent the CubeSat from powering on in its launch configuration until deployment.
Furthermore, tumbling of the spacecraft at ~tens of degrees/second following deployment, and for later attitude control tests, will be simulated using a rig developed to rotate the CubeSat.28 For the latter, software to control the power supply responsible for charging the spacecraft has been developed to simulate a realistic on-orbit charging cycle, via sunlight incident on the spacecraft’s solar cells for ~45 minutes every ~90 minutes (based on a spacecraft in Low Earth Orbit). As limited 2-way communication is also being simulated as part of the test, the rate of battery discharge will similarly be representative, as the spacecraft’s radio will only be transmitting large quantities of data during ~minutes-long communication windows, which are separated by ~hours. A variety of nominal and non-nominal situations will be simulated during the ~3 weeks of Mission Testing, starting from pre-launch preparations where the CubeSat is prepared for shipment to the launch provider. Examples of the situations to be considered are given in Fig. 5. Nominally, several of these situations will be simulated during each testing day. However, non-nominal situations and any anomalies encountered (i.e. unexpected events which are being treated as part of the Mission Test - advice from the survey: ‘Don’t assume that testing covers all scenarios, test what happens when things break down unexpectedly and see if the satellite can recover.’) may require more time to resolve. Following the main Mission Test, the EIRSAT-1 team have additionally decided to test a series of worst-case, non-nominal Launch and Early Operations Phase (LEOP) situations, where, for instance, the CubeSat is launched with low battery or where an anomaly occurs that causes the spacecraft to reboot several times prior to antenna deployment. These situations are being considered in the context of a full Mission Test simulation, rather than e.g.
as part of a unit test of the post-launch software or a Functional Test of the antenna deployment mechanism, to further mitigate risk to mission success during a highly critical point in the mission. The reasoning for this additional testing is also supported by the fact that many of the participants from the survey indicated that it was during LEOP, or soon after, that situations not considered for testing were encountered. The EIRSAT-1 team will perform Mission Tests twice, on 2 separate fully integrated models of the spacecraft, known as the Engineering and Qualification Model (EQM) and the Flight Model (FM) (see Sec. 4 for more information on this model philosophy). One of the main advantages of repeating the test relates again to this being a first-time project, as the EQM test will allow the EIRSAT-1 team to practice the Mission Test and develop it further for the FM. Repeating the test is also essential for the EIRSAT-1 team, as some aspects of the Mission Test will only be possible for the FM test campaign (e.g. see Tab. 5). While preferably these aspects would be included in both of the Mission Tests to be performed, the associated risks and resources were assessed by the EIRSAT-1 team, and performing the test at least once in full was deemed satisfactory to meet the aims of Mission Testing, particularly given the resources being invested into each test campaign. In addition to the formal Mission Test campaigns of the EQM and FM, a number of smaller, informal ~hours-long Mission Tests have also been carried out by the EIRSAT-1 team during the previous ~months. These tests have been conducted to facilitate final development of the hardware, software, operational procedures and test plans required for the Mission Test. Additionally, these tests have served as a training mechanism for the test operators, who should be familiar with the Mission Control protocols prior to the test.
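The on-orbit charging cycle described in the test plan (~45 minutes of incident sunlight per ~90-minute Low Earth Orbit) can be sketched as a simple sunlit/eclipse schedule for a programmable bench supply. The supply-control interface and panel power figure below are hypothetical illustrations, not the EIRSAT-1 implementation:

```python
# Minimal sketch of a LEO sunlight/eclipse charging profile, as could
# drive a programmable power supply during a Mission Test. The timing
# (~45 of every ~90 minutes in sunlight) follows the test plan; the
# panel power value is an invented placeholder.

ORBIT_PERIOD_MIN = 90.0
SUNLIT_FRACTION = 0.5  # ~45 of every ~90 minutes in sunlight

def in_sunlight(mission_elapsed_min: float) -> bool:
    """True while the simulated spacecraft is in the sunlit arc of its orbit."""
    phase = mission_elapsed_min % ORBIT_PERIOD_MIN
    return phase < ORBIT_PERIOD_MIN * SUNLIT_FRACTION

def supply_setpoint_watts(mission_elapsed_min: float,
                          panel_power_w: float = 2.5) -> float:
    """Power the bench supply should inject: panel power in sunlight, 0 W in eclipse."""
    return panel_power_w if in_sunlight(mission_elapsed_min) else 0.0
```

Polling `supply_setpoint_watts` against a mission-elapsed clock and writing the result to the supply reproduces the alternating charge/discharge behaviour the battery would see on orbit, which is what makes the simulated power budget representative.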
As these details show, extensive Mission Testing can be expensive in terms of resources, particularly for a university-class CubeSat project where resources are limited. However, the findings from the survey, as well as the recommendations from FYS!, give the EIRSAT-1 team confidence that this investment in Mission Testing is extremely worthwhile for mission reliability. Sec. 5 of this work discusses plans to evaluate how this investment in Mission Testing succeeds in practice, given first-hand on-orbit experience with the EIRSAT-1 mission.

4. ADDITIONAL METHODS TO IMPROVE RELIABILITY

This section lists some additional methods and considerations that can be used by CubeSat teams to improve the predicted reliability of their mission:
5. FUTURE WORK

This paper presents the initial results of a survey disseminated to CubeSat teams with launch and on-orbit experience. As ~1300 CubeSats have been launched to date and the number of launches per year is increasing,1 the potential pool of participants for this study is sizable and steadily growing. Future work therefore aims to build on the findings presented here with input from more survey participants. In addition to gathering more survey responses, future work also aims to draw on the first-hand experiences of the EIRSAT-1 team. As part of this plan, the EIRSAT-1 team will answer in detail the questions posed to survey participants on their mission operations. The EIRSAT-1 Mission Test plan, presented in Sec. 3.2.1, has been developed from the support, recommendations and lessons learned mentioned in this paper. By learning from those with on-orbit experience, the team have aimed to develop test plans that will improve the predicted reliability of the mission and prepare the team for launch. In light of first-hand test, launch and on-orbit operations experience, the EIRSAT-1 team will assess whether this aim was achieved, highlighting which aspects of the Mission Test plan described in Sec. 3.2.1 were most and least beneficial and what should be improved. Using this information, a more comprehensive test guide will be developed to help teams improve the predicted reliability of their CubeSat through Mission Testing.

6. CONCLUSION

This work supports the hypothesis that extensive Mission Testing is an important step towards improving the reliability and performance of a CubeSat prior to launch. In particular, the initial results of a survey, disseminated to CubeSat teams with launch and on-orbit experience, show that Mission Testing is viewed by teams as a valuable tool to improve their likelihood of mission success.
However, the results of this survey also show that the scope of Mission Testing performed varies significantly across CubeSat teams. This finding highlights a need for clearer guidance (e.g. in the form of standards or requirements) for CubeSat teams on the quality of the testing that should be performed to ensure a positive impact on the predicted reliability of their missions. Such guidance is particularly important because CubeSat teams typically face constraints on time and resources; allocating sufficient resources to Mission Testing, with the view that comprehensive Mission Testing is an essential part of a CubeSat development project, should therefore be encouraged. By providing details of the EIRSAT-1 test plans, this and future work aims to act as a resource for CubeSat teams, assisting the development of, and reducing the time required to develop, comprehensive Mission Test plans.

Appendices

APPENDIX A. MISSION TESTING VS. ON-ORBIT MISSION OPERATIONS SURVEY

This section shows the questions that were presented to survey participants as part of Sections 1) Background, 2) CubeSat Testing and 3) Mission Operations. Participants were not required to answer all questions. However, responses to up to 5 questions, highlighted below with red asterisks, were required to facilitate question logic. Participants were also given the opportunity to expand on answers as desired via ‘More Information’ boxes following each question.

A.1 Background

A.2 CubeSat Testing

A.3 Mission Operations

Notes

[1] Note that some terms used in this work, such as ‘Functional Testing’ and ‘Mission Testing’, are commonly but not exclusively used by other CubeSat projects to describe similar testing.

[2] While the term ‘Mission Test’ may not specifically be used, similar testing aims and methods are described.

ACKNOWLEDGMENTS

We kindly thank all those who have participated in the Mission Testing vs. On-Orbit Mission Operations survey to date.
We acknowledge all students who have contributed to EIRSAT-1. The EIRSAT-1 project is carried out with the support of ESA’s Education Office under the Fly Your Satellite! 2 programme. This study was supported by the European Space Agency’s Science Programme under contract 4000104771/11/NL/CBi. The EIRSAT-1 team further acknowledges support from Parameter Space Ltd. MD, RD, DM, LS and JT acknowledge support from the Irish Research Council (IRC) under grants GOIP/2018/2564, GOIPG/2019/2033, GOIPG/2014/453, GOIPG/2017/1525 and GOIPG/2014/684, respectively. DM, AU and JM acknowledge support from Science Foundation Ireland under grant 17/CDA/4723. SW acknowledges support from the European Space Agency under PRODEX contract number 400012071. JE and JR acknowledge scholarships from the UCD School of Physics. LH acknowledges support from SFI under grant 19/FFP/6777.

REFERENCES

1. Kulu, E., “Nanosats Database,” (2021). https://www.nanosats.eu/
2. Swartwout, M. A., “CubeSat Database,” (2021). https://sites.google.com/a/slu.edu/swartwout/home/cubesat-database
3. Cal Poly, “CubeSat Design Specification (1U-12U), Rev. 14,” CP-CDS-R14 (2020).
4. Straub, J., Villela, T., Costa, C. A., Brandão, A. M., Bueno, F. T., and Leonardi, R., “Towards the thousandth CubeSat: A statistical overview,” International Journal of Aerospace Engineering, 5063145 (2019).
5. Shkolnik, E. L., “On the verge of an astronomy CubeSat revolution,” Nature Astronomy, 2(5), 374–378 (2018). https://doi.org/10.1038/s41550-018-0438-8
6. Woellert, K., Ehrenfreund, P., Ricco, A. J., and Hertzfeld, H., “Cubesats: Cost-effective science and technology platforms for emerging and developing nations,” Advances in Space Research, 47(4), 663–684 (2011). https://doi.org/10.1016/j.asr.2010.10.009
7. Schoolcraft, J., Klesh, A. T., and Werne, T., “MarCO: Interplanetary Mission Development On a CubeSat Scale,” in SpaceOps Conference, 2016-2491 (2016).
8. Swartwout, M., “The First One Hundred CubeSats: A Statistical Look,” Journal of Small Satellites, 2(2), 213–233 (2013).
9. Murphy, D., Joe, F., Thompson, J. W., Doyle, M., Erkal, J., Gloster, A., O’Toole, C., Salmon, L., Sherwin, D., Walsh, S., de Faoite, D., McBreen, S., McKeown, D., O’Connor, W., Stanton, K. T., Ulyanov, A., Wall, R., and Hanlon, L., “EIRSAT-1 - The Educational Irish Research Satellite,” in 2nd Symposium on Space Educational Activities, 2018-73 (2018).
10. Vanreusel, J., “Fly Your Satellite! The ESA Academy CubeSats programme,” in ITU Symposium & Workshop on Small Satellite Regulation and Communication Systems (2016).
11. Doherty, K. A. J., Dunne, C. F., Norman, A., McCaul, T., Twomey, B., and Stanton, K. T., “Flat Absorber Coating for Spacecraft Thermal Control Applications,” Journal of Spacecraft and Rockets, 53(6), 1035–1042 (2016). https://doi.org/10.2514/1.A33490
12. Doherty, K., Twomey, B., McGlynn, S., MacAuliffe, N., Norman, A., Bras, B., Olivier, P., McCaul, T., and Stanton, K., “High-Temperature Solar Reflector Coating for the Solar Orbiter,” Journal of Spacecraft and Rockets, 53, 1–8 (2016). https://doi.org/10.2514/1.A33561
13. Sherwin, D., Thompson, J., McKeown, D., O’Connor, W., and Úbeda, V., “Wave-based attitude control of EIRSAT-1, 2U CubeSat,” in 2nd Symposium on Space Educational Activities, SSEA-2018-93 (2018).
14. Murphy, D., “A compact instrument for gamma-ray burst detection on a CubeSat platform I: Design drivers and expected performance,” (2020).
15. Mészáros, P., “Gamma-ray bursts,” Reports on Progress in Physics, 69(8), 2259–2321 (2006). https://doi.org/10.1088/0034-4885/69/8/R01
16. Gebara, C. A. and Spencer, D., “Verification and Validation Methods for the Prox-1 Mission,” in 30th Annual AIAA/USU Conference on Small Satellites, SSC16-VIII-3 (2016).
17. International Organization for Standardization, “Space systems - Design qualification and acceptance tests of small spacecraft and units,” (2017).
18. Tiseo, B., Quaranta, V., Bruno, G., and Sisinni, G., “Tailoring of ECSS Standard for Space Qualification Test of CubeSat Nano-Satellite,” International Journal of Aerospace and Mechanical Engineering, 13(4), 295–302 (2019).
19. Walsh, S., Murphy, D., Doyle, M., Thompson, J., Dunwoody, R., Emam, M., Erkal, J., Flanagan, J., Fontanesi, G., Gloster, A., Mangan, J., O’Toole, C., Okosun, F., Rajagopalan Nair, R., Reilly, J., Salmon, L., Sherwin, D., Cahill, P., de Faoite, D., Javaid, U., Hanlon, L., McKeown, D., O’Connor, W., Stanton, K., Ulyanov, A., Wall, R., and McBreen, S., “Assembly, Integration, and Verification Activities for a 2U CubeSat, EIRSAT-1,” in 3rd Symposium on Space Educational Activities, 128–132 (2020).
20. Mangan, J., Murphy, D., Dunwoody, R., Ulyanov, A., Thompson, J., Javaid, U., O’Toole, C., Doyle, M., Emam, M., Erkal, J., Fontanesi, G., Kyle, J., Marshall, F., Rajagopalan Nair, R., Okosun, F., Reilly, J., Walsh, S., de Faoite, D., Salmon, L., Hanlon, L., McKeown, D., O’Connor, W., Wall, R., Shortt, B., and McBreen, S., “The Environmental Test Campaign of a Novel Gamma-Ray Detector for a CubeSat,” in International Conference on Space Optics (2021).
21. European Cooperation for Space Standardisation, “Space Engineering: Testing,” (2012).
22. NASA, “CubeSat 101: Basic Concepts and Processes for First-Time CubeSat Developers,” 1st ed. (2017).
23. Venturini, C., “Improving Mission Success of CubeSats,” U.S. Space Program Mission Assurance Improvement Workshop (MAIW) (2017).
24. Langer, M. and Bouwmeester, J., “Reliability of CubeSats - Statistical Data, Developers’ Beliefs and the Way Forward,” in 30th Annual AIAA/USU Conference on Small Satellites, SSC16-X-2 (2016).
25. European Space Agency, “Tailored ECSS Engineering Standards for In-Orbit Demonstration CubeSat Projects,” TEC-SY/128/2013/SPD/RW (2016).
26. Jain, V., Bindra, U., Murugathasan, L., Newland, F., and Zhu, Z. H., “Practical Implementation of Test-As-You-Fly for the DESCENT CubeSat Mission,” in SpaceOps Conference, AIAA 2018-2691 (2018). https://doi.org/10.2514/MSPOPS18
27. Alanazi, A. and Straub, J., “Statistical Analysis of CubeSat Mission Failure,” in 32nd Annual AIAA/USU Conference on Small Satellites, SSC18-WKII-04 (2018).
28. Dunwoody, R., Thompson, J. T., Sherwin, D., Doyle, M., Emam, M., Erkal, J., Flanagan, J., Fontanesi, G., Gloster, A., Mangan, J., Murphy, D., Okosun, F., O’Toole, C., Rajagopalan Nair, R., Reilly, J., Salmon, L., Walsh, S., Cahill, P., de Faoite, D., Javaid, U., O’Connor, W., Stanton, K., Ulyanov, A., Wall, R., Hanlon, L., McBreen, S., and McKeown, D., “Design and development of a 1-axis attitude control testbed for functional testing of EIRSAT-1,” in 3rd Symposium on Space Educational Activities, 171–175 (2020).
29. Doyle, M., Gloster, A., O’Toole, C., Mangan, J., Murphy, D., Dunwoody, R., Emam, M., Erkal, J., Flanagan, J., Fontanesi, G., Okosun, F., Rajagopalan Nair, R., Reilly, J., Salmon, L., Sherwin, D., Thompson, J., Walsh, S., de Faoite, D., Javaid, U., McBreen, S., McKeown, D., O’Callaghan, D., O’Connor, W., Stanton, K., Ulyanov, A., Wall, R., and Hanlon, L., “Flight Software Development for the EIRSAT-1 mission,” in 3rd Symposium on Space Educational Activities, 157–161 (2020).
30. Langer, M., Weisgerber, M., Bouwmeester, J., and Hoehn, A., “A reliability estimation tool for reducing infant mortality in CubeSat missions,” in 2017 IEEE Aerospace Conference, 1–9 (2017).
31. Weisgerber, M., Langer, M., Schummer, F., and Neumann, S., “Reliability prediction of student-built CubeSats,” (2018).
32. Berthoud, L., Swartwout, M., Cutler, J., Klumpar, D., Larsen, J., and Nielsen, J., “University CubeSat project management for success,” in 33rd Annual AIAA/USU Conference on Small Satellites (2019).
33. Slavinskis, A., Pajusalu, M., Kuuste, H., Ilbis, E., Eenmäe, T., Sünter, I., Laizans, K., Ehrpais, H., Liias, P., Kulu, E., Viru, J., Kalde, J., Kvell, U., Kütt, J., Zalite, K., Kahn, K., Lätt, S., Envall, J., Toivanen, P., Polkko, J., Janhunen, P., Rosta, R., Kalvas, T., Vendt, R., Allik, V., and Noorma, M., “ESTCube-1 in-orbit experience and lessons learned,” IEEE Aerospace and Electronic Systems Magazine, 30(8), 12–22 (2015). https://doi.org/10.1109/MAES.2015.150034
34. Rughani, R., Rogers, R., Allam, J., Narayanan, S., Patil, P., Clarke, K., Lariviere, M., Plessis, J., Na, L., Healy, D., Bernstein, S., and Barnhart, D., “Improved CubeSat Mission Reliability Using a Rigorous Top-Down Systems-Level Approach,” (2019).