ASEE Computers in Education Journal (https://coed.asee.org)

Active Learning Undergraduate Course on UAV Path Planning and Tracking Using Numerical Simulation (published 30 Mar 2024)

This paper presents the use of numerical simulation tools developed in MATLAB and Simulink for the design and implementation of an undergraduate course introducing students to the path planning and trajectory tracking of unmanned aerial vehicles (UAVs). The course is part of an aerospace engineering emphasis area; however, because it requires minimal flight dynamics background, it is beneficial to students in related disciplines relevant to UAVs. The major classes of UAV path generation and trajectory tracking algorithms are introduced. Significant design issues and their implications are discussed and illustrated through numerical simulation. Course assignments use active and experiential learning approaches that encourage student creativity and initiative. They involve investigating algorithm alternatives and diverse UAV operational conditions beyond nominal, including control surface failures and adverse atmospheric phenomena. In the process, students solve open-ended problems and design, execute, and analyze simulation experiments. Direct assessment by the instructor and student feedback confirm that advanced numerical simulation increases student motivation and facilitates learning, and that it is an effective support for active and experiential learning methodologies.

RIOS: A Cooperative Multitasking Scheduler in Source Code for Teaching and Implementing Concurrent or Real-Time Software (published 30 Mar 2024)

Embedded systems often implement multiple concurrent tasks of varying priority through a real-time operating system (RTOS). However, an RTOS may introduce overhead, complexity, and maintenance issues. For embedded system applications whose tasks don't heavily compete with one another, an alternative approach is to write the tasks to be cooperative: on each call, a task runs quickly and then returns so other tasks can execute. For such cooperative tasks, a programmer may then write their own task scheduler in the application's source code, in a language like C. However, no common approach exists, and thus embedded programmers sift through myriad online articles and examples, many of which discuss but do not provide scheduler code, provide code only for same-period cooperative tasks, or provide code that is rather complex to learn. To remedy this situation, we introduce scheduler code designed to be ultra-simple to learn and use for the most common cases of cooperative multitask applications. The scheduler code, called RIOS (Riverside/Irvine OS), is written in C but can be implemented in languages like C++, Java, Python, JavaScript, etc. RIOS can be copy-pasted directly into a project's source code and modified as desired. Through aggressive simplification over several years, the base scheduler code has fewer than 30 lines of C. We describe the core features of RIOS.
We also summarize college class experiences with 70+ students showing that most students could extend RIOS for various purposes, such as enabling/disabling any task (100% success among students), switching tasks between two periods (98% success), adding a priority field and sorting by priority (94% success), and calculating utilization and jitter (65-70% success). RIOS is used by dozens of universities to teach real-time software concepts to thousands of students, and by hundreds of embedded systems engineers in practice.
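The cooperative pattern the abstract describes — each task runs briefly and returns so others can execute — can be sketched as follows. This is a hedged illustration in the RIOS spirit, not the published RIOS source; the `Task` class and `scheduler_tick` function are our own simplified names.

```python
# Hedged illustration in the RIOS spirit (not the published RIOS source):
# a tick-driven cooperative scheduler with per-task periods.
class Task:
    def __init__(self, period, tick_fct):
        self.period = period       # task period, in timer ticks
        self.elapsed = period      # ticks since last run; start ready to run
        self.tick_fct = tick_fct   # cooperative task: runs briefly, then returns

def scheduler_tick(tasks):
    """Called once per timer tick; runs each task whose period has elapsed."""
    for task in tasks:
        task.elapsed += 1
        if task.elapsed >= task.period:
            task.tick_fct()        # task must return quickly (cooperative)
            task.elapsed = 0

# Usage: a 2-tick task and a 5-tick task over 10 timer ticks.
counts = {"fast": 0, "slow": 0}
tasks = [Task(2, lambda: counts.__setitem__("fast", counts["fast"] + 1)),
         Task(5, lambda: counts.__setitem__("slow", counts["slow"] + 1))]
for _ in range(10):
    scheduler_tick(tasks)
```

In a real embedded build, `scheduler_tick` would be driven by a hardware timer interrupt rather than a plain loop.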

Using Active Learning to Connect Entrepreneurial Mindset to Software Engineering (published 30 Mar 2024)

The purpose of this research was to develop classroom project modules that support students in developing an entrepreneurial mindset in the context of software engineering. The modules connect the software development life cycle from beginning to end, including user-focused requirements elicitation and evaluating quality attributes. The modules were implemented in a junior-level software engineering course, and three modules were surveyed in 2019 as part of a school-wide effort to embed entrepreneurial mindset into the engineering curriculum. An IRB-approved student survey was developed that measured student perceptions of learning objectives tying directly into ABET accreditation outcomes. Students reported they found the activities most helpful for designing, building, and testing real-world systems.

The development of these modules was a component of increasing the process focus of the software engineering course by implementing a novel version of agile software development with active learning techniques. The practical experiments in this course, taught from 2017 through 2021, allow us to report extensions and variants for adapting this design to existing software engineering courses at other universities. Among these variants, we propose adopting class-wide teams, which are atypical for junior-level project courses at other universities.

Qualitatively, we found the student work completed in these modules to be of higher quality than similar work submitted in prior years. Exam scores improved when measuring students' ability to create use cases, especially their clarity and completeness, and this was reflected in improved projects.
Quantitatively, the same mindset objectives were assessed in other course modules as part of a larger curriculum-wide effort in Engineering. The numerical results indicate that the modules in this course outperformed other modules in the curriculum for most of the mindset objectives. Ultimately, the results indicate these types of modules may play an important role in entrepreneurial mindset development for computer science students.

An Investigation into Peer-Assisted Learning in an Online Lab Environment (published 30 Mar 2024)

Peer learning is one method to encourage meaningful learning in electrical engineering courses. It involves the sharing of ideas, knowledge, and experiences and emphasizes interpersonal learning. However, there are different viewpoints on the best way to implement and assess peer learning in a lab environment, and contemporary literature on online laboratories (OL) rarely explores peer learning opportunities. In this paper, we investigate the benefits of students' peer learning activity in an online electronics lab course. The key challenge was whether the OL could ensure smooth communication and collaboration between students. In our case, we used Zoom online conferencing software as a communication tool and LabsLand as an interactive OL tool. Specifically, we used a remote lab application for electrical circuit building, which makes physically existing lab infrastructure remotely usable through an online user interface. We assessed our learning outcomes using online surveys, online lab usage data, and lab report scores. The survey results showed positive opinions about component skill development from group lab activities. In the online lab, tasks were divided based on team members' strengths. In terms of peer learning, some students felt there was an improvement in their partners' circuit assembly skills. The OL usage data showed a high level of engagement in group activity: students willingly spent time on lab experiments beyond regular lab hours. The lab report scores showed this new form of peer learning could achieve learning outcomes comparable to those of conventional, physical labs using peer learning.
Accordingly, we concluded that the OL was an alternative and effective way to encourage peer learning.


1 Introduction

Peer learning is a type of collaborative learning that involves students working in small groups to discuss concepts or find solutions to problems (Center for Teaching Innovation, 2020). Peer learning has been demonstrated to be a promising method to improve students' academic performance in STEM courses (Topping, 2005). For example, Beer and Jones (2008) and Pålsson et al. (2017) found that peer learning improves nursing students' self-efficacy to a greater degree than conventional supervised learning. More generally, Beer and Jones (2008) list major benefits of being part of an effective peer learning network: additional assistance with challenges, especially from peers; more perspectives on solving problems; better access to expertise; more meaningful participation in group work; and a stronger sense of identity within the study discipline and overall university life. While most peer learning activities have been conducted face-to-face, a major shift to distance learning occurred during the COVID-19 pandemic.

However, even setting COVID-19 aside, recent technological advancement has dramatically changed the landscape of higher education, particularly with respect to online learning. An increasing number of universities offer online courses, enrollments in online courses are steadily rising, and 2012 was dubbed the "Year of the MOOC" (massive open online course) (Pappano, 2012). As a result, the rapid development of online teaching formats has attracted substantial research interest in the area of digital instructional design (Hone and El Said, 2016; Guo et al., 2014; Breslow et al., 2013; Christensen et al., 2013). One challenge of online learning is helping online students establish a social presence (Madhavan and Lindsay, 2014). Alkhaldi et al. (2016) pointed out that research was lacking on how to effectively incorporate meaningful collaboration between students in online environments, which is one way to improve social presence. They suggested educators take advantage of technological advances to implement innovative online labs, and student collaboration in online labs was deemed an area worth investigating further.

2 Literature Review

Researchers have pointed out that online learning in technology-enhanced environments can effectively support STEM learning. Arguedas-Matarrita et al. (2017) evaluated the potential use of an online lab tool in a training workshop for schoolteachers in Costa Rica; the schoolteachers' feedback was positive, and they expressed interest in using the tool in future teaching activities. Grodotzki et al. (2018) designed a remotely operated testing cell, and the participants in general showed good interest in this online format. Faulconer et al. (2018) ran a comparison between online and in-person chemistry labs with a sample size of 823; the study showed that students who took the in-person chemistry lab tended to earn fewer "A"s and more "D"s than their online counterparts. Tejado et al. (2019) presented a virtual laboratory (VL) as an interactive tool to support learning in systems theory-related courses at the University of Extremadura from 2015 to 2016; the VL helped students understand the basic concepts of modeling linear dynamical systems. Diaz et al. (2013) presented the design and development of a MOOC for learning industrial electronic circuits, whose pilot course had 2,000 enrollments.

Generally speaking, online labs provide a platform for students to learn synchronously and asynchronously with either remotely accessible or fully virtual lab equipment, independent of the typical constraints of classical, in-person lab courses, such as time, space, and resources. Online labs can hence help students learn independently outside scheduled lab times. Over the last decade, several literature reviews have summarized the state of the art in online lab instruction in engineering and science education (Potkonjak et al., 2016; Brinson, 2015; Hernández-de Menéndez et al., 2019; Nikolic et al., 2021). Almost all of these studies comment on the highly diverse research results that stem from diverse lab technologies, the variety of lab application strategies in curricula, and, not least, the different instructional goals connected to each lab. Another important observation shared by the studies is the common lack of peer-to-peer social interaction in many online lab applications. Studies have shown that peer learning can help establish a social presence (Aragon, 2003; Lowenthal and Dunlap, 2020). However, as low social presence is a challenge for online learning in general (Bali and Liu, 2018; Kaufmann and Vallade, 2020), this is also true for online lab activities. So far, only a few studies have investigated the benefits of integrating peer learning in an online lab environment and compared the learning outcomes with those of physical environments.

This study aims to fill these gaps in the literature by using mixed methods to study a novel implementation of peer learning in an online electronic circuit lab. In assessing our lab implementation, we sought to answer the following research questions about students' peer learning experiences in online labs as compared to their peer learning experiences in the initial physical labs:

1. How did peer learning in an online lab affect students' perceptions of both their own and their partner's circuits-related skill development?
2. How did peer learning in an online lab affect how students distributed lab tasks?
3. How did having access to an always-available online lab affect students' engagement outside class time?

This study allowed us to understand how physical learning environments can be transformed into online peer-learning environments. This paper focuses on the quantitative analysis of the online lab study; a separate publication (Li et al., 2020) reported preliminary insights under the tight deadline of an early COVID-19 special issue, and this paper builds on that work by answering new research questions.

3 Methods and Context

3.1 Class Setup

Lab instruction for a class of 38 engineering students in Spring 2020 was initially conducted in a physical circuit lab. The course topic was fundamental circuit assembly and analysis. When the pandemic began, the labs were switched to an online format. Zoom Meetings software was used as an interactive communication platform. In particular, breakout rooms allowed students to work in small teams of 2-3 on lab activities after a brief introduction (Figure 1a). The lab activities were divided into four tasks: circuit measurement, circuit calculation, circuit simulation, and circuit assembly. The students were encouraged to select a team leader to distribute the tasks among individuals.

Circuit assembly was conducted via the Virtual Instrument Systems in Reality (VISIR) module in an online lab platform called LabsLand®. The VISIR module enabled students to assemble and measure a circuit using a realistic circuit board interface (Figure 1b). The circuit created by students was then automatically tested in a dedicated laboratory at the University of Deusto in Spain, and the outcome was communicated back to the students. Students had access to the online lab around the clock, though they were only expected to collaborate during the synchronous lab periods over Zoom. Combining Zoom and VISIR had many benefits in encouraging students' peer learning. For example, VISIR offered a top-down view of the circuit board. All the group members could have a clear picture of the progress of the circuit assembly via the "screen sharing" feature over Zoom. One student could connect or disconnect the electronic components while others offered real-time advice, and the course instructor could conveniently assess a circuit visually by visiting students' breakout rooms. Meanwhile, the LabsLand® platform offered unlimited access to the module and circuit boards, which could not be achieved by the physical labs the course originally used. During COVID-19, VISIR offered a safe and convenient way to perform electronic labs.

Figure 1. (a) The online lab workflow; (b) the online lab interface.

3.2 Assessment and Research

Three types of data were collected: online surveys, lab report scores, and online lab usage data. These three assessment methods are summarized in Figure 2. The physical lab survey was administered before the online lab.

Figure 2. Three assessment methods: (a) the procedure for collecting lab surveys and scores; (b) an example of lab usage data.

3.2.1 Lab Surveys

To assess student perceptions of skill development and task distribution among team members, we collected student survey data in two phases (Fig. 2a): first, an online survey about students' experience in the physical lab; second, an online survey using the same instruments at the end of the semester in the online lab. The survey had quantitative and qualitative parts. Analysis of the quantitative survey results included graphing the data as well as paired descriptive and inferential statistics (e.g., a paired t-test) to compare student responses between the two surveys. Qualitative survey results were analyzed using an open coding method. Table 1 lists the quantitative survey questions relevant to the research questions of this study.
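The paired comparison described above can be sketched as follows; the function name and the five self-ratings are illustrative, not the study's raw survey data.

```python
import math
from statistics import mean, stdev

# Hedged sketch of the paired analysis; illustrative numbers only.
def paired_t(physical, online):
    """Paired-sample t statistic and degrees of freedom for matched ratings."""
    diffs = [p - o for p, o in zip(physical, online)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))  # mean difference over its standard error
    return t, n - 1

# Example: five students' 1-10 self-ratings, physical lab vs. online lab.
t_stat, df = paired_t([8, 7, 9, 6, 8], [7, 6, 8, 6, 7])
```

The resulting t statistic would then be compared against the t distribution with `df` degrees of freedom at the chosen significance level.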

Table 1. Survey questions relevant to the research questions of this study

Question | Scale | Items
How knowledgeable would you consider yourself now regarding the following [physical/online] lab skills? | 1 (Novice) – 10 (Expert) | Circuit assembly; circuit measurement; circuit calculation; circuit simulation
How knowledgeable would you consider your lab partner(s) now regarding the following [physical/online] lab skills? | 1 (Novice) – 10 (Expert) | Circuit assembly; circuit measurement; circuit calculation; circuit simulation
How does your level of knowledge regarding lab skills affect how you and your partner divide labor during labs? | Open answer (qualitative) | —

3.3 Lab Scores

To supplement student perceptions of skill development, we also compared student lab report scores for a lab that was structured similarly in both the physical and online formats. The lab instructions, shown in Table 2, were given to the students, and the student lab reports were compared. Both labs centered on the Wheatstone bridge, but their objectives differed: the physical lab asked students to measure an unknown resistance using an assembled Wheatstone bridge, while the online lab asked them to design and build a Wheatstone bridge from scratch.
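As a minimal sketch of the physics both labs rest on, the bridge's meter reading can be computed from two voltage dividers; the function name and component values below are illustrative, not taken from the labs themselves.

```python
def bridge_output(vin, r1, r2, r3, r4):
    """Voltage between the midpoints of a Wheatstone bridge.
    Left branch: vin -> r1 -> node A -> r2 -> ground.
    Right branch: vin -> r3 -> node B -> r4 -> ground."""
    va = vin * r2 / (r1 + r2)  # voltage divider, left branch
    vb = vin * r4 / (r3 + r4)  # voltage divider, right branch
    return va - vb

# Balanced bridge (r1/r2 == r3/r4): the meter reads zero.
balanced = bridge_output(5.0, 1000, 2000, 1500, 3000)
# Unbalanced bridge: a nonzero reading.
unbalanced = bridge_output(5.0, 1000, 1000, 1000, 3000)
```

Balancing the bridge — choosing resistor ratios so the midpoint difference vanishes — is exactly what lets the physical lab recover an unknown resistance and what the online lab asks students to design for.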

Table 2. Lab instructions for the physical and online labs.

Physical lab
Prompt: Your task in this project is to measure the unknown resistor of a Wheatstone bridge.
Steps:
– Construct a sample circuit using a Multisim® circuit simulator
– Measure the unknown resistor in the electronics laboratory
– Submit a structured technical report after the lab

Online lab
Prompt: Your task in this project is to design and build a balanced Wheatstone bridge using four resistors.
Steps:
– Verify your design using a Multisim® circuit simulator
– Construct and test your designed circuit in LabsLand®
– Submit a structured technical report after the lab

3.4 Lab Usage

The online lab presented students with a learning curve, especially its user interface. Also, only one student from each lab group could access the interface during lab hours. We posited that the ability to revisit the software outside of designated class time would be helpful for some students. Accordingly, we analyzed usage data showing who logged into the class's online lab during each hour of the week. Figure 2b shows an example of the lab usage data, presented as the number of students using LabsLand® in hourly slots.
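The hourly-slot aggregation can be sketched as a simple binning of login timestamps. The ISO timestamp strings below are hypothetical, since the platform's actual export format is not described here.

```python
from collections import Counter
from datetime import datetime

# Hedged sketch: binning login events by day of week (hypothetical timestamps).
DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday",
        "Friday", "Saturday", "Sunday"]
logins = ["2020-04-01T20:15:00", "2020-04-01T21:40:00", "2020-04-03T09:05:00"]
by_day = Counter(DAYS[datetime.fromisoformat(t).weekday()] for t in logins)
```

The same `Counter` could be keyed on `(weekday, hour)` pairs to reproduce the hourly slots of Figure 2b.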

4 Results

4.1 Survey Results

Table 3 summarizes the students' responses to the skill development questions, on a scale from 1 (novice) to 10 (expert). There were some significant differences in specific lab skills: in particular, some students felt less competent in circuit measurement and circuit calculation in the online lab environment, while they felt their partners' circuit assembly skills increased. In those cases, however, the differences between the mean responses were small. Our sample size (n=31) was small because seven students chose not to participate in the survey, so we opted for a relatively liberal, exploratory significance level of 0.15. P-values were calculated using a paired-sample t-test.

Table 3. Summary of the skill development responses (n=31)

Personal skill development
Component skill | P-value | Mean (physical lab) | Mean (online lab) | Significant difference?
Circuit measurement | 0.04 | 7.50 | 6.73 | Yes
Circuit calculation | 0.07 | 7.62 | 7.00 | Yes
Circuit assembly | 0.38 | 6.15 | 6.54 | No
Circuit simulation | 0.71 | 7.23 | 7.35 | No

Partner's skill development
Component skill | P-value | Mean (physical lab) | Mean (online lab) | Significant difference?
Circuit measurement | 1.00 | 7.35 | 7.35 | No
Circuit calculation | 0.65 | 7.23 | 7.42 | No
Circuit assembly | 0.11 | 7.65 | 8.35 | Yes
Circuit simulation | 0.31 | 7.85 | 7.50 | No

Our online lab structure was designed for groups of students with complementary lab skills, so they could efficiently distribute tasks according to their strengths, as reflected in 52% of the students' responses (Figure 3). We believe the affordances of the online environment encouraged students to adopt more well-defined roles and choose tasks based on their strengths, particularly because only one person could work with the software at a time.

Figure 3. How students reported dividing work among their teams in physical vs. online lab environments.

4.2 Lab Scores

Despite the small differences in students' perceptions of their lab skills, there was no significant difference between student lab scores in the online and physical labs (Table 4): the mean score was 96.1 for the physical lab and 97.8 for the online lab. This result suggests that students could achieve outcomes via peer learning in an online lab environment comparable to those in a physical learning environment. As mentioned before, the lab exercise was divided into four components to assess student learning: measurement, calculation, simulation, and assembly. The online lab structure and the grading rubric are attached in Appendix A. The only exception was that students were allowed to make up labs missed due to internet disruptions.

Table 4. Lab score comparison between the physical and online labs (n=38)

                        Physical lab   Online lab
Mean                    96.1           97.8
Standard deviation (±)  7.8            5.3

4.3 Lab Usage

In this study, the regular lab hours were 12:30 pm to 2:00 pm every Tuesday and Thursday. Table 5 shows that LabsLand® usage remained quite high outside these hours. During lab hours, only one student could operate LabsLand® while the others gave instructions. Like other always-available systems, LabsLand® offered a substantial subset of students the opportunity to practice lab skills or revisit lab activities outside lab hours.

Table 5. Frequency of student logins outside lab hours (lab days: Tuesday and Thursday)

Day     Monday  Tuesday  Wednesday  Thursday  Friday  Saturday  Sunday
Logins  1       6        10         19        5       0         2

5 Discussion

The results showed that the synchronous online lab setup supported student learning nearly as well as the physical lab environment while providing the benefit of always-accessible lab activities that many students used. Our study provides some insights into the potential benefits of peer learning in an online environment. We found that students tended to divide their labor based on their own skill sets in the online lab environment: compared with the physical lab, more students divided tasks based on individual strengths, with the proportion increasing from 24% to 52% (Figure 3). As a consequence, a student might only partially acquire some lab skill sets. Table 3 showed that students perceived a noticeable decrease in their own circuit measurement and calculation skills. These results suggest that online lab activities may hamper the development of some necessary lab skills in favor of those on which each student chose to focus.

On the other hand, having an always-available online lab could mitigate the negative impact on individual skill development and allow students more independent learning experiences. The online environment provides out-of-school learning opportunities that complement and reinforce classroom learning, which broadens the learning environment for students and provides novelty in education (Kaya and Dönmez, 2009; May et al., 2020; Loro et al., 2018). Marques et al. (2014) stated that the VISIR lab gives students extra accessibility and flexibility. In our study, according to the lab usage data, students took advantage of the always-accessible lab outside the classroom (Table 5). However, there is no solid evidence that they filled the skill gaps by doing so. Accordingly, more structured out-of-class activities may be necessary to ensure that all students achieve an acceptable level of proficiency in all essential lab skills.

6 Conclusion

This study demonstrated that online labs offer numerous benefits for pandemic and post-pandemic classroom learning. On the learners' side, LabsLand® was a cost-effective tool that allowed students to sharpen their lab skills and reinforce engineering principles by revisiting the lab content in their own time. The online lab structure was designed to have students solve engineering problems collaboratively while improving individual skill sets. Through peer learning, some students saw improvement in their peers' circuit assembly skills.

Online collaboration encouraged more role-taking and task-specific teamwork, a hallmark of successful workplace teams. In our lab setting, one student worked on circuit assembly while others provided real-time feedback over Zoom. To optimize team efficiency, the team leader would therefore assign the member with the most circuit assembly experience to Task 3 (Appendix A), which could also place more burden on the team leader. A possible improvement is for the instructor to facilitate team formation before task assignment: the instructor could pick students with complementary skill sets for each team and potentially improve their personal skill development (Table 3). In addition, LabsLand® offered an immersive experience that gave students a feeling of being present in a hands-on physical lab, so instructors could design online labs in LabsLand® with the same learning objectives as physical labs.

The assessment data, including surveys, lab usage, and lab scores, were used to evaluate peer learning in the online environment. In summary, this study indicated that combining Zoom meetings and the VISIR lab software is an effective way to support student learning in online environments. Individual lab skills could be further improved with more structured out-of-class self-learning activities.

Acknowledgment

The Institutional Review Board of the University of Georgia approved this research under protocol ID PROJECT00001996. The author team is not affiliated with LabsLand®  beyond the use and study of its virtual lab services.

A Appendix

ENGR 2170: Circuits
Online Laboratory: AC and Wheatstone Bridge

Objective

This lab will apply your knowledge of circuit laws. Before the lab, please complete the following pre-planning activity:

1. Ask your lab partner about their prior knowledge of Thevenin's theorem.
2. Vote for a team leader.
3. The team leader distributes Tasks 1–3 within the group.
4. Report progress to each other every 15 minutes.

Task 1: Calculation

The circuit for this task is given in Figure A1.

Figure A1. Circuit diagram for the calculation task

1. Calculate the Thevenin equivalent of the circuit above.
2. What is the voltage across terminals a–b?
3. What is the value of the gain K in this case?
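Since Figure A1's component values are not reproduced here, the following is a generic, hedged sketch of a Thevenin reduction for a simple voltage divider, not the actual Figure A1 circuit; the function name is our own.

```python
# Hedged generic sketch: Thevenin equivalent of a voltage divider seen from
# its tap (not the Figure A1 circuit, whose values are not reproduced here).
def thevenin_of_divider(vs, r1, r2):
    """Thevenin equivalent seen from the tap of vs -> r1 -> tap -> r2 -> ground."""
    v_th = vs * r2 / (r1 + r2)   # open-circuit voltage at the tap
    r_th = r1 * r2 / (r1 + r2)   # with the source shorted: r1 in parallel with r2
    return v_th, r_th

v_th, r_th = thevenin_of_divider(10.0, 1000.0, 1000.0)
```

The same two-step recipe — open-circuit voltage for V_th, resistance seen with sources zeroed for R_th — generalizes to the Task 1 circuit.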
Task 2: Multisim®  simulation

An AC bridge is used to measure the inductance L of an inductor or the capacitance C of a capacitor.

Figure A2. AC bridges used to measure unknown capacitors and inductors

The following elements in Table A1 are provided.

Table A1. Elements provided for circuit construction.

Element     Value
Resistors   1–5 kΩ
Capacitors  1–5 F
Inductors   1–5 H
DC source   any
AC source   any

Construct an AC bridge so that the voltage reading of the AC meter is zero at location 2.

Figure A3. A sample Multisim® circuit diagram

4. What resistors, capacitors, or inductors did you choose to build the circuit?
5. Take a picture of your measured outcome at location 1 and location 2.
6. If you change the AC source to a DC source of the same voltage, what is the voltage reading between terminals a–b?
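The zero-reading balance condition in Task 2 can be sketched with phasor arithmetic; the helper names and component values below are illustrative, not the lab's.

```python
# Hedged phasor sketch of the AC bridge balance condition; values are
# illustrative. The midpoint difference is zero when z1*z4 == z2*z3.
def z_r(r):
    return complex(r, 0.0)               # resistor impedance R

def z_l(l, w):
    return complex(0.0, w * l)           # inductor impedance jwL

def z_c(c, w):
    return complex(0.0, -1.0 / (w * c))  # capacitor impedance 1/(jwC)

def ac_bridge_output(vin, z1, z2, z3, z4):
    """Complex voltage between the two bridge midpoints (phasor analysis)."""
    return vin * z2 / (z1 + z2) - vin * z4 / (z3 + z4)

w = 1000.0  # angular frequency, rad/s
# Balanced arms: 1 kOhm over 1 H mirrors 2 kOhm over 2 H, so the meter reads zero.
out = ac_bridge_output(5.0, z_r(1000.0), z_l(1.0, w), z_r(2000.0), z_l(2.0, w))
```

Because the balance condition involves complex impedances, it fixes both a magnitude and a phase relation, which is why the AC meter reading can be driven to zero only with correctly ratioed components.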
Task 3: LabsLand® assembly

Now, use the components in LabsLand® to build a Wheatstone bridge using a function generator and four resistors of your own choice. Set the function generator to output a sine wave.

Figure A4. One example of the LabsLand® circuit

7. What did you observe on channel 1 of the O-scope?

Figure A5. One of the expected results

8. What did you observe on channel 2 of the O-scope?
9. What is the gain between the input signal obtained from channel 1 and the output signal from channel 2?
Grading criteria for the lab report:
  • 10% of the mark is given for effective communication, i.e., good writing
  • 30% of the mark is given for completion of Task 1
  • 30% of the mark is given for completion of Task 2
  • 30% of the mark is given for completion of Task 3


The post An Investigation into Peer-Assisted Learning in an Online Lab Environment appeared first on ASEE Computers in Education Journal.

Modeling COVID-19 disruptions via network mapping of the Common Core Mathematics Standards https://coed-journal.org/2023/06/30/modeling-covid-19-disruptions-via-network-mapping-of-the-common-core-mathematics-standards/ Fri, 30 Jun 2023 23:59:32 +0000 https://coed-journal.org/?p=4363

A unique method for promoting reflection among engineering students was used in the present study involving a digital circuits course. The method combined computer-based simulation for digital circuit design with reflective-thought prompts after a midterm exam for post-exam analysis and reflection. This method was first implemented in a microelectronics course using the SPICE simulator. Lessons learned from the initial implementation were applied to the digital circuits course. These lessons learned included the need to scaffold students in the use of the simulation tool for reflection, the need to balance frequency of reflection with student workload and fatigue, and question prompts that voluntarily elicit broad thought after a milestone event such as a midterm exam (versus a quiz). Using a published depth rubric, the assessment results found increased depth of reflection in the present course relative to the initial implementation in microelectronics. Specifically, there were increases in depth of reflection after the midterm exam in the present course versus the midterm exam and two quizzes in the microelectronics course. The increases in depth were significant relative to the quizzes. There was also an increase in the relative occurrence of broad reflections in the present course, with significant differences compared to the quizzes. Although significant differences were not found in the final exam averages based on depth of reflection after the midterm exam or participation in this reflection, results from a follow-up survey several months after the course ended indicated benefit for students.
Specifically, 80% of those who completed the reflection exercise indicated a high or very high perceived benefit from doing so. Of the approximately 50% who chose not to complete the reflection exercise, the primary reasons were identified via the follow-up survey. Findings from this work align with and add to the developing literature on student reactions to reflection.

The post Modeling COVID-19 disruptions via network mapping of the Common Core Mathematics Standards appeared first on ASEE Computers in Education Journal.


1 Introduction

In the spring of 2020, millions of students abruptly shifted to online instruction, and in some cases, no instruction, as COVID-19 disrupted schools nationwide. But this disruption is not simply localized to a single semester: consider, for example, the downstream effects on a fifth grader, who needs to master adding fractions in order to perform more complicated operations in sixth and later grades. Failing to master an earlier, more fundamental learning outcome will result in difficulty mastering a learning outcome in a later grade that depends on the earlier outcome. It is critical to analyze such outcome dependencies in order to address learning gaps so that deficiencies are not propagated for years to come. To study these direct and indirect COVID-19 disruptions, this paper develops a graph-based, data-driven model of learning outcomes in a mathematics curriculum.

For our analysis to be widely applicable, we consider the Common Core Mathematics curriculum. The Common Core Mathematics curriculum is a list of 331 learning outcomes, dubbed “Standards”, describing what students should be able to achieve in each grade band. The Common Core is standardized and adopted across 43 states in public school systems (National Governors Association Center for Best Practices, Council of Chief State School Officers 2010). It therefore facilitates a useful analysis that is applicable to all school systems that adopt the Common Core.

One major difficulty in analyzing chains of learning outcome dependencies is that of scale: if one is considering a single learning outcome and wishes to identify all downstream learning outcomes it may impact, including in later grades, it may be possible to trace and list all such downstream outcomes manually with some effort. However, such a process poses several issues. Firstly, it is difficult to replicate with the same result. Secondly, it is a manual and laborious process, with significant chance of oversight error. Thirdly, it does not allow for advanced analysis; for instance, manual lists make it difficult to denote a strong versus weak dependency and carry that forward in analysis. With these issues arising in analyzing a single outcome, how is it possible to analyze an entire curriculum of hundreds of learning outcomes?

The literature establishes the usefulness of mapping learning outcomes in a structured form and provides clues as to which structured form to use. Because we wish to analyze relationships, it is especially useful to look at network models, also referred to as graph models. Courses have been linked in a curriculum through their learning outcomes in a graph-based model (Auvinen 2011; Miller et al. 2016; Seering et al. 2015). Learning maps composed of linked learning outcomes and activities have been created for adaptive learning (Bargel et al. 2012; Battou et al. 2011; Collins et al. 2005; Essa 2016). Ontologies have also been created, visually linking topics, learning resources, and other curriculum data in a diagram-like presentation (Bardet et al. 2008; Yudelson et al. 2015). More recently, Willcox and Huang (2017) introduced a network modeling framework for mapping educational data to leverage the unique relationship-first properties of graphs. Additional work referencing this network modeling approach includes graph-based visualization tools (Chen and Xue 2018; Ghannam and Ansari 2020; Samaranayake 2019), curriculum development and design tools (Kaya 2019), and adaptive learning tools (Cavanagh et al. 2019). We build upon this body of work by modeling the Common Core Mathematics Standards as a network model. To date, there has been limited research in structuring the Common Core in a network form. We emphasize that the Common Core Standards are presented as a list, devoid of any relationships. This is an acknowledged limitation: Standards are interrelated, and presenting them as a list loses important relationships (Daro et al. 2012; Zimba). Zimba presents the Common Core in a visual diagram with connections amongst Standards. However, as it only presents a visual diagram without an underlying network model, it is of limited analytic use. We go further by developing a structured, data-driven network model and using it to generate replicable analyses and visualizations.
We chunk Standards into finer-grained statements of skills mastery, dubbed “Micro-Standards”, and we draw prerequisite connections between Micro-Standards. In doing so, we rely on an established body of work in using experts to identify prerequisites within a hierarchy of skills (Cotton et al. 1977; White 1974; Gagne and Paradise 1961; Liang et al. 2017; Wang et al. 2016). By drawing prerequisite linkages between Micro-Standards (finer-grained skills) rather than just Standards (coarser-grained skills), we enable greater precision in relationships between statements of skills mastery (Popham 2006; Pardos et al. 2006; Huang and Willcox 2021). This higher level of granularity is a crucial requirement in many use cases (McCalla and Greer 1994; Greer and McCalla 1989; Hobbs 1985), such as curating reusable repositories of learning content 1, designing just-in-time interventions to address micro-sized learning targets (Gagne et al. 2019), intelligent tutoring systems that serve adaptive assessments to students (Huang and Willcox 2021), etc.

In this paper, we develop a network model for the Common Core Mathematics curriculum and use it to analyze COVID-19 disruptions. The next section presents the theoretical network model. We then illustrate mapping the Common Core curriculum into a network structure, including the process of discretizing Common Core Standards into Micro-Standards and creating prerequisite linkages. With the resulting network map, we identify vertices and pathways of interest. We then model the Spring 2020 COVID-19 school closures as a shock to the system, with specific Micro-Standards initially impacted. Using graph analysis, we trace the propagating effects of the initial shock to later grades. Our analysis shows far-reaching consequences of COVID-19 disruptions and reveals learning pathways of interest. Finally, we discuss the analytic and predictive power obtained by our Common Core network model versus that of the classic Common Core Standard list.

2 The Network Model

A network model is a set of entities and relationships arranged in a graph structure in which entities are represented as vertices, or nodes, and relationships are represented as edges between vertices. Examples of entities include: educational institutions, departments, subjects, learning modules, topics, learning outcomes, etc. Examples of relationships include: prerequisite links between any two learning outcomes, parent-child relationships that denote categorical groupings, etc.

In the network model developed in this paper, we define the notion of a Micro-Standard entity. Readers familiar with the Common Core will know that the Common Core defines “Standards”, medium-grained statements of skills mastery. Our defined Micro-Standards are more fine-grained statements, derived from dividing up a Standard. For instance, Figure 1 shows a Standard that has been divided up into three Micro-Standards, resulting in highly specific statements of skills mastery.

PIC

Figure 1. Example of a Common Core Standard split into 3 Micro-Standards for the purpose of defining the network model.

We then define a has-prerequisite-of relationship that points from one Micro-Standard to a prerequisite Micro-Standard. This relationship represents the notion that mastering the prerequisite Micro-Standard is necessary in order to master the other. Prerequisite relationships between Standards are implied in the Common Core Standards. For instance, in order to “add, subtract, and multiply complex numbers,” it is naturally obvious that a learner must first be able to define what a complex number is. By defining these has-prerequisite-of relationships, we make relationships explicit and designate them as first-class objects in the network model. As discussed in Cotton et al. (1977) and Collins et al. (2005), the identification of prerequisites between entities is sensitive to the granularity of the entities: the coarser the statement of learning, the more dimensions for interpretation there are as to what constitutes a prerequisite. By drawing has-prerequisite-of relationships between Micro-Standards, we inject more granularity and precision into the model because we can pinpoint exactly why a prerequisite linkage is justified.

We define the remaining entities in our model: Cluster, Domain and Grade Level / Band. These entities correspond to how Standards are grouped in the Common Core: a Cluster is a grouping of Micro-Standards, a Domain is a grouping of Clusters, and a Grade Band is a grouping of Domains. To model such a notion of grouping, we further define a has-parent-of relationship pointing from the child entity to the parent group entity. Figure 2 shows a schematic of the resulting network model.

PIC

Figure 2. Schematic of our Common Core network model showing the types of entities and relationships.

We briefly introduce several basic concepts of graph theory that we will use to analyze the Common Core curriculum network. The in-degree of a vertex is the number of incoming edges; the out-degree of a vertex is the number of outgoing edges. The Common Core network model belongs to a special class of graphs called directed acyclic graphs (DAGs), which contain no cycles. For DAGs, one can compute a topological sort of the vertices such that no edge goes from any vertex in the sorted sequence to an earlier vertex in the sequence. Within the topological sort, we can rank vertices such that the rank of a vertex v, rank(v), is the length of the longest path from some source vertex u to v.
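These concepts can be made concrete with a short sketch. The code below is a minimal illustration on a toy adjacency-list graph with hypothetical vertex names; it is not the paper's implementation.

```python
from collections import deque

def topological_sort(graph):
    """Kahn's algorithm. `graph` maps each vertex to a list of successors."""
    indegree = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indegree[w] += 1
    queue = deque(v for v in graph if indegree[v] == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph[v]:
            indegree[w] -= 1
            if indegree[w] == 0:
                queue.append(w)
    if len(order) != len(graph):
        raise ValueError("graph has a cycle; not a DAG")
    return order

def rank(graph):
    """rank(v): length of the longest path ending at v, in edge direction."""
    r = {v: 0 for v in graph}
    for v in topological_sort(graph):
        for w in graph[v]:
            r[w] = max(r[w], r[v] + 1)
    return r

# Toy DAG with hypothetical vertices: A -> B -> D and A -> C -> D.
toy = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(rank(toy))   # D has rank 2; B and C have rank 1; A has rank 0
```

A single pass in topological order suffices to compute all ranks, because every vertex's predecessors are finalized before the vertex itself is visited.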

3 Mapping the Common Core

The Common Core Mathematics Area comprises 331 Standards across ten grade bands from Kindergarten through High School. Standards are medium-grained statements of skills mastery. From Kindergarten through Grade 8, Standards are grouped into Domains. In the High School grade band, Standards are grouped under Clusters, and Clusters are further grouped by Domains. As an example, Table 1 illustrates a set of Standards in the “Vector & Matrix Quantities” Cluster, further nested under the “Number & Quantity” Domain in the High School grade band.

Table 1. An example showing two Clusters of Standards in a single domain in the High School grade band.

Domain: Vector and Matrix Quantities

Cluster: Represent and model with vector quantities.

A.1 Recognize vector quantities as having both magnitude and direction. Represent vector quantities by directed line segments, and use appropriate symbols for vectors and their magnitudes (e.g., v, |v|, ||v||, v).

A.2 Solve problems involving velocity and other quantities that can be represented by vectors.

A.3 Find the components of a vector by subtracting the coordinates of an initial point from the coordinates of a terminal point.

Cluster: Perform operations on vectors.

B.4.A Add vectors end-to-end, component-wise, and by the parallelogram rule. Understand that the magnitude of a sum of two vectors is typically not the sum of the magnitudes.

B.4.B Given two vectors in magnitude and direction form, determine the magnitude and direction of their sum.

To create Micro-Standards, we divide a Standard into finer-grained statements of skills mastery. To do this, we determine whether a Standard contains multiple discrete skills. In the interest of preserving fidelity, this determination was largely based on grammatical clues, such as periods, semi-colons separating independent clauses, numbered points, etc. In all cases, we attempted to preserve the original wording of a Standard and did not introduce new meaning when splitting it into discrete statements. For instance, in Figure 1, Standard A.1 has two complete sentences, the second of which contains two independent clauses. We split this Standard to create three distinct Micro-Standards with original wording: “Recognize vector quantities as having both magnitude and direction” is a distinct skill from being able to “Represent vector quantities by directed line segments,” which is yet distinct from “Use appropriate symbols for vectors and their magnitudes.” The figure illustrates a Standard broken into three Micro-Standards. Dividing up Standards in this way results in finer-grained entities that drive more powerful analytics and precise analysis.

The next step in creating the network model is to draw prerequisite relationships between Micro-Standards. Focusing on one grade band at a time, we review the Micro-Standards within the given grade band. We determine whether a given Micro-Standard is a prerequisite to another Micro-Standard via a top-down decomposition with subject matter experts, an approach established in the literature (Cotton et al. 1977; White 1974; Gagne and Paradise 1961). These subject matter experts are active researchers in the fields of education and mathematics. We first identify (within a grade band) a candidate set of the most synthesizing skills, that is, the skills that build upon the most prior skills. For each Micro-Standard in the candidate set, we then identify the immediate Micro-Standards within that grade band that are necessary for learning the synthesizing Micro-Standard. We thus create the prerequisite relationships between the target synthesizing Micro-Standard and the prerequisite Micro-Standards. Next, we take the previously-identified prerequisite Micro-Standards and in turn identify their prerequisites. Note that we draw only direct prerequisite relationships: that is, if Micro-Standard A requires Micro-Standard B, and Micro-Standard B requires C, we draw a relationship between A and B, and a relationship between B and C, but we do not draw a relationship between A and C. This level-by-level decomposition is a breadth-first traversal and gives us a tentative version of the partial dependency tree. Because this initial version was formed by one subject matter expert, we check the reasonableness of the dependencies by polling at least two other subject matter experts. Any revisions are agreed upon in consensus. In this way, we progress through all the grade bands, constructing the intra-grade prerequisite relationships.

After the intra-grade prerequisite relationships are constructed, we step through the grades again to draw inter-grade prerequisite relationships. Starting from the most downstream grade band (i.e., the High School grade band), we identify the most fundamental Micro-Standards in a given Cluster or Domain, i.e., the Micro-Standards that do not have any intra-grade prerequisites. We then identify any prerequisites in the previous grade band; if none can be found in the immediate preceding grade band, we step back to the next preceding grade band and begin the search again. After every grade band iteration, we again check for consensus amongst experts in the updated linkages. In this way, we step through all the grade bands and construct inter-grade prerequisite relationships.

PIC

Figure 3. Zoomed-in section showing two Domains (Statistics and Probability, and Number and Quantity), several of their Clusters (Quantities, Vector and Matrix Quantities, The Complex Number System, etc.), and their Micro-Standards in the High School grade band.

Table 2 shows the total number of mapped entities and relationships for the Common Core. Figure 3 shows a zoomed-in visualization of the resulting network map of Micro-Standards grouped within several Clusters and two Domains in the High School grade band.

Table 2. Properties of the Common Core Mathematics network model.

Entities              Count      Relationships            Count
Grade Band            10         has-parent-of            843
Domain                5          has-prerequisite-of      851
Cluster               65
Micro-Standards       773

With the resulting network map, we can analyze the curriculum for Micro-Standards of interest. Table 3 shows some example graph analytics. Across all grade bands, the vertex with the highest in-degree is that of Micro-Standard 4.NBT.1 Recognize that in a multi-digit whole number, a digit in one place represents ten times what it represents in the place to its right. That is, Micro-Standard 4.NBT.1 has the highest number of adjacent follow-on Micro-Standards in our network model of the Common Core. Five vertices tie for the highest out-degree (i.e., they are the Micro-Standards that have the highest number of direct prerequisite Micro-Standards in our network model). Table 3 lists these as Micro-Standards 1.OA.6, 2.OA.2, 3.OA.7, 3.OA.9, and G-CO.4 in grades 1, 2, 3, 3, and High School, respectively. This kind of analysis provides insight into the elements of the curriculum that have the potential for causing or experiencing large disruption.
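The degree metrics above take only a few lines to compute. The sketch below uses hypothetical vertex names and the model's edge convention (each Micro-Standard points to its direct prerequisites); it is illustrative, not the paper's code.

```python
def degree_metrics(graph):
    """`graph` maps each Micro-Standard to its direct prerequisites
    (the has-prerequisite-of edges). Returns (in_degree, out_degree):
    in-degree counts follow-ons; out-degree counts prerequisites."""
    out_degree = {v: len(graph[v]) for v in graph}
    in_degree = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            in_degree[w] += 1
    return in_degree, out_degree

def all_max(metric):
    """All vertices tied for the maximum value of a metric."""
    best = max(metric.values())
    return sorted(v for v, m in metric.items() if m == best)

# Hypothetical example: S1 is a prerequisite of S2, S3, and S4.
graph = {"S1": [], "S2": ["S1"], "S3": ["S1"], "S4": ["S1", "S2"]}
in_deg, out_deg = degree_metrics(graph)
print(all_max(in_deg))    # ['S1']  (most follow-ons)
print(all_max(out_deg))   # ['S4']  (most direct prerequisites)
```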

Table 3. Graph metrics of the Common Core Mathematics network model (names of Micro-Standards are truncated for brevity).

Metric                  Micro-Standard                                                       Grade

Highest in-degree       4.NBT.1 Recognize that in a multi-digit whole number, a digit        4
                        in one place represents ten times what it represents in the
                        place to its right.

Highest out-degree      1.OA.6 Add and subtract within 20; 2.OA.2 Fluently add and           1; 2; 3; 3; High School
                        subtract within 20; 3.OA.7 Fluently multiply & divide within
                        100; 3.OA.9 Identify arithmetic patterns; G-CO.4 Develop
                        definitions of rotations, reflections, and translations

Highest outgoing rank   9 (17 vertices)

Highest incoming rank   9 (6 vertices)

Finally, we conduct a topological sort of the entire Common Core Mathematics curriculum to look at learning pathways of interest. Of particular interest are learning pathways that are especially long, since these pathways may be highly vulnerable to disruption. These pathways can be found by tracing the vertices with the highest rank in both the incoming and outgoing directions. A total of 17 vertices tie for the highest outgoing rank of nine. For example, G-GPE.3 Derive the equation of an ellipse given the foci in High School has a prerequisite path length of nine; Figure 4 visualizes this path. Note that in our visualization, arrows point from a more fundamental Micro-Standard to a downstream one, since it is more intuitive to visualize learning flow in this direction. This is in contrast to the underlying mathematical model depicted in Figure 2, where the directed has-prerequisite-of edge in the graph points from the downstream Micro-Standard to its prerequisite. Six vertices tie for the highest incoming rank of nine. For example, 2.MD.6 Represent whole numbers as lengths on a number line in Grade 2 leads to a downstream path of length nine, across four grade bands. This branching pathway is visualized in Figure 5.

PIC

Figure 4. Visualization of one of the longest paths of the network: the entire prerequisite chain of G-GPE.3 Derive equation of ellipse given foci.

PIC

Figure 5. Visualization of one of the longest paths of the network: the downstream chain of 2.MD.6 Represent whole numbers as lengths on a number line.

4 Example Application: COVID Disruption in Massachusetts

The resulting network map represents a structured view of how learners move through the Common Core Mathematics curriculum. With this network model, we can follow learning paths, assign probabilities or weights to the edges between vertices, and replicate our analyses. As one application example, we analyze the disruptions caused by the school closures that began on March 15, 2020 in Massachusetts. From March 15 to the end of the school year, Massachusetts schools were either entirely closed or had shifted to online learning. In our example analysis, we consider any Micro-Standard scheduled to be taught during this period to have been disrupted.

For every Micro-Standard that was directly impacted during this time, we assign the vertex a boolean attribute of directly_impacted = true and color that vertex red for visual illustration. For each Micro-Standard that was directly impacted, we follow incident incoming edges of type has-prerequisite-of to arrive at other vertices of type Micro-Standard that depend on the impacted Micro-Standard. Formally, we conduct a breadth-first search to discover the Micro-Standards in order of increasing distance from the initial vertex: the immediate neighbors of the initial vertex are the next Micro-Standards to be disrupted; the neighbors of those Micro-Standards are next in line after that, and so forth. We assign these downstream vertices a boolean attribute of indirectly_impacted = true and color them yellow. We note that our modeling approach is not limited to boolean attributes as used here; vertices can carry different types of values, such as continuous probability values, categorical values, discrete values, etc.
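The propagation step can be sketched as a standard breadth-first search. In the sketch below, `dependents` holds the reverse of the has-prerequisite-of edges (it maps each Micro-Standard to the Micro-Standards that depend on it), and the vertex names are hypothetical rather than actual Micro-Standard codes.

```python
from collections import deque

def propagate_impact(dependents, directly_impacted):
    """BFS from the directly-impacted (red) vertices to all downstream
    (yellow) vertices. `dependents` maps each Micro-Standard to the
    Micro-Standards that list it as a prerequisite."""
    impacted = set(directly_impacted)
    indirect = set()
    queue = deque(directly_impacted)
    while queue:
        v = queue.popleft()
        for w in dependents.get(v, ()):
            if w not in impacted and w not in indirect:
                indirect.add(w)
                queue.append(w)
    return indirect

# Hypothetical vertices: 6.A disrupts 6.B and 7.A; 7.A disrupts 8.A.
dependents = {"6.A": ["6.B", "7.A"], "6.B": [], "7.A": ["8.A"], "8.A": []}
print(sorted(propagate_impact(dependents, {"6.A"})))   # ['6.B', '7.A', '8.A']
```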

PIC

Figure 6. Pathway 1: The directly-impacted Micro-Standard is red; downstream impacted Micro-Standards are highlighted in yellow.

PIC

Figure 7. Pathway 2: The directly-impacted Micro-Standard is red; downstream impacted Micro-Standards are highlighted in yellow.

In one analysis, we analyze the downstream impact on sixth graders. Using the sixth grade syllabus of Cambridge Public Schools (Cambridge Public Schools 2015), we estimated that a total of 27 Micro-Standards were scheduled to be taught during the period of school closures. To show some examples of pathway analyses: Figure 6 illustrates the path of a directly-impacted Micro-Standard, 6.G.4, colored red and located at the top of the figure. This Micro-Standard leads to a second directly-impacted Micro-Standard derived from the same Standard 6.G.4, which leads to 7.G.6, a downstream-impacted Micro-Standard in the seventh grade. In this simple example, we observe how directly-impacted Micro-Standards in the sixth grade lead to a downstream disruption of one Micro-Standard in the seventh grade.

In another, more complex example: Figure 7 traces the downstream path of a single directly-impacted Micro-Standard, 6.NS.8, colored red and located at the top of the figure. 6.NS.8 has three immediate downstream Micro-Standards: 6.NS.8 (a second Micro-Standard derived from the same Standard), 6.G.3, and 7.G.4. While both 6.NS.8 and 6.G.3 were scheduled to be taught during school closures and are thus directly impacted, 7.G.4 was not scheduled to be taught during that time. 7.G.4 is in fact a Micro-Standard taught in the seventh grade. 7.G.4 leads to another Micro-Standard in the seventh grade, 7.G.6, which in turn leads to an eighth grade Micro-Standard, 8.G.8. 8.G.8 has five immediate downstream Micro-Standards: G-GPE.1, G-GPE.3, G-GPE.3 (two Micro-Standards derived from the same Standard), G-GPE.2, and G-GPE.7. These five Micro-Standards are all located in the High School grade band, and they lead to even more downstream Micro-Standards. In this example, we observe that a single Micro-Standard impacted 17 downstream Micro-Standards spanning three grade bands. Our sixth grade analysis showed that the initial 27 Micro-Standards resulted in a total of 37 downstream impacted Micro-Standards, spanning a total of four grade bands. Note that because the High School grade band is counted as a single grade band, more than four grades are likely to have been impacted. All disrupted outcomes in this example are listed in Table 4.

5 Discussion

In mapping the Common Core Mathematics Standards, our process of chunking Standards and identifying linkages between the resulting Micro-Standards requires some level of subjective input. In chunking the Standards, we attempted to preserve the original wording as closely as possible and used grammatical hints such as periods, independent clauses, etc. to divide up Standards. This process of dividing up Standards not only achieves improved uniformity with respect to grain size across Micro-Standards, but also enables more precise relationships between Micro-Standards to be drawn. Even with a panel of subject matter experts, there is unlikely to be complete agreement on all prerequisite relationships; the results presented here based on our own modeling of the relationships are intended to be illustrative. Even if the modeling approach highlights points of disagreement and/or multiple potential prerequisite paths, this in itself could be a useful outcome. Further revision of linkages between Micro-Standards is an ongoing and future undertaking. We note that because we leveraged network models in which relationships are first-class objects, it is a straightforward task to re-run analyses after entities and relationships are edited.

In drawing relationships between Micro-Standards, we acknowledge that there may be missing or extraneous linkages. This issue is present for any model representing a complex dataset. However, a strength of our graph-model approach is that irrelevant links can be surfaced and discarded, and missing links can be revealed, once student activity data are layered in. For example, with the incorporation of student activity into the graph model, we can observe which linkages are indeed relevant or missing, and prune or add as needed. In addition, we have simplified linkages to Boolean values: either an edge exists or it does not. It is straightforward to expand the model so that edges admit numerical weights indicating the strength of the relationship between two Micro-Standards (although assigning these weights will again require subjective expert input). For instance, the numerical strength of a relationship could result from a panel vote of experts or even be derived algorithmically through machine learning. In our particular COVID-19 application, assigning edge weights would lead to non-Boolean determinations of whether downstream Micro-Standards are impacted and is an area of future work.
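One way the weighted extension could change the impact determination is sketched below. The edge weights and the 0.5 threshold are purely illustrative assumptions (the paper leaves weight assignment to future work): a downstream Micro-Standard is marked impacted only when it is reached through a sufficiently strong prerequisite edge, rather than through any edge at all.

```python
# Hypothetical sketch of weighted-edge propagation. Weights in [0, 1]
# and the 0.5 threshold are illustrative assumptions, not values from
# the mapped dataset.
WEIGHTED_EDGES = {
    ("6.NS.8", "6.G.3"): 0.9,   # strong prerequisite relationship
    ("6.NS.8", "7.G.4"): 0.4,   # weaker relationship, below threshold
    ("7.G.4",  "7.G.6"): 0.8,
}

def propagate(weighted_edges, shocked, threshold=0.5):
    """Mark a node impacted when an impacted prerequisite reaches it
    through an edge at or above `threshold`; iterate to a fixed point."""
    impacted = set(shocked)
    changed = True
    while changed:
        changed = False
        for (u, v), w in weighted_edges.items():
            if u in impacted and v not in impacted and w >= threshold:
                impacted.add(v)
                changed = True
    return impacted

print(sorted(propagate(WEIGHTED_EDGES, {"6.NS.8"})))  # ['6.G.3', '6.NS.8']
```

Under the Boolean model every edge behaves as weight 1.0, so the weaker 6.NS.8 → 7.G.4 link would have propagated the impact; the graded model cuts that chain off.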

The mapped network form of the Common Core Mathematics curriculum yields important insights not obtainable from its classic list form. Vertices with high in-degrees are important since they represent Micro-Standards upon which many other Micro-Standards rely. Disruption to achieving high in-degree Micro-Standards will lead to many failures downstream. Vertices with high out-degrees represent the Micro-Standards most sensitive to disruption, as they rely on a great amount of prerequisite mastery. Also of interest are long paths: when Micro-Standards require the learner to retrieve knowledge mastered long before, there may be a greater chance of failure. Long learning paths indicate that additional support, such as just-in-time interventions, may be needed. For instance, Essa (2016) proposes an adaptive learning framework with granular learning objects that surface just-in-time actionable insights and feedback. These observations are important for curriculum design under normal circumstances, but become critical in a crisis such as COVID-19, when learning is widely disrupted. In this paper, we have chosen a particular grade and state to introduce the initial COVID-19 shock. We emphasize that our data-driven network model enables rapid and scalable analyses under different inputs, such as choosing an earlier grade.
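The degree and path statistics discussed above can be computed directly from an adjacency list. The toy graph and node names below are invented for illustration; note that the edge orientation is an assumption here (an edge u → v meaning "u is a prerequisite of v"), and with the reverse orientation the interpretations of the two degree counts swap.

```python
# Minimal degree / longest-chain analysis over a toy prerequisite DAG.
# Orientation assumed here: u -> v means u is a prerequisite of v, so
# out-degree counts dependents and in-degree counts prerequisites.
TOY_EDGES = {
    "A": ["B", "C", "D"],   # A unlocks three later Micro-Standards
    "B": ["D"],
    "C": ["D"],
    "D": ["E"],
}

def degrees(graph):
    """Return (in-degree, out-degree) dicts over every node in the graph."""
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    indeg = {n: 0 for n in nodes}
    for vs in graph.values():
        for v in vs:
            indeg[v] += 1
    outdeg = {n: len(graph.get(n, [])) for n in nodes}
    return indeg, outdeg

def longest_path_len(graph):
    """Length in edges of the longest chain in the DAG, via memoized DFS."""
    memo = {}
    def dfs(n):
        if n not in memo:
            memo[n] = 1 + max((dfs(v) for v in graph.get(n, [])), default=0)
        return memo[n]
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    return max(dfs(n) for n in nodes) - 1

indeg, outdeg = degrees(TOY_EDGES)
print(outdeg["A"], indeg["D"], longest_path_len(TOY_EDGES))  # 3 3 3
```

Here node A has three dependents, node D requires three prerequisites, and the longest chain (A → B → D → E) spans three edges; a long chain like this is the kind of path that may warrant just-in-time support.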

The graph analysis conducted in this paper is illustrative and does not represent the full capability of the network model, nor its significance for curriculum design and adaptive learning applications. There is much scope for further analysis. For instance, graph partition analysis can be useful for discovering and designing parallel tracks of study. A learner model can be superimposed over the base network map to track how individual learners progress through the curriculum. While other studies have visualized the Common Core Standards with linkages (Zimba), to our knowledge, this is the first study to formally construct a network model of the Common Core and unlock graph-based analysis techniques.

6 Conclusion

We present a data-driven, graph-based approach for modeling the Common Core Mathematics curriculum. Our main result is that the network structure enables scalable analysis in tracing relationships and effects along learning paths in the Common Core Math Standards. Using the COVID-19 school closures of spring 2020 as an initial shock, we trace the propagating effects in the network, starting in sixth grade and reaching through high school. Because our approach first discretizes the Common Core Standards into fine-grained statements of skills mastery, we are able to identify with higher precision which Micro-Standards will experience disruption. We have not validated our predictions against student assessment data given ongoing COVID-19 conditions, but our main result reveals vulnerable learning pathways to investigate; validation constitutes an important area for future research. Finally, we note that validation will inevitably prompt revisions, and an important advantage of our network modeling approach is that the graph structure enables easy revision of vertices and edges.

Data access

We make the mapped network dataset publicly available via API access at the MIT Mapping Lab (https://mapping.mit.edu).

Table 4: Impacted outcomes starting from the 6th grade

No. | Outcome | Impact Type | Grade
1 | [6.EE.2c] Evaluate expressions at specific values of their variables. | Directly-impacted | Grade 6
2 | [6.EE.2c] Perform arithmetic operations, including those involving whole number exponents, in the conventional order when there are no parentheses to specify a particular order (Order of Operations). | Directly-impacted | Grade 6
3 | [6.EE.5] Understand solving an equation or inequality as a process of answering a question: which values from a specified set, if any, make the equation or inequality true? | Directly-impacted | Grade 6
4 | [6.EE.5] Use substitution to determine whether a given number in a specified set makes an equation or inequality true. | Directly-impacted | Grade 6
5 | [6.EE.7] Solve real-world and mathematical problems by writing and solving equations of the form x + p = q and px = q for cases in which p, q and x are all nonnegative rational numbers. | Directly-impacted | Grade 6
6 | [6.EE.8] Write an inequality of the form x > c or x < c to represent a constraint or condition in a real-world or mathematical problem. | Directly-impacted | Grade 6
7 | [6.EE.8] Recognize that inequalities of the form x > c or x < c have infinitely many solutions. | Directly-impacted | Grade 6
8 | [6.EE.8] Represent solutions of inequalities x > c or x < c on number line diagrams. | Directly-impacted | Grade 6
9 | [6.EE.9] Use variables to represent two quantities in a real-world problem that change in relationship to one another. | Directly-impacted | Grade 6
10 | [6.EE.9] Write an equation to express one quantity, thought of as the dependent variable, in terms of the other quantity, thought of as the independent variable. | Directly-impacted | Grade 6
11 | [6.EE.9] Analyze the relationship between the dependent and independent variables using graphs and tables, and relate these to the equation. | Directly-impacted | Grade 6
12 | [6.G.1] Find the area of right triangles, other triangles, special quadrilaterals, and polygons by composing into rectangles or decomposing into triangles and other shapes. | Directly-impacted | Grade 6
13 | [6.G.1] Apply techniques that find the area of polygons by composing into rectangles or decomposing into triangles in the context of solving real-world and mathematical problems. | Directly-impacted | Grade 6
14 | [6.G.2] Find the volume of a right rectangular prism with fractional edge lengths by packing it with unit cubes of the appropriate unit fraction edge lengths. | Directly-impacted | Grade 6
15 | [6.G.2] Show that the volume of a right rectangular prism with fractional edge lengths is the same as would be found by multiplying the edge lengths of the prism. | Directly-impacted | Grade 6
16 | [6.G.2] Apply the formulas V = l w h and V = b h to find volumes of right rectangular prisms with fractional edge lengths in the context of solving real-world and mathematical problems. | Directly-impacted | Grade 6
17 | [6.G.3] Draw polygons in the coordinate plane given coordinates for the vertices. | Directly-impacted | Grade 6
18 | [6.G.3] Use coordinates to find the length of a side joining points with the same first coordinate or the same second coordinate. | Directly-impacted | Grade 6
19 | [6.G.3] Apply techniques of drawing on the coordinate plane and finding side lengths in the context of solving real-world and mathematical problems. | Directly-impacted | Grade 6
20 | [6.G.4] Represent three-dimensional figures using nets made up of rectangles and triangles. | Directly-impacted | Grade 6
21 | [6.G.4] Use the nets made up of rectangles and triangles to find the surface area of these figures. | Directly-impacted | Grade 6
22 | [6.G.4] Apply techniques using nets made up of rectangles and triangles in the context of solving real-world and mathematical problems. | Directly-impacted | Grade 6
23 | [6.NS.8] Solve real-world and mathematical problems by graphing points in all four quadrants of the coordinate plane. | Directly-impacted | Grade 6
24 | [6.NS.8] Use coordinates and absolute value to find distances between points with the same first coordinate or the same second coordinate. | Directly-impacted | Grade 6
25 | [6.SP.1] Recognize a statistical question as one that anticipates variability in the data related to the question and accounts for it in the answers. | Directly-impacted | Grade 6
26 | [6.SP.2] Understand that a set of data collected to answer a statistical question has a distribution which can be described by its center, spread, and overall shape. | Directly-impacted | Grade 6
27 | [6.SP.3] Recognize that a measure of center for a numerical data set summarizes all of its values with a single number, while a measure of variation describes how its values vary with a single number. | Downstream impacted | Grade 6
28 | [6.SP.4] Display numerical data in plots on a number line, including dot plots, histograms, and box plots. | Directly-impacted | Grade 6
29 | [6.SP.5a] Summarize numerical data sets in relation to their context by reporting the number of observations. | Downstream impacted | Grade 6
30 | [6.SP.5c] Summarize numerical data sets in relation to their context by giving quantitative measures of center (median and/or mean) and variability (interquartile range and/or mean absolute deviation). | Downstream impacted | Grade 6
31 | [6.SP.5c] Summarize numerical data sets by describing any overall pattern and any striking deviations from the overall pattern with reference to the context in which the data were gathered. | Downstream impacted | Grade 6
32 | [6.SP.5d] Summarize numerical data sets in relation to their context by relating the choice of measures of center and variability to the shape of the data distribution and the context in which the data were gathered. | Downstream impacted | Grade 6
33 | [7.G.1] Solve problems involving scale drawings of geometric figures. | Downstream impacted | Grade 7
34 | [7.G.4] Use the formulas for the area and circumference of a circle to solve problems. | Downstream impacted | Grade 7
35 | [7.G.6] Solve real-world and mathematical problems involving area of 2-D objects. | Downstream impacted | Grade 7
36 | [7.G.6] Solve real-world and mathematical problems involving volume and surface area of 3-D objects. | Downstream impacted | Grade 7
37 | [7.SP.1] Understand that statistics can be used to gain information about a population by examining a sample of the population. | Downstream impacted | Grade 7
38 | [7.SP.1] Understand that generalizations about a population from a sample are valid only if the sample is representative of that population. | Downstream impacted | Grade 7
39 | [7.SP.1] Understand that random sampling tends to produce representative samples and support valid inferences. | Downstream impacted | Grade 7
40 | [7.SP.2] Use data from a random sample to draw inferences about a population with an unknown characteristic of interest. | Downstream impacted | Grade 7
41 | [7.SP.2] Generate multiple samples (or simulated samples) of the same size to gauge the variation in estimates or predictions. | Downstream impacted | Grade 7
42 | [7.SP.3] Informally assess the degree of visual overlap of two numerical data distributions with similar variabilities, measuring the difference between the centers by expressing it as a multiple of a measure of variability. | Downstream impacted | Grade 7
43 | [7.SP.4] Use measures of center and measures of variability for numerical data from random samples to draw informal comparative inferences about two populations. | Downstream impacted | Grade 7
44 | [8.G.6] Explain a proof of the Pythagorean Theorem and its converse. | Downstream impacted | Grade 8
45 | [8.G.7] Apply the Pythagorean Theorem to determine unknown side lengths in right triangles. | Downstream impacted | Grade 8
46 | [8.G.8] Apply the Pythagorean Theorem to find the distance between two points in a coordinate system. | Downstream impacted | Grade 8
47 | [G-C.2] Identify and describe relationships among inscribed angles, radii, and chords. | Downstream impacted | High School
48 | [G-C.3] Construct the inscribed and circumscribed circles of a triangle. | Downstream impacted | High School
49 | [G-C.3] Prove properties of angles for a quadrilateral inscribed in a circle. | Downstream impacted | High School
50 | [G-C.4] Construct a tangent line from a point outside a given circle to the circle. | Downstream impacted | High School
51 | [G-GPE.1] Derive the equation of a circle of given center and radius. | Downstream impacted | High School
52 | [G-GPE.1] Complete the square to find the center and radius of a circle given by an equation. | Downstream impacted | High School
53 | [G-GPE.2] Derive the equation of a parabola given a focus and directrix. | Downstream impacted | High School
54 | [G-GPE.3] Derive the equation of a hyperbola given the foci. | Downstream impacted | High School
55 | [G-GPE.3] Derive the equation of an ellipse given the foci. | Downstream impacted | High School
56 | [G-GPE.4] Use coordinates to prove simple geometric theorems algebraically. | Downstream impacted | High School
57 | [G-GPE.5] Prove the slope criteria for parallel and perpendicular lines. | Downstream impacted | High School
58 | [G-GPE.5] Use the slope criteria for parallel and perpendicular lines to solve geometric problems. | Downstream impacted | High School
59 | [G-GPE.6] Find the point on a directed line segment between two given points that partitions the segment in a given ratio. | Downstream impacted | High School
60 | [G-GPE.7] Use coordinates to compute perimeters of polygons and areas of triangles and rectangles. | Downstream impacted | High School
61 | [G-SRT.9] Derive the formula for the area of a triangle by drawing an auxiliary line from a vertex perpendicular to the opposite side. | Downstream impacted | High School
62 | [S-ID.1] Represent data with plots on the real number line (dot plots, histograms, and box plots). | Downstream impacted | High School
63 | [S-ID.2] Use statistics appropriate to the shape of the data distribution to compare center and spread of two or more different data sets. | Downstream impacted | High School
64 | [S-ID.3] Interpret differences in shape, center, and spread in the context of the data sets, accounting for possible effects of extreme data points (outliers). | Downstream impacted | High School

References

   National Governors Association Center for Best Practices Council of Chief State School Officers, “Common core state standards: Mathematics,” 2010.

   T. Auvinen, “Curriculum development using graphs of learning outcomes,” in First EUCEET Association Conference: “New Trends and Challenges in Civil Engineering Education”, Patras, 2011.

   H. Miller, K. Willcox, and L. Huang, “Crosslinks: Improving course connectivity using online open educational resources,” The Bridge, vol. 46, no. 3, pp. 38–44, 2016.

   J. Seering, L. Huang, and K. Willcox, “Mapping outcomes in an undergraduate aerospace engineering program,” in Proceedings of the American Society for Engineering Education 12th Annual Conference & Exposition, Seattle, WA, June 2015.

   B. Bargel, J. Schrock, D. Szentes, and W. Roller, “Using learning maps for visualization of adaptive learning path components,” International Journal of Computer Information Systems and Industrial Management Applications, vol. 4, pp. 228–235, 2012.

   A. Battou, C. Mezouary, C. Cherkaoui, and D. Mammass, “Towards an adaptive learning system architecture based on a granular learning object framework,” International Journal of Computer Applications, vol. 32, no. 5, pp. 8–14, 2011.

   J. Collins, J. Greer, and S. Huang, “Adaptive assessment using granularity hierarchies and Bayesian nets,” in International Conference on Intelligent Tutoring Systems, 2005, pp. 569–577.

   A. Essa, “A possible future for next generation adaptive learning systems,” Smart Learning Environments, vol. 3, no. 16, 2016.

   J. Bardet, I. Yen, D. McLeod, G. Ragusa, and N. Mokarram, “Ontologies and web semantics for improvement of curriculum in civil engineering,” in Proceedings of the 2008 American Society for Engineering Education Annual Conference and Exposition, June 2008.

   M. Yudelson, I. Yen, E. Panteleev, and L. Khan, “A framework for an intelligent on-line education system,” in Proceedings of the 2003 American Society for Engineering Education Annual Conference and Exposition, June 2003.

   K. Willcox and L. Huang, “Network models for mapping educational data,” Design Science Journal, vol. 3, no. e18, 2017.

   X. Chen and C. Xue, “Network visual exploration for the cooperation map of courses in different major curricula,” Educational Sciences: Theory and Practice, vol. 18, no. 6, 2018.

   R. Ghannam and I. Ansari, “Interactive tree map for visualising transnational engineering curricula,” in 2020 Transnational Engineering Education using Technology (TREET), 2020.

   S. Samaranayake, “Dependency evaluation and visualization tool for systems represented by a directed acyclic graph,” International Journal of Advanced Computer Science and Applications, vol. 11, no. 7, 2019.

   I. Kaya, “Artificial neural networks as a decision support tool in curriculum development,” International Journal on Artificial Intelligence Tools, vol. 28, no. 4, 2019.

   T. Cavanagh, B. Chen, R. Lahcen, and J. Paradiso, “Constructing a design framework and pedagogical approach for adaptive learning in higher education: A practitioner’s perspective,” International Review of Research in Open and Distance Learning, vol. 21, no. 1, pp. 172–196, 2019.

   P. Daro, B. McCallum, and J. Zimba, “The structure is the standards,” http://commoncoretools.me/

The post Modeling COVID-19 disruptions via network mapping of the Common Core Mathematics Standards appeared first on ASEE Computers in Education Journal.

Integrating Computer Science across Wyoming’s K-12 Curriculum from Inception to Implementation: Analysis Using Systems Theory https://coed-journal.org/2023/06/15/integrating-computer-science-across-wyomings-k-12-curriculum-from-inception-to-implementation-analysis-using-systems-theory/ Thu, 15 Jun 2023 14:48:21 +0000 https://coed-journal.org/?p=4332

A unique method for promoting reflection among engineering students was used in the present study involving a digital circuits course. The method combined computer-based simulation for digital circuit design with reflective-thought prompts after a midterm exam for post-exam analysis and reflection. This method was first implemented in a microelectronics course using the SPICE simulator. Lessons learned from the initial implementation were applied to the digital circuits course. These lessons included the need to scaffold students in the use of the simulation tool for reflection, the need to balance frequency of reflection with student workload and fatigue, and question prompts that voluntarily elicit broad thought after a milestone event such as a midterm exam (versus a quiz). Using a published depth rubric, the assessment found increased depth of reflection in the present course relative to the initial implementation in microelectronics. Specifically, there were increases in depth of reflection after the midterm exam in the present course versus the midterm exam and two quizzes in the microelectronics course. The increases in depth were significant relative to the quizzes. There was also an increase in the relative occurrence of broad reflections in the present course, with significant differences compared to the quizzes. Although significant differences were not found in the final exam averages based on depth of reflection after the midterm exam or participation in this reflection, results from a follow-up survey several months after the course ended indicated benefit for students.
Specifically, 80% of those who completed the reflection exercise indicated a high or very high perceived benefit from doing so. Of the approximately 50% who chose not to complete the reflection exercise, the primary reasons were identified via the follow-up survey. Findings from this work align with and add to the developing literature on student reactions to reflection.



Integrating K-12 Computer Science in Wyoming

Wyoming K-12 students enjoy constitutional protection for a “fair, complete, and equal education, appropriate for the times” 1 (p. 1). The Wyoming Supreme Court declared that education must be delivered consistently throughout all Wyoming school districts, an entitlement that extends to every Wyoming K-12 student. These educational offerings, often referred to as deliverables or a “basket of goods” 1 (p. 11), are the right of every Wyoming child. While not unique, Wyoming is unusual in this approach to education, as most school districts rely on local funding models based on property tax. That more common model creates inequity among school districts based on local wealth. Wyoming statutorily protects all students against this kind of inequality 1.

The Wyoming Department of Education (WDE) guides the content of, and changes to, the basket of goods. Beginning in the 2022–2023 school year, universal K-12 computer science education was added to the deliverables. The effort began in 2015 with an executive action by then-Governor Matt Mead. The executive and legislative work are now complete, and implementation is underway to make universal K-12 computer science education a reality in Wyoming 2.

While many Wyoming school districts have delivered computer science education, this was not true in all schools and depended on district resources. One limiting factor is the presence of qualified teachers within individual districts. For universal K-12 computer science education to become a reality, every district and most individual schools need access to teachers who have earned a K-12 endorsement in computer science (CS). The process for certifying teachers in CS has begun.

Wyoming’s higher education system comprises seven community colleges and the University of Wyoming. Northwest College (NWC) in Powell is one of these community colleges. In the summer of 2020, NWC launched a K-12 CS Endorsement Skills Certificate to qualify in-service teachers in CS to meet Wyoming’s evolving vision for education. Thirteen K-12 teachers completed this program in the spring of 2021 and are now certified to teach CS in Wyoming’s K-12 schools. The program continues, with additional graduates each year.

This vision for universal K-12 CS education is informed by the Computer Science Teachers’ Association K-12 standards 3 and explicated in the Wyoming Computer Science Content Standards:

Every student in every school has the opportunity to learn computer science. We believe that computing is fundamental to understanding and participating in an increasingly technological society, and it is essential for every Wyoming student to learn as part of a modern education. We see computer science as a subject that provides students with a critical lens for interpreting the world around them and challenges them to explore how computing and technology can expand Wyoming’s impact on the world.

2

The ideals of fair and equal education in general, and the universal availability of CS education and its implementation at NWC specifically, form the basis of this study’s objective: to explain Wyoming’s process of preparing teachers to deliver computer science and computational thinking in their classrooms through the lens of systems theory. This involves developing a chronology of Wyoming’s CS education initiative from inception to implementation and recognizing the work of the first cohort of teachers in NWC’s computer science endorsement program. To that end, the following research question is considered: How does systems theory provide a model for understanding Wyoming’s universal K-12 CS education delivery?

Literature Review

Archival Research

This study relies on archival research involving primary sources held in repositories. Archival sources include physical records, electronic records, and other materials 4. Each archival research work is unique because each archive has different resources, access, and constraints. Although generalizing archival work is not uniformly studied, this research paper is based on the construction of a chronology, similar to the methodology used by historians (Hammond, 2002).

This study depends heavily on official documents from primary sources: policy, legislation, recorded minutes of meetings, and in-person interviews. Secondary sources, such as newspapers or other media reports, are not used. The use of sequential, official documents from trustworthy sources lends itself to a systematic and incremental study with minimal ideological bias (L’eplattenier, 2009).

Classical Systems Theory

Systems theory asserts that the design of a system predicts its outcome; if the outcome is undesirable, individual system components can be modified to alter the output. When systems theory is applied to engineering systems, meaning is created from the interactions of many different components 7.

Adams, Hester, Bradley, Meyers, and Keating (2014) outlined the history of systems thought by summarizing definitions provided by four foundational researchers. Von Bertalanffy 9 focused on the formal correspondence of general principles, regardless of the relationships between system components, while Boulding (1956) viewed general systems theory as a framework or structure on which to “hang flesh and blood” to develop an orderly and coherent body of knowledge relevant to different disciplines. Building on these foundational researchers, Klir (1972) conceptualized systems theory as a way to view phenomena as interrelated rather than isolated, which provides a way to study the complexity of a system, and Gigch (1974) focused on the relationships between systems and subsystems. Gigch studied the ideas of optimization and suboptimization and the sometimes “fruitless” efforts expended in the pursuit of summum bonum, or the “ultimate good.” In classical systems analysis, the entirety of the universe is divided into two elements: a system and its surroundings (Cengel & Boles, 2007).

Regardless of the field of application, systems can be designed for summum bonum but often fall short of this goal. Keating, Bradley, Katina, and Arndt (2018) postulated that suboptimization may be a more appropriate goal in terms of the expenditure of resources.

In summary, systems theory is used to analyze complex organizations, considering the parts of a system as they relate to each other while the system aims to achieve desired outcomes. The outcome of the system is viewed as the end-product of its components; if desired outcomes are not achieved, the components are examined and changed until they are. From a practical standpoint, suboptimization may be a more efficient goal than optimization because it expends less fruitless effort.

Systems Theory Applied to Social Sciences

The problems facing the modern world are complex and systemic by nature and cannot be understood in isolation (Hammond, 2002). Interconnectedness and interdependence must be considered in the analysis of modern social systems, and studying human organizations using systems theory is timely and important. Hammond (2002) emphasized the importance of dialog in the decision-making processes of social systems. Additionally, she developed the possibility of using systems theory in a less technical, more humanistic way when applied to human systems instead of engineering systems, broadening the applicability of systems theory to a wide array of social problems.

There are differing views of how systems theory applies to social systems (Adams et al., 2014), but central to the research is the concept of communication and interconnectedness as key components of any social system (Grothe-Hammer, 2020; Kneupper, 1980; Luhmann, 1995) with self-sustainability as a goal.

Modern systems theory applied to social systems is based upon a social constructivist understanding of social reality, where meaning is constructed through communication (Kneupper, 1980). Grothe-Hammer (2020) postulated that changes in social systems occur through communication, and Luhmann (1995) saw communication media as the “universal key,” or a “super methodology,” to explain systems processes. The literature is clear on the importance of communication within a social system; some authors go so far as to say that the system is created by communication alone (Grothe-Hammer, 2020).

Another key element of systems theory is that social systems are viewed as holistic, meaning that the individual components of the system are interrelated and can only be analyzed with respect to other components 18. A holistic system can become capable of decision-making and self-management. This concept is called autopoiesis, meaning self-production (, 1981). Lewis (2021) tells us that any system requiring “human vigilance” will degrade over time; when autopoiesis is achieved, constant intervention is no longer necessary for the system to continue functioning, as the system becomes self-sustaining. This self-sustaining quality serves as a working definition of autopoiesis.

The extent to which autopoiesis can be applied to social systems is debated. Cadenas and Arnold (2015) argued that autopoiesis is a fundamental concept of constructivist epistemology and applies to social systems if the definition is narrowed to mean self-sustaining. Similarly, a systems-thinking framework reflects upon the end-user’s experiences (Burrows, Borowczak, & Mugayitoglu, 2021; Chen & Venkatesh, 2013; Sweeney & Meadows, 2010).

Adams et al. (2014) proposed a definition of systems theory as applied to social systems, articulated as a set of axioms forming a system construct, shown in Table 1.

Table 1: Axioms for Systems Theory Construct

Axiom | Description
Centrality | Communication and control create feedback as the dominant system building block
Contextual | System meaning is informed by the circumstances and factors surrounding the system
Goal | Systems achieve specific goals through purposeful behavior using pathways and means
Operational | Removing a system from its environment changes its behavior; systems must be studied where they operate (in situ)
Viability | Key parameters in a system must be controlled to ensure continued existence (autopoiesis)
Design | System design is a purposeful imbalance of resources and relationships
Information | Human systems create, possess, transfer, and modify information

In summary, systems theory can be applied to social systems when the definitions of component and autopoiesis are narrowed to mean communication and self-sustaining, respectively. Under these conditions, a social system can be analyzed using the axioms of Adams et al. 8 to characterize the system.

Literature Gap

This study focuses on the content and application of systems theory to an educational organization. Systems theory has been widely applied to study engineering systems and, more recently, has been investigated as a way to study social systems. However, a gap in the literature exists when considering educational systems through systems theory. Banathy and Jenlink (2013) stated the following:

With very few exceptions, systems philosophy, systems theory, and systems methodology as subjects of study and applications are only recently emerging as topics of consideration in educational professional development programs, and then only in limited scope. Generally, capability in systems inquiry is limited to specialized interest groups in the educational research community. It is our firm belief that unless our educational communities and our educational professional organizations embrace systems inquiry, and unless our research agencies learn to pursue systems inquiry, the notions of “systemic reform” and “systemic approaches to educational renewal” will remain a hollow and meaningless rhetoric.

(25, p. 47)

Based on this literature gap, it is appropriate and important to study educational systems, such as the delivery of universal K-12 education in Wyoming, using systems theory.

Context of the Study

Wyoming’s K-12 Computer Science Initiative

In 2019, then-Governor Matt Mead issued an executive action to add “the use and understanding of computer science” to the educational deliverables guaranteed to each Wyoming K-12 student. This initiative was part of the Wyoming Innovation Network/Wyoming Innovation Partnership (WIN/WIP) and was the first step in the delivery of universal CS education in Wyoming K-12 schools 26, 27.

Following the executive action, the governor’s office created a task force to determine how to implement the action, and a legislative committee was empaneled to study CS in Wyoming’s K-12 schools. At its November 14, 2018 meeting, the committee voted to sponsor 18LSO0221, “Education – Computer Science and Computational Thinking,” during the 2019 legislative session. The bill was passed by the Wyoming Legislature as SF0029, “Education – Computer Science and Computational Thinking” (Northrup, Dechert, Floyd, & Burrows, 2021). Table 2 describes the measures enacted by SF0029.

Table 2: Measures Enacted by SF0029

Action | Item | Standards
Added | Computer Science | Common Core of Knowledge
Added | Computational Thinking | Common Core of Skills
Authorized | Use of a CS course | High School graduation requirements and Hathaway Success Curriculum

In response to SF0029, the WDE developed the Wyoming CS Content Standards 29. These standards are based on the seven practices enumerated in Table 3.

Table 3: WDE CS Content Standards

Practice | Description
1 | Fostering an inclusive computing culture
2 | Collaborating around computing
3 | Recognizing and defining computational problems
4 | Developing and using abstractions
5 | Creating computational artifacts
6 | Testing and refining computational artifacts
7 | Communicating about computing

Following SF0029, the Professional Teaching Standards Board (PTSB) began the process of certifying K-12 teachers to teach CS in Wyoming schools. One method of certification is for practicing teachers to earn a K-12 CS Endorsement Skills Certificate through a Wyoming institution of higher education, then use this endorsement to apply to the PTSB for certification. Six of Wyoming’s community colleges and the University of Wyoming have created K-12 Computer Science Endorsement Skills Certificates based upon these standards and PTSB requirements. Each program shares eight credits in two common classes (Introduction to Computer Science and Computer Science I), and completion of each program requires 15 to 20 semester credits 30, 31. The remaining 7 to 12 credits are selected from various courses, including social media, robotics, web design, and additional courses in computer science.

All endorsement programs must be approved through the PTSB. Additionally, all programs offered by Wyoming Community Colleges must be approved by the Wyoming Community College Commission (WCCC).

Current State of NWC’s K-12 CS Endorsement Program

In response to SF0029, NWC formed a faculty development team to create the curriculum for the K-12 CS Endorsement Skills Certificate. The coursework was selected and refined based on the WDE CS Content and Performance Standards, in conjunction with PTSB requirements. The skills certificate consists of 15 semester credits, distributed among five courses. Two of these courses (Introduction to Computer Science and Computer Science I) were existing catalog courses that address Practices 3, 4, 5, and 6 of the Wyoming Department of Education Computer Science Standards. Robotics was modified from an existing catalog course to address Practices 2 and 3 (Northwest College, 2018). Two new courses were created: Application Development and Social Media for K-12 Teachers (Northwest College, 2020). Application Development addresses Practices 5 and 6, whereas Social Media for K-12 Teachers addresses Practices 1, 2, and 7 (Northrup et al., 2021; Wyoming Department of Education, 2021).

The first step in the internal approval process was for the faculty development team to present the K-12 Computer Science Skills Certificate to NWC’s Curriculum Committee. New courses and the modified course required approval as NWC catalog courses, and the program required preliminary approval as a skills certificate. The courses and certificate program were approved at the regular meeting on December 10, 2019 34.

The next step toward approval was for NWC administration to bring the skills certificate to the Wyoming PTSB for consideration as an endorsement program for certified teachers to teach CS within their grade bands. The program was presented and approved for endorsement in January 2020 31.

The final step was to seek approval from the WCCC for the K-12 CS endorsement program to be approved as a skills certificate. This approval was granted at the regular meeting of the WCCC held on April 16, 2020, clearing the path to begin offering the skills certificate to in-service teachers who wanted to earn K-12 CS endorsements 35.

The teachers in NWC’s first cohort completed Introduction to Computer Science in the 2020 summer semester, followed by Computer Science I and Social Media for K-12 teachers in the fall of 2020. In the spring 2021 semester, the teachers completed Application Development, and they finished their skills certificate with Robotics in the 2021 summer semester. In total, 13 teachers earned their skills certificates and endorsements in 2021.

Based on the results of the initial offering, the general format was found to be effective, and NWC will continue to serve its constituency and meet its mission by refining the K-12 CS endorsement program to remain relevant over time (Northrup et al., 2021).

In the summer semester of 2021, a second cohort of six teachers began coursework for the K-12 CS Skills Certificate. Based on feedback from the first cohort, minor modifications to the schedule were introduced, and these six teachers are on schedule to earn their CS Skills Certificates in the Summer of 2022. Future classes follow the same general outline with modifications to the schedule and content determined based on student feedback. Further curriculum refinement, including the introduction of different programming languages and environments, will be determined in cooperation with the faculty development team and student feedback.

Methods and Results

The methodology to address the research question “How does systems theory provide a model for understanding Wyoming’s universal K-12 CS education delivery?” is qualitative, using archival research methods. The theoretical framework of systems theory is used to interpret the archival research, and the methods focus on understanding the wholeness of the delivery of universal CS education in Wyoming as a social system (Bridgen, 2017). Analysis requires examining individual steps, but systems theory integrates these steps into an interconnected whole.

Credibility and Trustworthiness (Reliability and Validity)

The trustworthiness of archival research depends on the veracity of the documents studied. Bias is minimized because the documents are prepared by sources other than the researcher. As Donaldson (2016) states, “We can trust a text if it is the work of an individual or group of individuals whom we can trust”. This study relies heavily on official documents from primary sources: policy, legislation, official press releases, and recorded minutes of meetings. The use of sequential, official documents from trustworthy sources lends itself to a systematic and incremental study with minimal ideological bias (L’eplattenier, 2009).

Because of the vast array of available documents, selecting viable documents from well-grounded sources is key to establishing credibility and trustworthiness. Studying these documents in a systematic process creates a body of evidence that can be analyzed using systems theory.

Procedures

The archival research was conducted primarily through the electronic retrieval of official documents, searching for the “next steps” determined by each previous step and finding documents created on the basis of prior documents. Every official document used was a compilation or continuation of many other documents and conversations codified into a single official document. This procedure allowed for a systematic study, resulting in a logical chronology of events leading from one to the other based on official accounts from credible sources.

Analysis

The analysis below explains the data flow and interconnectedness of the system using the centrality and contextual axioms (Adams et al., 2014). The perspective follows Grothe-Hammer’s view 17 of communication as the sole component of the system.

Analysis Using the Centrality Axiom

The centrality axiom (Adams et al., 2014) facilitates a discussion of communications between nodes to identify where and how decisions are made and how they influence the rest of the system. The starting point of the analyzed system is the WIN/WIP executive action by Governor Mead to introduce universal K-12 computer science to Wyoming K-12 schools. The endpoint is Wyoming teachers who earned endorsements to teach K-12 CS through NWC. This start and end represent the current boundaries of the system, which were created to narrow the focus of the analysis. Similar systems could be constructed for teachers completing other endorsement programs. Table 4 describes the pathway for certifying K-12 CS teachers through NWC’s approved skills certificate, displaying the data flow between each system node. The chronology of the system moves forward based on data flow rather than on a narrative, as it does in traditional archival research. Data flow occurs between nodes that define the points of contribution to the system 38.

Because full implementation is now in the earliest stage, the availability of universal K-12 CS education in Wyoming is viewed as a data flow out of the system. After full implementation, this aspect could be reanalyzed as the endpoint of the system.

Table 4: Applying the Centrality Axiom to the pathway for certifying K-12 CS teachers through NWC’s skills certificate program

Node | Description | Communication | Decision
1 | Executive Action | WIN Initiative/SEA0029 | Development of legislation
2 | Statute SEA0029 | Vote by Wyoming legislature | Majority vote passes statute into law (SF0029)
3 | WDE | Content and Performance Standards | See below
3.1 | WDE | Content Standards: CS Standards Review committee empaneled; standards developed collaboratively to include public input | Approved 1/4/2020
3.2 | WDE | Performance Standards: based on Content Standards, twelve teachers from the original CS Standards Review committee determined performance standards and the required deliverables from the content standards | Approved 4/7/2021
4 | NWC Faculty Development Team | Development of coursework and delivery methods leading to K-12 CS endorsement for certified teachers | Creation of a program of study for K-12 CS Endorsement skills certificate
5 | NWC Curriculum Committee | Faculty development team presents curriculum to NWC’s curriculum committee at the regular meeting | A majority vote approves and enacts K-12 CS endorsement Skills Certificate program of study
6 | WCCC | WCCC meeting to review K-12 CS endorsement skills certificate program of study | Approval by majority vote to implement K-12 CS endorsement Skills Certificate
7 | PTSB | PTSB meeting to review program of study | Approval by majority vote to certify K-12 CS endorsement Skills Certificate to fulfill requirement to certify teachers
8 | Delivery of curriculum | NWC faculty team delivers the curriculum to first cohort of teachers | 13 teachers earn K-12 CS endorsement Skills Certificate from NWC and certification from PTSB
9 | Endorsed teachers | Future: teachers to develop and deliver universal K-12 computer science education to Wyoming K-12 students | Future: universal K-12 computer science education delivered to Wyoming students by certified teachers
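The chronological data flow between the nodes in Table 4 can be pictured as a simple directed graph. The sketch below is illustrative only: the abbreviated node labels and the `downstream` helper are this example's assumptions, not part of the study's analysis.

```python
# Illustrative sketch: the certification pathway in Table 4 as a directed
# data-flow graph. Node labels are abbreviated by this example's author.
from collections import OrderedDict

# Nodes 1-9 from Table 4, in chronological order of data flow.
nodes = OrderedDict([
    (1, "Executive Action (WIN Initiative)"),
    (2, "Statute SEA0029 / SF0029"),
    (3, "WDE Content and Performance Standards"),
    (4, "NWC Faculty Development Team"),
    (5, "NWC Curriculum Committee"),
    (6, "WCCC"),
    (7, "PTSB"),
    (8, "Delivery of curriculum"),
    (9, "Endorsed teachers"),
])

# Each edge is a data flow from one node to the next.
edges = [(n, n + 1) for n in range(1, 9)]

def downstream(start, edges):
    """Return all nodes reachable from `start` via data-flow edges."""
    reached, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        for src, dst in edges:
            if src == node and dst not in reached:
                reached.add(dst)
                frontier.append(dst)
    return sorted(reached)

# A change at Node 3 (WDE standards) propagates to every later node.
print(downstream(3, edges))  # -> [4, 5, 6, 7, 8, 9]
```

Traversing the graph makes the centrality axiom concrete: a decision at any node propagates, via data flow, to every downstream node in the system.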

Analysis Using the Contextual Axiom

The contextual axiom examines the circumstances and factors surrounding the system (Adams et al., 2014). The system is acted upon by external inputs and delivers external outputs. Both inputs and outputs occur through data flows.

Figure 1 presents a context-level system diagram. The system is teachers endorsed in K‑12 computer science through NWC, which is identical to the endpoint of the system analyzed using the centrality axiom (Node 9). Nodes 1 to 8 are external inputs and outputs to the system, and communications are data flows.

https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/7a194100-5b6a-42e2-85aa-4f9606801391image1.png
Figure 1: Context-Level System Diagram

Results/Findings

This study outlines the process of delivering universal K-12 CS education to Wyoming K-12 students and analyzes that process using systems theory. Applying the centrality axiom (Adams et al., 2014) yields a flow chart that models the behavior of this delivery as an organization described as a social system. Applying the contextual axiom (Adams et al., 2014) yields a context-level system diagram of the same organization. Together, these results indicate the validity of viewing universal K-12 computer science delivery in Wyoming through systems theory as applied to a social system.

The goal (output or product) of the system is to ensure current K-12 teachers are certified to teach CS in Wyoming’s K-12 schools. The delivery of this education is viewed as a data flow from the system because universal K-12 computer science began in the 2022–2023 school year.

Limitations and Future Research

The time required to deliver universal K-12 computer science education in Wyoming is a potential topic for future study and may represent a suboptimization of the system. Zac Opps, a content expert in K-12 robotics and a former K-12 teacher now working for Digital Promise, a nonprofit organization with the goal of closing the digital learning gap by accelerating innovation in education and improving opportunities to learn 39, raised the question of which nodes and data flows could be optimized to increase delivery speed. Opps suggested that it is inefficient for a low-population state like Wyoming to take nearly a decade to implement universal CS education. While many districts have already done so, creating a reliable system in which every Wyoming K-12 student has guaranteed access is far from complete and does not yet meet the state of Wyoming’s goal of delivering universal CS education. Unforeseen delays in timing and other delivery barriers can be studied at the district and school levels, then considered as part of the system.

A current limitation is that implementation is in the earliest stages. Future steps should extend the systems analysis to include the full implementation of universal K-12 CS in Wyoming schools as the output of the system.

Universal K-12 computer science was implemented during the 2022–2023 school year, and future research using systems theory can analyze the effectiveness of universal K-12 CS education.

Conclusions

“We tell ourselves stories in order to live.” So begins The White Album, an essay by Joan Didion (40, p. 1).

When Didion wrote that statement, she was referring to the basic human need to construct order from chaos. How is the disorder of life made orderly? She went on to say, “We interpret what we see, select the most workable of the multiple choices” (40, p. 1). On one level, Didion’s essay refers to literature. On another level, she discusses writing a chronology from newspaper articles and firsthand accounts: in essence, The White Album is a work of archival research. Separate documents from different sources are gathered and arranged in an order that provides structure and connections between seemingly unconnected events, creating a narrative with chronology, form, and ultimately, meaning.

This study tells the narrative of Wyoming’s universal K-12 CS delivery initiative. The research question “How does systems theory provide a model for understanding Wyoming’s delivery of universal K-12 computer science education?” is answered by considering Wyoming’s K-12 CS initiative chronologically based on archival research then viewed through the lens of systems theory to give the story form and meaning.

Whether or not Wyoming’s CS initiative is a system that attains autopoiesis is yet to be determined: implementation began in the 2022-2023 school year, and more data should be gathered and analyzed to determine whether the system becomes self-sustaining. Systems theory tells us that if the system begins to degrade over time, the components – the communication feedback loops – can be examined to determine where problems lie. Corrections can be made at the component level to achieve the desired output. These corrections will enable the system to become self-sustaining.

The desired output (summum bonum, or the ultimate good) may be optimization, but suboptimization may be determined to be the goal. As an example, the long time span required to implement the delivery of K-12 CS education may be appropriate if speeding things up would be too costly, or if a longer time frame is required to make the changes needed to keep K-12 CS education relevant under changing conditions.

In conclusion, the delivery of universal K-12 CS education in Wyoming can be viewed as a system. Using the centrality axiom with communication viewed as the sole component of the system, a logical and systematic view of the system is developed. Whether the system attains autopoiesis will become known when universal CS education is delivered in all Wyoming school districts and a determination of whether the system degrades or self-regulates can be made. The long time period required for implementation may indicate that the system is not optimized, but a sufficient degree of suboptimization may be determined to be the most appropriate goal. If communication between all inputs and outputs exists in a self-regulating manner, the feedback loop at each juncture can be analyzed and improvements made at the component level could result in a more efficient system.

The post Integrating Computer Science across Wyoming’s K-12 Curriculum from Inception to Implementation: Analysis Using Systems Theory appeared first on ASEE Computers in Education Journal.

Implementation of Lessons Learned to Simulation-Based Reflection in a Digital Circuits Course https://coed-journal.org/2022/12/30/implementation-of-lessons-learned-to-simulation-based-reflection-in-a-digital-circuits-course/ Fri, 30 Dec 2022 22:10:51 +0000 https://coed-journal.org/?p=4309

A unique method for promoting reflection among engineering students was used in the present study involving a digital circuits course. The method combined computer-based simulation for digital circuit design with reflective-thought prompts after a midterm exam for post-exam analysis and reflection. This method was first implemented in a microelectronics course using the SPICE simulator. Lessons learned from the initial implementation were applied to the digital circuits course. These lessons included the need to scaffold students in the use of the simulation tool for reflection, the need to balance the frequency of reflection with student workload and fatigue, and question prompts that elicit broad, voluntary reflection after a milestone event such as a midterm exam (versus a quiz). Using a published depth rubric, the assessment found increased depth of reflection in the present course relative to the initial implementation in microelectronics. Specifically, there were increases in depth of reflection after the midterm exam in the present course versus the midterm exam and two quizzes in the microelectronics course. The increases in depth were significant relative to the quizzes. There was also an increase in the relative occurrence of broad reflections in the present course, with significant differences compared to the quizzes. Although significant differences were not found in the final exam averages based on depth of reflection after the midterm exam or participation in this reflection, results from a follow-up survey several months after the course ended indicated benefit for students.
Specifically, 80% of those who completed the reflection exercise indicated a high or very high perceived benefit from doing so. Of the approximately 50% who chose not to complete the reflection exercise, the primary reasons were identified via the follow-up survey. Findings from this work align with and add to the developing literature on student reactions to reflection.

The post Implementation of Lessons Learned to Simulation-Based Reflection in a Digital Circuits Course appeared first on ASEE Computers in Education Journal.


Introduction

Reflection can be defined as thinking about what one is doing 1. Kolb’s Experiential Learning Theory says that learning occurs through doing plus reflecting on the doing; therefore, reflection is necessary for learning 2. A second supporting theory is Schön’s Reflective Practitioner Theory, which states that reflection provides professionals with skills for solving complex, real-world problems and gaining a deeper understanding of the design problem 3. Reflection is closely linked to metacognition, which fosters the self-regulated learning that is so important for one’s career, in higher education, or for any new scenario 4, 5, 6, 7, 8. In particular, regular, repeated reflection promotes the development of metacognitive knowledge and skills 6, 9. Individuals who reflect and develop metacognitive skills tend to have self-directed, lifelong learning abilities, including assessment of the task at hand, evaluation of one’s skill level for completing the task, monitoring of task progress, and self-adjustment as needed 4, 5, 6, 7.

The present NSF-funded study implemented a unique method for cultivating reflection and metacognition among engineering students. It combines computer-based simulation for circuit design with reflective-thought prompts and was first implemented in a microelectronics course using the SPICE simulator 10. With microelectronics, students must analyze circuits with complex, non-linear components (e.g., diodes, transistors, logic gates), which is much more difficult than analyzing linear circuits introduced in physics courses. Therefore, after each quiz and the midterm exam, students used SPICE during the next class period to reflect on their performance by comparing their hand calculations to the simulated values. In this way, they could identify errors and improvement opportunities. Students essentially “re-took” the quiz or exam by building the circuit schematic in SPICE, setting various parameters, running the simulation, and identifying any differences between the simulated values and their initial quiz/exam answers. They then responded to the following reflective prompts in writing: “How is my solution different from the provided solution?” and “How can I use this information to improve my performance in the future?” (Benson & Zhu, 2015; Claussen & Dave, 2017).
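The comparison step students performed, checking each hand-calculated answer against its simulated value, can be sketched as follows. This is a hedged illustration only: the quantity names, the sample values, and the 5% tolerance are this example's assumptions, not part of the study's materials.

```python
# Illustrative sketch of the post-exam comparison step: flag answers whose
# hand calculation differs materially from the simulated value.
# Names, values, and the 5% tolerance are assumptions of this example only.

hand_answers = {"V_out": 2.35, "I_collector": 0.0011}   # student's exam values
simulated    = {"V_out": 2.41, "I_collector": 0.0019}   # values from the simulator

def discrepancies(hand, sim, rel_tol=0.05):
    """Return the quantities whose relative error exceeds rel_tol."""
    flagged = {}
    for name, h in hand.items():
        s = sim[name]
        rel_err = abs(h - s) / abs(s)
        if rel_err > rel_tol:
            flagged[name] = rel_err
    return flagged

# Only quantities with a large relative error are flagged for reflection.
print(discrepancies(hand_answers, simulated))
```

Each flagged quantity would then become the subject of the written reflective prompts quoted above.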

Following this initial work with microelectronics, we applied several lessons learned in a separate course in digital circuits. Here, the same approach to promoting reflection using simulation and question prompts was implemented. One of the lessons learned was the need to scaffold students in the use of the simulation software by instructing them on the setup of the simulation. Based on focus group and survey results in the microelectronics course, students revealed their struggles with completing the reflective exercises due to the complexity and learning curve of SPICE, which is professional-grade software. Also, based on analysis of the students’ responses to the reflective prompt across the six quizzes and midterm exam in the microelectronics course, we investigated the use of a reduced amount of reflection in the digital circuits course. Here, the reflective exercise was given after the midterm exam, a higher-stakes assessment than the individual quizzes. With the microelectronics course, we suspected student fatigue in responding to the same reflective prompt after multiple quizzes, which may have been a limiting factor.

For the digital circuits course, which is the focus of this paper, we applied the method of using simulation to drive reflection using a different simulation environment. SPICE is not applicable to digital logic circuits. However, in the design of digital circuits, simulation tools are nonetheless used extensively. Typically, digital circuits are modeled using a hardware description language (HDL), such as VHDL or Verilog. The HDL models are then simulated using a logic simulation platform. In this study, we employed VHDL for modeling and ModelSim for simulation. Since logic circuits are often large in scale and complexity, logic simulation is used rather than transistor level simulation.

Digital logic courses are common, required parts of all electrical and computer engineering curricula. In these courses, students study topics ranging from Boolean algebra and logic gates to the fundamentals of computer organization. Based on the author’s experience, the topic that students struggle the most with is sequential logic circuits (e.g., flip-flops, memories, finite state machines, etc.). The reason students struggle with these topics is that sequential logic circuits require students to keep track of the inputs and state history. This differs from combinational logic, where the output is purely a function of the circuit inputs. The added complexity that students face in analyzing sequential logic circuits is illustrated in Figure 1. This figure shows one of the most fundamental sequential logic circuits, an RS-latch. For this circuit, the output nodes (Q and QB) are fed back to the inputs of the logic gates that produce the outputs. Thus, for a student to determine the output at any point in time, he/she must know what the inputs (R and S) are as well as the outputs from the previous state (e.g., Q(t-1)). This can be very tricky for students to analyze even in the simple case of Figure 1. It becomes significantly more challenging when the complexity of the circuit increases (e.g., flip-flops, registers, counters, etc.). Simulation tools are of great assistance to students in these cases, as they provide a simple means to visualize the transient behavior of circuit inputs and outputs over time, as well as rapidly explore various input scenarios.

https://typeset-prod-media-server.s3.amazonaws.com/article_uploads/118a02e4-f3d9-4359-9212-44481abc3794/image/af560951-e468-4163-b537-ed306b17879a-ua4-fig1.png
Figure 1: Depiction of an RS latch, a fundamental sequential logic circuit. Student difficulty in analyzing such circuits comes from the need to know not only the inputs R and S, but also the state history of the output, Q(t-1). The difficulty of analysis is compounded by the number of logic gates in the circuit.
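The state-dependence described above can be made concrete with a short behavioral sketch. The study used VHDL models simulated in ModelSim; the Python model below is only an illustration by this editor, and it assumes a NOR-based latch (a NAND form is equally common). It iterates the cross-coupled gates until the outputs settle, showing why Q(t-1) matters.

```python
# Behavioral sketch of an RS latch like the one in Figure 1.
# Q depends on the inputs R, S AND the previous outputs (the state
# history Q(t-1) discussed above). Gate structure assumed NOR-based.

def nor(a, b):
    return int(not (a or b))

def rs_latch(r, s, q_prev, qb_prev):
    """Iterate the cross-coupled NOR gates until the outputs settle."""
    q, qb = q_prev, qb_prev
    for _ in range(4):                 # a few passes reach steady state
        q_next  = nor(r, qb)
        qb_next = nor(s, q_next)
        if (q_next, qb_next) == (q, qb):
            break
        q, qb = q_next, qb_next
    return q, qb

# Set (S=1, R=0), then hold (S=0, R=0): the output retains Q=1.
q, qb = rs_latch(r=0, s=1, q_prev=0, qb_prev=1)
print(q, qb)   # after set: 1 0
q, qb = rs_latch(r=0, s=0, q_prev=q, qb_prev=qb)
print(q, qb)   # state held with both inputs low: 1 0
```

With both inputs low, the output is determined entirely by the previous state, which is exactly the behavior students find difficult to track by hand and which a waveform simulator makes visible.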

The goal of this work is two-fold. First, the authors aimed to improve student understanding of sequential logic circuits using the simulation-guided reflection method. A secondary goal was to improve the method itself by applying lessons learned from previous studies in microelectronics. Specifically, we aimed to improve the reflection method in two ways: 1) scaffold students to a greater degree in the use and setup of simulation for reflection, and 2) engage students to a greater degree in the use of reflection by establishing conditions conducive to reflecting, such as after a milestone event (e.g., a midterm exam). The following research questions are examined in this study:

  • RQ1) Do students reflect more deeply and broadly after milestone events?

  • RQ2) Do students perceive simulation-guided reflection as beneficial?

Literature Review

Reflection is defined as thinking about what one is doing, which is necessary for learning, since Kolb’s Experiential Learning Theory tells us that learning occurs through doing and reflecting on the doing (Bishop-Clark & Dietz-Uhler, 2012; Kolb & Kolb, 2009). A second relevant theory, Schön’s Reflective Practitioner Theory, states that reflection furnishes designers and other professionals with skills for solving complex problems, likely enabling deeper understanding of the problem (Schön, 1987). Reflection is closely linked to metacognition, which is an important component of an engineering education since it fosters self-directed, lifelong learning abilities, which are important for any new situation (Ambrose, 2013; Ambrose, Bridges, Dipietro, Lovett, & Norman, 2010; Jamieson & Shaw, 2019; Marra, Kim, Plumb, Hacker, & Bossaller, 2017; Steiner & Foote, 2017).

Unfortunately, despite the known benefits, reflection and metacognition are typically not formally cultivated as part of an engineering education. Education scholars have called this out and suggested that more research involving reflection and metacognition in the curriculum should be published (Ambrose, 2013; Ambrose et al., 2010; Csavina, Nethken, & Carberry, 2016; Cunningham, Matusovich, Hunter, & Mccord, 2015; Marra et al., 2017). Susan Ambrose called for “opportunities for reflection to connect thinking and doing,” since students learn only when they reflect on what they’ve done (5, pp. 17, 20). Ambrose continued, “Why, then, don’t engineering curricula provide constant structured opportunities and time to ensure that continual reflection takes place?” (5, p. 20).

Metacognition is the act of thinking about one’s thinking or knowing about one’s knowing. A metacognitive individual can adjust or control his or her learning through various self-regulating behaviors (Steiner & Foote, 2017; Turns, Mejia, & Atman, 2020). Metacognition therefore consists of two main components: knowledge of cognition and regulation of cognition.

In a classic article, three elements of the first component of metacognition (i.e., knowledge) were identified – knowledge of person, task, and strategy (Flavell, 1979). The second component of metacognition includes the self-regulating elements of planning, monitoring, and evaluating one’s work on a task (Cunningham et al., 2015). Fortunately, an instructor can intentionally and easily promote metacognitive skills through practices such as reflective writings and post-exam reviews by students (Ambrose et al., 2010; Schraw, 1998; Steiner & Foote, 2017). It has been recommended that metacognitive instruction be embedded directly within regular content lessons (Pintrich, 2002).

Both self-evaluation and self-adjustment are associated with self-reflective behavior (Ambrose et al., 2010; Zimmerman, 2002). Regular, repeated, reflection is important in the development of metacognitive knowledge and skills, and reflective questions requiring a written or verbal response can promote metacognition (Schraw, 1998; Steiner & Foote, 2017). Questions from the Exam Analysis and Reflection (EAR) technique were used as the basis for the reflective questions used in the present study (Benson & Zhu, 2015; Claussen & Dave, 2017). The EAR technique prompts students to reflect as follows: “How is my solution different from the provided solution?”, and “How can I use this information to improve my performance in the future?”

Turns, Atman, and colleagues are key researchers of reflection and have developed a survey as part of an NSF grant on reflection (Award No. 1733474), with the survey focused on student reactions and resistance to reflection (Mejia, Turns, & Roldan, 2020; Turns et al., 2020). They explain the importance of investigating these student reactions, as this information can be used to improve reflective exercises, identify why a reflective exercise may not be working as expected, and ultimately enhance engagement and knowledge gains (Mejia et al., 2020; Turns et al., 2020). They identified the following student reactions and contributing factors (among others): effort and time involved, competing obligations, perceived usefulness of reflection, optionality, comfort level, and perceived need and importance (Mejia et al., 2020). Turns and Atman are core team members of CPREE, or the Consortium to Promote Reflection in Engineering Education, which was funded by the Helmsley Charitable Trust [21].

Methods and Context

Course Methods

This study was conducted in a sophomore-level Digital Circuits course in the fall of 2020. The student population (N = 61) consisted of electrical and computer engineering majors. The structure of the course was typical, with two lectures per week plus an additional hands-on laboratory session. In the lab, students completed several assignments with HDL. By the time they were asked to reflect, students were very familiar with HDL. The assessments given in the course included homework, weekly quizzes, lab assignments, and three examinations. This study occurred during the COVID-19 pandemic; thus, the examinations were taken online, and assessments were open-book.

In our initial implementation of simulation-based reflection in a microelectronics course, students were asked to reflect after each of six quizzes and a midterm exam [10]. In the present course (i.e., digital circuits), the reflective exercise was reduced to a single administration, after the midterm exam. This was done to investigate the potential issue of student fatigue in responding to the same reflection question over time, which was believed to have been the case in the microelectronics course. Care was also taken to ensure that the reflection exercise was administered following a significant event (i.e., the midterm exam). This particular exam was selected because students had to demonstrate their knowledge of basic sequential logic circuits, which was the foundation for topics presented later in the course, including counters, finite state machines, memories, and datapath control.

There were key differences, as well as similarities, in the use of simulation-guided reflection in the two courses. First, the circuits analyzed by students in the digital circuits course did not require extensive mathematical calculations (i.e., calculus and differential equations). Rather, the analysis relied on a solid foundation in Boolean algebra and logic, along with intuition about the circuit’s intended operation. Second, the computer-aided simulation environment was different. Digital circuits are simulated using a hardware description language (HDL) along with a logic simulator, whereas analog circuits require SPICE for simulation. Describing circuits in an HDL requires students to craft both the components used and the overall simulation scenario. Users of SPICE rarely have to craft models of the components used in a simulation; they only create schematics and set parameters. While graphical entry tools do exist for digital circuits, students in this class were asked to model their circuits using plain-text VHDL files.

Reflective Exercise

The midterm exam contained 10 problems, primarily covering basic sequential logic circuits. Shortly after the midterm, the ungraded exams were returned to the students. Exams were returned ungraded so that the reflection exercise would not reduce to a simple comparison of right versus wrong answers. Rather, students were encouraged to revisit the steps they took to arrive at their answers and think critically about their results. Students were given guidance in using the simulation tool to reflect on each of the 10 problems. Participation in the reflection exercise was voluntary, and students who completed it were awarded extra credit.

Figures 2 through 5 illustrate the reflection process used in the digital circuits course, highlighting one of the 10 exam problems. The exam was administered online (Figure 2), and students worked through the problems using pen and paper before uploading their final responses (Figure 3). Since the emphasis of the exam was on sequential logic circuits, most problems were best solved by considering transient output signal waveforms before calculating final output values. Figure 4 shows the simulation guidance students were given during the reflection process. Similar guidance was provided for each of the 10 questions, along with VHDL templates and simulation scripts. This additional scaffolding was included in response to student feedback from the microelectronics course. There, students faced hurdles in using SPICE simulation (e.g., software issues, simulation setup) that were not relevant to the exercise at hand. Such hurdles were thought to overwhelm students and discourage participation in reflecting. Figure 4 also shows the results from a student simulation and the evaluation of the simulation results. Simulations were carried out using the ModelSim logic simulation environment. The simulation result provided a baseline against which students could compare their answers and re-evaluate their work. Finally, after carrying out similar analyses for each exam problem, students were asked to respond to the following reflective prompt:

Q: Please discuss anything you learned from completing this comparison exercise.

Figure 5 shows a reflection written by a student after completing the simulation exercise. The wording of the reflective prompt was carefully chosen so as not to bias or lead students in their responses. Some composed thoughtful, critical reflections, while others submitted responses that may be considered shallow or lacking in detail. Furthermore, some student reflections contained a great amount of detail but focused on content specific to the course material rather than on how they might improve as students. Due to the subjective nature of the responses, great care was taken in assessing them using structured qualitative methods. In section 3.3, the assessment methods used to categorize the responses accurately are described.

https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/29f6d7e8-3b37-4451-b15e-3790b5680b5eimage3.png
Figure 2: Example examination problem as given to students.
https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/29f6d7e8-3b37-4451-b15e-3790b5680b5eimage4.png
Figure 3: Example student response to the question shown in Figure 2 showing the student’s hand calculated results and output signal waveform predictions
https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/29f6d7e8-3b37-4451-b15e-3790b5680b5eimage5.png
Figure 4: Guidance provided to students to set up the simulation scenario and example student simulation result.
https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/29f6d7e8-3b37-4451-b15e-3790b5680b5eimage6.png
Figure 5: Student reflection after completing the simulation.

Assessment of Student Reflections

A qualitative analysis of the responses to the reflective prompt was conducted by two analysts (i.e., author and co-author). The prompt was as follows: Please discuss anything you learned from completing this comparison exercise. The analysis used a rubric to assess the depth of each reflection as well as a coding scheme to categorize each reflection as broad, specific, or both. The level/depth rubric was obtained from the literature and consists of four categories: 1) non-reflection, 2) understanding, 3) reflection, and 4) critical reflection (Kember, Mckay, Sinclair, & Wong, 2008). A level 1 statement (i.e., non-reflection) is characterized by a lack of serious thought or a lack of evidence of understanding of a concept or theory. A level 2 statement exhibits understanding of a concept or topic, but the reflection is confined to theory or textbook material without relation to real-life matters. A level 3 statement exhibits personal insights that extend beyond book theory by discussing practical situations. A level 4 statement, which occurs rarely, exhibits evidence of a change in perspective surrounding a fundamental belief in the understanding of a concept.

The coding scheme of Table 1 was used to characterize each reflection as broad, specific, or both. This coding scheme was adapted from earlier work by the authors (Dickerson, Clark, & Jiang, 2020). The “specific” versus “broad” categorization might be compared to the concepts of “near” versus “far” transfer. “Near” transfer occurs when the new setting or context in which one’s learning or skills are applied is similar to the original setting, and “far” transfer occurs when skills are used in a broader range of applications or dissimilar contexts (Ambrose et al., 2010; Marra et al., 2017).

Table 1: Assessment of Reflection Content

| Category | Description                                                                     |
|----------|---------------------------------------------------------------------------------|
| Broad    | Need for care/thought in one’s work; think before answering                     |
|          | Confidence enhanced                                                             |
|          | Want to learn from mistakes / avoid in future                                   |
|          | Review work multiple times                                                      |
|          | Review/reflect on work to fully understand or verify, including with simulator  |
|          | Review to refresh knowledge                                                     |
|          | More time/effort needed for study/review                                        |
| Specific | Enhanced understanding or application of course content, including analysis methods |
|          | Identification of errors, including mathematical                                |
|          | Simulator knowhow or knowledge                                                  |

All reflections were double coded to ensure reliability. The authors independently analyzed and coded all reflections. They then compared their codes and engaged in discussion to reach consensus when there was initial disagreement. The inter-rater reliability based on the intra-class correlation coefficient (ICC) for the numerical depth ratings was 0.965 based on average measures and 0.933 based on single measures, which are each associated with excellent reliability (Fleiss, 1986; Lexell & Downham, 2005).
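As a quick consistency check, a single-measures ICC and an average-measures ICC across k raters are related by the Spearman-Brown prophecy formula. A minimal sketch (pure Python, assuming the two raters described above; the function name is ours, not from the original study):

```python
def average_measures_icc(single_icc: float, k: int) -> float:
    """Spearman-Brown prophecy formula: average-measures reliability
    across k raters, given the single-measures ICC."""
    return k * single_icc / (1 + (k - 1) * single_icc)

# Two raters (author and co-author) double coded every reflection.
single = 0.933                                # reported single-measures ICC
avg = average_measures_icc(single, k=2)
print(round(avg, 3))                          # 0.965, matching the reported value
```

The reported 0.933 (single) and 0.965 (average) values are thus internally consistent for two raters.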

Comparison of Final Exam vs Reflection Depth

Statistical analyses were carried out to determine whether a relationship existed between final exam score and the depth to which students had reflected in the post-midterm reflection exercise. This analysis used Welch’s F-test, a variant of analysis of variance that does not assume equal variances. The analogous non-parametric test, the Kruskal-Wallis test, was also run, given the small sample size associated with one of the depth levels. A similar analysis was conducted between final exam score and participation (yes/no) in the post-midterm reflection exercise. This analysis used an independent samples t-test, which was corroborated by the analogous non-parametric test, the Mann-Whitney test (Norusis, 2005).

Follow-up Survey

After the conclusion of the course, a short, anonymous follow-up survey was administered. The purpose of the survey was to assess the impact of the reflective exercise as perceived by students several months later. A second purpose was to determine the reasons why students chose not to participate in the exercise, since approximately half of the students had not participated. A list of possible reasons for not participating was presented to students. These reasons were informed by recent research on student reactions to and resistance towards reflection in the engineering classroom (Mejia et al., 2020). Students who were enrolled in the course were contacted via e-mail approximately 8 months after the course ended. Students were reminded of the exercise through images that were embedded within the survey. Students were asked the following survey questions:

  • Did you submit the simulation-based reflection exercise after the midterm? (Yes/No/Don’t Recall)

  • If “Yes,” indicate the degree to which the reflection exercise was beneficial to you as a student. (1-Not at all, 2-Low benefit, 3-Neutral, 4-High benefit, 5-Very high benefit)

  • If “No,” please indicate your primary reason for not completing the reflection exercise:

    • Amount of effort or time involved to complete it, or a lack of time on my part.

    • The reflection exercise required me to write.

    • It was an optional assignment, or I was doing well in this course at that time, so I didn’t need to participate.

    • The reflection exercise has minimal usefulness for this course or for my engineering education in general.

    • The reflection exercise made me go outside my comfort zone or feel exposed.

    • Other (textual entry allowed)

Results

Assessment of Student Reflections

Table 2 summarizes the results of the analysis of student reflections for content and depth in both the microelectronics and digital circuits courses. For the digital circuits course, 83% of the submitted reflections after the midterm exam contained content characterized as having broad implications, while 41% had specific implications. These percentages aligned with the results obtained after the midterm exam in the microelectronics course, where 74% of students’ responses contained broad content and 45% contained specific content. However, following the two quizzes in the microelectronics course, the percentages of broad responses were much smaller at 55% and 41%, respectively. These proportions were each significantly different from the proportion of responses classified as broad after the midterm exam in the digital circuits course (i.e., 83%). This was based on a z-test of proportions, with p = 0.009 and p < 0.0005 associated with quiz 3 and quiz 6, respectively. This result indicates that the perceived importance of the event preceding the reflection (i.e., a midterm exam) may impact the degree to which students think broadly about themselves, their preparation, and their performance. Thus, reflection after a milestone event, such as a midterm exam versus a quiz, may encourage students to reflect more broadly and generally.
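The z-test of proportions referenced above can be reproduced from the counts given in the text (24/29 broad responses after the digital circuits midterm, versus 38/69 after quiz 3 and 21/51 after quiz 6). A minimal pure-Python sketch of a pooled two-proportion z-test (our own implementation, not necessarily the exact software the authors used, but it closely reproduces the reported p-values):

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided pooled z-test for the difference of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))          # two-sided normal tail
    return z, p_value

# Digital circuits midterm (24/29 broad) vs. microelectronics quiz 3 (38/69)
z3, p3 = two_proportion_ztest(24, 29, 38, 69)
# Digital circuits midterm (24/29 broad) vs. microelectronics quiz 6 (21/51)
z6, p6 = two_proportion_ztest(24, 29, 21, 51)
print(p3 < 0.01, p6 < 0.0005)   # consistent with the reported p = 0.009 and p < 0.0005
```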

Similar outcomes were found with the depth coding. In the digital circuits course, the average depth level of the post-midterm reflections was 2.83, whereas it was 2.69 in the microelectronics course. As shown in Table 2, the depth averages after the two midterm exams were each higher than the depth averages after the two quizzes in the microelectronics course (i.e., 2.34 and 2.20, respectively, for quiz #3 and quiz #6). This suggests that reflection after a milestone event such as a midterm exam, versus a quiz, may also be successful in motivating students to reflect to a greater depth.

Table 2: Summary of Reflections

| Course           | Reflection After | n  | Average Depth | Broad # (%) | Specific # (%) |
|------------------|------------------|----|---------------|-------------|----------------|
| Microelectronics | Quiz #3          | 69 | 2.34          | 38 (55%)    | 37 (54%)       |
| Microelectronics | Midterm          | 82 | 2.69          | 61 (74%)    | 37 (45%)       |
| Microelectronics | Quiz #6          | 51 | 2.20          | 21 (41%)    | 27 (53%)       |
| Digital Circuits | Midterm          | 29 | 2.83          | 24 (83%)    | 12 (41%)       |

Upon running a Welch’s analysis of variance test, significant differences were found in the reflective depth averages across the four assessments (p < 0.0005) (Norusis, 2005). Based on the Games-Howell paired comparisons test, there was a significant difference in depth between each midterm reflection and each quiz reflection. In Figure 6, ELEC MID and DL MID refer to the microelectronics and digital logic/circuits midterm reflections, respectively. ELEC Q3 and ELEC Q6 represent the microelectronics quiz 3 and quiz 6 reflections, respectively. Thus, as shown in Figure 6, DL MID differed from each of Q3 and Q6, since the confidence intervals for the differences did not contain zero. The same was true for ELEC MID, which differed from each of Q3 and Q6, since these confidence intervals also did not contain zero.

https://typeset-prod-media-server.s3.amazonaws.com/article_uploads/118a02e4-f3d9-4359-9212-44481abc3794/image/f21d6884-8009-4fd0-8ff3-16da763c51f3-uf4_asee_coed_13_1_4.png
Figure 6: Paired Comparisons of Depth Level

Examples of level 2 and level 3 responses are given below. There were no non-blank level 1 reflections.

  • (level 2) “I learned how clock cycles are supposed to work (I was confused on the exam). I learned that the critical path is the fastest path possible in a circuit, I didn’t realize that included an undefined answer. I also learned how clock cycles can be triggered by different things and how a line of multiple different d-flip-flips are triggered in a row.”

  • (level 3) “I learned a lot by completing the comparison of problem 4. The critical path delay, I assumed the critical path delay of the adders could be added together with no consequence. By forgetting the limitations of the inputs I really shot myself in the foot. This is a classic example of moving too quickly without really thinking about the question. We did several of these in class, so my brain wrote them off as basic and not worthy of my attention. Going too quickly and ignoring critical information has tripped me up many times before and it quite difficult to prepare for in my opinion. Despite this, I hope to correct this type of mistake on the next exam and in the future in general.”

Exam Performance vs Reflection Depth and Participation

Table 3 summarizes the results of the analysis of final exam average score in the digital logic course versus each student’s reflective depth level after the midterm exam. The final exam score for this analysis was based on four problems that were most directly related to the content of the midterm exam and post-midterm reflection. As shown in the table, there were no significant differences in exam scores across the three reflective depth levels. Based on Welch’s F-test, the p-value was 0.53. This result was corroborated by the non-parametric Kruskal-Wallis test (p = 0.66). Although a greater reflective depth level was hypothesized to be associated with a significantly higher final exam score, this was not the case.

Table 3: Final Exam Average vs. Reflective Depth Level (Digital Circuits)

| Depth Level | n  | Mean | Std. Dev. |
|-------------|----|------|-----------|
| 1           | 31 | 43.8 | 4.8       |
| 2           | 5  | 41.3 | 6.7       |
| 3           | 24 | 42.1 | 7.6       |
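Because Table 3 reports only group sizes, means, and standard deviations, Welch's F statistic can be recomputed directly from those summary statistics. A sketch in pure Python (our own implementation of the standard Welch formula; the p-value itself requires an F-distribution CDF, which the standard library lacks, so only the statistic and degrees of freedom are computed):

```python
def welch_anova_from_summary(groups):
    """Welch's F statistic and degrees of freedom from (n, mean, sd) per group."""
    k = len(groups)
    w = [n / sd**2 for n, m, sd in groups]                 # precision weights n / s^2
    W = sum(w)
    grand = sum(wi * m for wi, (n, m, sd) in zip(w, groups)) / W
    num = sum(wi * (m - grand) ** 2 for wi, (n, m, sd) in zip(w, groups)) / (k - 1)
    tail = sum((1 - wi / W) ** 2 / (n - 1) for wi, (n, m, sd) in zip(w, groups))
    denom = 1 + (2 * (k - 2) / (k**2 - 1)) * tail
    df2 = (k**2 - 1) / (3 * tail)                          # approximate denominator df
    return num / denom, k - 1, df2

# (n, mean, sd) for depth levels 1-3 from Table 3
F, df1, df2 = welch_anova_from_summary([(31, 43.8, 4.8), (5, 41.3, 6.7), (24, 42.1, 7.6)])
print(round(F, 2), df1, round(df2, 1))   # 0.65 2 10.7
```

An F near 0.65 on roughly (2, 10.7) degrees of freedom is a small, non-significant value, consistent with the reported p = 0.53 (which was presumably computed from the raw scores).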

A similar analysis was run to identify any differences in the average final exam scores based upon whether the student participated (or not) in the reflective exercise (Table 4). Approximately half of the students participated by submitting their work with the VHDL simulator and a written reflective response. There was no difference found in the two exam averages based on participation, with p = 0.29 from an independent samples t-test. The result based on the Mann-Whitney test was p = 0.49.

Table 4: Final Exam Average vs. Participation in Reflection Exercise (Digital Circuits)

| Participation | n  | Mean | Std. Dev. |
|---------------|----|------|-----------|
| No            | 27 | 43.8 | 4.9       |
| Yes           | 33 | 42.2 | 7.1       |
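The Welch (unequal-variances) t statistic for this comparison can likewise be recomputed from the summary statistics in Table 4. A sketch in pure Python (our own helper; as above, the p-value needs a t-distribution CDF, so only the statistic and Welch-Satterthwaite degrees of freedom are reported):

```python
import math

def welch_ttest_from_summary(n1, m1, s1, n2, m2, s2):
    """Welch's t statistic and Welch-Satterthwaite df from summary statistics."""
    v1, v2 = s1**2 / n1, s2**2 / n2            # per-group variance of the mean
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# Non-participants (n=27, mean 43.8, sd 4.9) vs. participants (n=33, mean 42.2, sd 7.1)
t, df = welch_ttest_from_summary(27, 43.8, 4.9, 33, 42.2, 7.1)
print(round(t, 2), round(df, 1))   # 1.03 56.5 (a small t, consistent with p = 0.29)
```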

Follow-up Survey

Approximately 58% of enrolled students responded to the follow-up survey. Of the students who responded, 52% reported that they completed the exercise, 21% said they had not completed it, and 27% did not recall. The percentage who reported to have completed the exercise aligned with the actual percentage of students who had participated in the exercise.

Of those who reported having completed the exercise, the results in Table 5 were obtained in response to the following question: Indicate the degree to which the reflection exercise was beneficial to you as a student. As shown in Table 5, 80% indicated that the reflective exercise was of high or very high benefit to them as students. This is an encouraging indication that students still perceived value in the reflective exercise several months after experiencing it.

Table 5: Indicate the degree to which the reflection exercise was beneficial to you as a student

| Answer                | %     | n  |
|-----------------------|-------|----|
| Not beneficial at all | 6.7%  | 1  |
| Low benefit           | 0.0%  | 0  |
| Neutral               | 13.3% | 2  |
| High benefit          | 73.3% | 11 |
| Very high benefit     | 6.7%  | 1  |

Sample quotes from students who perceived high or very high benefit from the exercise are as follows:

  • “It gave me an opportunity to identify and fix the gaps in my understanding.”

  • “I only recall doing the simulation on one problem and it greatly changed how I looked at the problem. The problem regarding latency was extremely important and I am actually using that information in my research now, so I would consider that experience to be very important.”

Of those who reported not having completed the exercise, the results in Table 6 were provided in response to the following question: Please indicate your primary reason for not completing the reflection exercise. Although one student indicated “other,” the reasons the student listed in the text entry box directly corresponded to two categories already listed in the response options. Thus, the counts for the two pre-existing categories were updated and are given in Table 6. The total response count is therefore one more than the number of students who responded to this question. The two reasons stated by the students for not participating were related to 1) the time and effort involved, and 2) the optional nature of the assignment and/or a perceived lack of need to participate. Fortunately, these are conditions or perceptions that can be adjusted by the instructor so as to encourage, motivate, and enable reflection by all students.

Table 6: Reasons for not Completing the Reflection Exercise

| Reason                                                                                                        | %     | Response Count |
|---------------------------------------------------------------------------------------------------------------|-------|----------------|
| Amount of effort or time involved to complete it, or a lack of time on my part.                               | 57.1% | 4              |
| The reflection exercise required me to write.                                                                 | 0.0%  | 0              |
| It was an optional assignment, or I was doing well in this course at that time, so I didn’t need to participate. | 42.9% | 3              |
| The reflection exercise has minimal usefulness for this course or for my engineering education in general.    | 0.0%  | 0              |
| The reflection exercise made me go outside my comfort zone or feel exposed.                                   | 0.0%  | 0              |
| Other                                                                                                         | 0.0%  | 0              |

Discussion

In this paper, the method of using computer-aided simulation tools to drive written reflections was applied to a digital circuits course using a logic simulator (i.e., ModelSim) and VHDL. Previously, this same method was applied in a microelectronics course using SPICE [10]. In addition to adapting the method to a new course, the simulation-guided reflection process was improved. Specifically, students were provided with additional scaffolding in the use of the tools for reflection. Also, the frequency of the reflective exercise was reduced, the reflection exercise was associated with a milestone event (i.e., the midterm exam), and the reflection prompt was simplified to allow for a wider range of student responses.

To address RQ1, Do students reflect more deeply and broadly after milestone events?, the reflective exercise after the midterm exam was assessed for depth and content by the authors, and the results were compared to the previous study of the microelectronics course. The average depth of the reflections was greater with the digital circuits midterm compared to the microelectronics midterm and quizzes. This suggests that the combination of reduced frequency of reflection, simplified prompting, and deployment after a milestone event may have been successful in having students reflect to a greater depth and more broadly in the digital circuits course versus the microelectronics course.

Student exam scores versus reflective depth level and participation were analyzed with ANOVA and a t-test, respectively, in the digital circuits course. No statistically significant differences were found in exam scores based on either depth level or participation. However, this does not imply that the reflection exercise was without benefit. This is supported by results from the follow-up survey, where 80% of students who completed the exercise indicated that it was of high or very high benefit to them. Since approximately half of the students chose not to participate in the reflective exercise, students were asked in the follow-up survey to indicate their primary reasons for not participating. The results revealed that the primary reasons were the amount of time and effort required to complete the exercise and students feeling it was not necessary for them to do so. These results address RQ2 (Do students perceive simulation-guided reflection as beneficial?).

Limitations

There are some limitations to this work. The conclusion about fatigue in the microelectronics course was based on the instructor’s observation and assessment; it could have been confirmed by asking students at the end of that course whether fatigue became an issue for them. For this reason, the follow-up survey was sent to students in the digital circuits course to explore their perceptions about reflection. We recommend obtaining students’ perceptions of and reactions to reflection, in line with the research currently underway by Turns et al. and Mejia et al. [15, 20], which was discussed in the literature review.

Conclusions

The implementation of simulation-based written reflection in digital circuits following its initial implementation in microelectronics was encouraging. This was indicated by the greater average reflective depth levels, increased percentage of broad (vs. specific) responses, and student responses to the follow-up survey. This work demonstrated that the simulation-driven reflection method could easily be adapted to topics outside of microelectronics. Also, since simulation tools are common to all engineering disciplines, courses from outside electrical and computer engineering can likewise adopt this method.

There are several areas where future implementations may improve and build upon our initial work with reflection. The first recommendation is to ensure that reflective exercises are deployed after milestone events, such as examinations. Also, many students did not participate in the reflection exercise because they felt it would not significantly impact their grade in any way or otherwise was not necessary. However, reflection is beneficial for all students for their development as engineers, regardless of current performance or prior achievement. In order to increase participation, it is suggested that reflection after milestone events be made mandatory or otherwise highly rewarding in terms of recovering points to incentivize participation.

In using simulation-based reflection, it is important that the instructor strike a balance between the frequency of reflection and student workload or potential fatigue. One way to achieve this is a combination of optional and mandatory reflection exercises throughout the semester: for example, optional reflection opportunities after quizzes to recover lost points, together with mandatory reflection after higher-stakes exams, so that all students reflect at some point during the term. It is critical that instructors scaffold students in the specific use of the simulation tool for reflection, including the setup of the simulation scenario. For example, in this work, VHDL template files were provided for students to input their calculated circuit parameters. Students also benefited from guidance in what to look for in the simulation results when comparing them to their hand calculations.

An important question for future research is the optimal amount of reflection to request of students: the amount that balances benefit with possible fatigue.

The post Implementation of Lessons Learned to Simulation-Based Reflection in a Digital Circuits Course appeared first on ASEE Computers in Education Journal.

Generating a Classroom Pulse from Active Windows on Student Computers
https://coed-journal.org/2022/12/30/generating-a-classroom-pulse-from-active-windows-on-student-computers/ Fri, 30 Dec 2022 22:01:58 +0000 https://coed-journal.org/?p=4303

With technology embedded in an increasing number of educational contexts, it is prudent to identify ways in which instructors can leverage technology to benefit their pedagogical practices. The purpose of this study was to determine if information about students’ active windows on their personal computers could provide actionable information to inform real-time instructional interventions and post-lecture reflection on practices. The active window approach mitigates issues with prior data collection methods and provides an opportunity to capture complete, real-time student computer usage without the need to install spyware. Based on observing 68 first-year engineering students and 32 second-year engineering students in large engineering lectures, we generated error rates of 4.28% with a 95% confidence interval of [2.81%, 6.04%] in a structured computer use course setting and 6.89% with [4.42%, 10.17%] in a semi-structured use setting. To illustrate the type of information active window monitoring could provide, we captured active window data from 135 students every 12 seconds for an entire 75-minute lecture. The data was averaged to generate a timeline which provided insight into how students responded to the instructor’s methods. This research has immediate practical implications in course design, instructional strategies, and engineering education research methods.

The post Generating a Classroom Pulse from Active Windows on Student Computers appeared first on ASEE Computers in Education Journal.


Introduction

Imagine standing in a large lecture hall and glancing around to gauge whether students are grasping the lecture concepts. However, rather than observing students nodding in agreement or shaking their heads in confusion, the raised lids of open laptop computers greet you. Instead of garnering a quick comprehension check, you are left wondering, “Are students paying attention or are laptops hurting learning?” As large classes become more prevalent and schools increasingly implement college-wide computing initiatives, this is the reality for numerous instructors. From one-to-one initiatives (Hayhurst, 2018; Richardson et al., 2013) to Bring Your Own Device (BYOD) requirements (Siani, 2017) in both K-12 and higher education, personal computers are embedded into educational contexts. Personal computers have been heralded for enabling interaction and supporting technology-centered instructional activities such as electronic content delivery, interactive polling, course management, and interactive software mentoring (Campbell & Pargas, 2003; Tront, 2007). However, the advantages of laptops in the classroom are often accompanied by the disadvantage of student inattentiveness. Laptops allow students to engage in media multitasking, swapping between a myriad of distracting activities including social media, gaming, and email (Langan et al., 2016; Wammes et al., 2019). Additionally, students who engage in off-task laptop activities distract neighboring students (Hall, Lineweaver, Hogan, & Brien, 2020).

Over the years as technology has become increasingly embedded into classrooms, researchers have tried to provide instructors with data to understand how laptop usage in the classroom impacts learners. From personal digital assistants to laptops, tablets and cell phones, researchers have documented both positive learning effects (Barak, Lipson, & Lerman, 2006; Doolen, Porter, & Hoag, 2003; Lohani, Castles, Lo, & Griffin, 2007; Roth & Butler, 2016; Samson, 2010; Shaw, Kominko, & Terrion, 2015) and negative learning effects (Carter, Greenberg, & Walker, 2017; Fried, 2008; Hembrooke & Gay, 2003; Junco, 2012; Kraushaar & Novak, 2010; May & Elder, 2018; Wood et al., 2012; Zhang, 2015) related to personal technology usage in the classroom. With no clear consensus regarding the impact of personal technology on learning in the classroom, researchers continue to tease out details of how student computer usage impacts learning by, for example, quantifying the amount of off-task activity in classrooms (Ragan, Jennings, Massey, & Doolittle, 2014), examining how non-academic applications like Facebook are used by students during lectures (Judd, 2014), and investigating how laptop bans impact learning (Elliott-Dorans, 2018). One trend that has emerged is that, for classrooms where instructors structure students’ computer use, learning impacts are typically positive (Downs, Tran, Mcmenemy, & Abegaze, 2015; Kay & Lauricella, 2011). That is, when instructors design the course to incorporate purposeful and deliberate computer usage, the impact of computer usage on learning tends to be positive. When student computer use is unregulated, research results are both positive and negative with regard to learning impacts. This finding should motivate instructors to embrace technology in their classrooms and learn how to use technology for their advantage.

One powerful tool that would allow instructors to use student computers to their advantage is a system that reveals the pulse of large lectures. By capturing data that is similar to, but more accurate than, glancing around the classroom to gauge who is engaged, instructors could react in real-time to encourage more participation from students. Instructors could use a learning pulse monitor to time instructional interventions to promote active, engaged learning. We hypothesize that the active window information from student computers could provide the requisite data for determining a real-time classroom learning pulse. This study uses observational research in two different large engineering lecture courses over one semester to quantify the amount of error in using active window as a proxy for student attention. Then, we capture active window data electronically to illustrate how a learning pulse monitor could provide actionable information to an instructor for both real-time intervention and post-lecture reflection to improve instructional practices.

Why Active Window?

In information processing theory, there is a strong, direct link between attention and learning. This direct link is very clear in a quote from Mackintosh (1975): “The probability of attending to a stimulus determines the probability of learning about that stimulus” (p. 294). More recent studies have reached similar conclusions: humans learn about items that they attend to (Mitchell & Pelley, 2010). Robert Gagne has been credited with shifting the information processing discussion from the research lab to the practical realm of instructional design with his introduction of the Conditions of Learning (Gredler, 1997). Gagne’s (1965) original theory stipulates that there are nine instructional events that must occur for learning to take place, the first of which involves obtaining the learner’s attention. The instructional events do not guarantee learning will occur, but rather they support the learner’s internal mental processes. That is, each event is a necessary condition for learning to take place. While the theory has evolved somewhat since its introduction (Gagne, 1965; Gagne, 1977), attention has remained an initial event. Fleming (1987) succinctly explains why: “Quite simply, without attention [the first event] there can be no learning” (p. 236). To support computer users, who are students in our contexts, some suggest a need to design attention-aware systems that delay interruption by deferring alerts unrelated to the task at hand (Bailey & Konstan, 2006). Instead, we focus on how attention-aware systems could support instructional design. Specifically, we hypothesize that a student’s top-most, active window can be used to make a determination of that student’s attention and provide real-time data for instructors’ decision making.

Current assessment strategies used to measure student computer use in classrooms are limited. Existing research studies have explored student attention through self-reported survey data, internet activity monitoring, or the installation of Spyware software. Survey data, by far the most common data collection method, does not provide the data resolution needed to generate a real-time learning pulse. Internet monitoring provides an incomplete characterization due to missing data related to non-internet activity (e.g., local applications). Spyware would provide the requisite data; however, significant privacy concerns and installation issues have plagued studies attempting to utilize Spyware (Kraushaar & Novak, 2010; Kraushaar, Chittenden, & Novak, 2008). By relying on active window data, particularly data captured as a binary on-task and off-task determination, we attempt to balance student expectations for privacy and the need for capturing real-time data.

In courses that use classroom learning technology in order to communicate with student laptops via a server (e.g., DyKnow Vision or Classroom Presenter), active window data could be captured directly through the software. Specifically, if a student’s top-most window contains the course material, the student is paying attention (i.e., on-task) (Figure 1). If any other application is the top-most, active window (e.g., Figure 2 , Figure 3 ), the student is not paying attention (i.e., off-task). The assumption that active window indicates attention to or distraction from lecture has been previously implied (Hembrooke & Gay, 2003; Kraushaar & Novak, 2010). However, the assumption has not been directly tested for reliability. It is clear there is error associated with the method. For example, consider the layout in Figure 3 where the classroom software and another application split the screen. While the active window (the window with the mouse focus) is not the course software, it is possible that the student could be paying attention to the lecture. Similarly, it is possible that in Figure 2 , the student has non-course software as the active window, but is viewing the slides on the classroom projector. There is a need to quantify the amount of error with the active window method to understand if active window data can provide actionable information. This study uses observational data of student computer usage within classrooms to quantify the error associated with the active window method.

https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/35224329-4c3b-4cfc-a899-ead19f2e9208image3.png
Figure 1: Example of Student Active Window (gray window) illustrating focus on course software [on-task]
https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/35224329-4c3b-4cfc-a899-ead19f2e9208image2.png
Figure 2: Example of Student Active Window (gray window) illustrating focus on web browser [off-task]
https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/35224329-4c3b-4cfc-a899-ead19f2e9208image1.png
Figure 3: Example of Student Active Window (gray window) illustrating focus on word processing software with side-by-side view of course software [off-task]
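The on-task/off-task determination illustrated in Figures 1-3 reduces to a single check on the top-most, active window. The following sketch shows the idea in Python; the window-title strings and the `is_on_task` helper are our own illustrative inventions, not part of DyKnow Vision or any other interactive learning software.

```python
# Illustrative sketch: a student is classified on-task when the active,
# top-most window belongs to the course software, and off-task otherwise.
# The title strings below are hypothetical examples, not DyKnow output.

COURSE_SOFTWARE = "DyKnow Vision"

def is_on_task(active_window_title: str) -> bool:
    """Return True when the top-most window is the course software."""
    return COURSE_SOFTWARE in active_window_title

# A split-screen layout (Figure 3) still counts as off-task under this
# rule, because only the window holding the mouse focus is considered.
print(is_on_task("DyKnow Vision - Statics Week 5"))  # on-task
print(is_on_task("Document1 - Word Processor"))      # off-task
```

Note that this binary rule is exactly what produces the false positives and false negatives quantified later: the classifier sees only the focused window, not where the student is actually looking.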

Method

Participants

The study was conducted at a large research university located in the Southeast United States. The university’s college of engineering has an established computer requirement resulting in a multitude of personal computers in classrooms. The college of engineering supports interactive learning software that establishes a communication link between instructor and student computers to facilitate the distribution of slides, polls, and other instructional activities. We purposefully selected courses in which the interactive learning software was integrated into instructional activities so that there was a clear on-task software window. Data were collected in six sections of a First-Year Engineering (FYE) course, one section of a Statics course (S), and one section of a Dynamics course (D) in the Fall semester. FYE observations and S&D observations were considered as two separate groups due to differences in the use of technology. In the FYE courses, computer use was strictly structured by the instructors. In the S&D courses, computer use was semi-structured; the instructor passed slides and annotations but otherwise usage was unregulated.

First-Year Engineering (FYE) Sections – Structured Computer Use

The six FYE sections were part of a first-year, first-semester course, and consisted primarily of freshman general engineering students. The sections were all large lectures with enrollment varying from 120 to 250 students and met once a week for 50 minutes in large auditoriums. The FYE sections had five different instructors (one instructor taught two sections), but covered identical content. The instructors had weekly coordination meetings during which a common slide deck was distributed. Students were required to bring personal computers to class and used the interactive learning software to receive lecture content and to interact with instructors. Students were given initial software training during the second week of classes. The instructors actively directed student computer use throughout the lecture period with polling questions, active exercises, and student work submission (Mohammadi-Aragh & Williams, 2013).

Statics and Dynamics (S&D) Sections – Semi-Structured Computer Use

The S&D sections included in this study were taught by the same instructor and had the same lecture format. The Statics section was a large lecture with 228 students and met in a large auditorium. The Dynamics section was the smallest lecture with 86 students and was taught in a large classroom. Both sections met for 75 minutes twice a week. The selected instructor used a Tablet PC to distribute slides and lecture notes in real-time to students via interactive learning software. Lecture notes were also projected in the front of the classroom. The lecture usually began with a review of student selected homework problems, was followed by a short lecture covering new concepts, and concluded with example problems. The instructor used the class roster to create an interactive environment by randomly calling on students to assist him when working problems.

Before enrolling in either S&D section, students completed the college of engineering FYE two-course sequence, which used the same interactive learning software described in Section 5.1.1. At the beginning of the semester, S&D students were told that they could use the interactive learning software to capture, annotate, and save lecture content. However, students were not required to use a computer, and lecture slides (i.e., the instructor’s DyKnow file) were posted at the conclusion of each class. Only students who brought a computer to class were included in the study.

Observations

We used observations of students’ behavior to collect student attention data and information about active windows on student computers. Direct observations of student behavior are a frequent and recognized method for determining student attention in educational, behavioral, and psychological research studies (Hoge, 1985; Rapport, Kofler, Alderson, Timko, & Dupaul, 2009). In determining attention, observations may focus on general behaviors, such as “on-task”, or specific behaviors, such as “playing with an object”. Focusing on general behaviors is recommended since significant and consistent evidence exists for the validity of general measures (Hoge, 1985). We used in-class, naturalistic observations, which are unobtrusive, covert observations during which the observer blends in with participants and does not affect behavior. Students were not informed that they were being observed in order to capture typical, unchanged student behavior.

Observations were conducted each week of the semester during FYE and S&D lectures. To increase validity of our estimates as a representation of total error rates, we selected students to observe using stratified random sampling. That is, we divided the class into sections (e.g., front, back, middle) and randomly sampled from each of these areas. Prior to the start of lecture, the observer would sit in a random location in the classroom, and select students whose computer screens were visible. To avoid data overlap due to neighboring students interacting, the selected students could not be sitting next to each other. Observations were conducted on a Tablet computer similar to students’ computers and the screen was shielded from nearby students. Throughout the semester, observers reported conversations with neighboring students that indicated their presence remained undetected (e.g., neighbors asked homework questions such as “What did you get for question 3?”).

Observers were trained and used an observation protocol to strengthen reliability. Figure 4 shows the observation protocol with sample data. The protocol guided observers to document student activity (Notes), the observer’s perception of student attention (A?), and the students’ top-most, active window (Window) at every minute during the lecture. Generally, a student was considered attentive if they were looking at course content or the instructor, discussing course content, working on instructor-assigned tasks, or listening to the instructor. In other words, a student was classified as on-task or attentive if they were participating in teacher-sanctioned activities (Hoge, 1985). For validity and reliability purposes, after each observation was completed, the protocol “Notes” field and the judgement of attention columns were reviewed by the research team.

https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/35224329-4c3b-4cfc-a899-ead19f2e9208image4.png
Figure 4: Observation protocol with sample data and gray highlighted mismatches

Analysis Technique for Observations

For every observed participant, the observer’s perception of attention (A? column) captured a timeline of observed student attention. The record of a student’s active window (Window) was analyzed to produce a timeline of measured student attention. Following the observations, both timelines were coded with a 1 representing “y” (paying attention) and a 0 representing “n” (not paying attention). As an example, for Student 1 in the observation protocol in Figure 4 , their observed student attention (OSA) would be 1-1-1-0 while their measured student attention (MSA) would be 1-1-1-1.

Every participant’s OSA and MSA were compared for mismatches, which are instances in the timelines where the OSA and MSA are not equal. A mismatch occurs when a student is observed to be attentive, but their active window is not course software (e.g., Figure 4 : Student 2, 9:47am). In this case, the mismatch is a false negative (Type II error) since MSA is 0 but OSA (actual attention) is 1. A mismatch also occurs when a student is observed to be distracted, but their active window is course software (e.g., Figure 4 : Student 1, 9:48am). In this case, the mismatch is a false positive (Type I error) since MSA is 1, but OSA (actual attention) is 0.
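The mismatch classification can be sketched as an element-wise comparison of the two coded timelines. This is a minimal illustration in Python (the function name is ours); the example input is Student 1's OSA and MSA from Figure 4.

```python
def classify_mismatches(osa, msa):
    """Compare observed (OSA) and measured (MSA) attention timelines.

    Returns one label per instance: 'match' when OSA equals MSA,
    'false_negative' (Type II) when OSA=1 but MSA=0, and
    'false_positive' (Type I) when OSA=0 but MSA=1.
    """
    labels = []
    for observed, measured in zip(osa, msa):
        if observed == measured:
            labels.append("match")
        elif observed == 1:          # attentive, but window says off-task
            labels.append("false_negative")
        else:                        # distracted, but window says on-task
            labels.append("false_positive")
    return labels

# Student 1 from Figure 4: OSA 1-1-1-0, MSA 1-1-1-1.
# The final instance is a false positive (Type I error).
print(classify_mismatches([1, 1, 1, 0], [1, 1, 1, 1]))
# -> ['match', 'match', 'match', 'false_positive']
```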

Observation notes were analyzed to determine the types of activities that produce error. The degree of validity was calculated as a mismatch error rate. For each student, the error rate (ER) was calculated as the number of mismatched instances (#MI) divided by the total number of observed instances (TOI), ER = #MI/TOI. Using the error rates for each group of students, we created 10,000 bootstrap samples with replacement in order to estimate the true mean error rate for each class type (i.e., structured versus semi-structured computer use).
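The per-student error rate and the bootstrap estimate of the mean error rate described above can be sketched as follows. This is a minimal illustration using hypothetical mismatch counts, not the study data; the study drew 10,000 resamples per class type.

```python
import random

def error_rate(num_mismatches, total_instances):
    """ER = #MI / TOI for one observed student."""
    return num_mismatches / total_instances

def bootstrap_mean_ci(rates, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap: resample rates with replacement n_boot
    times and return (mean of bootstrap means, 95% CI bounds)."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(rates, k=len(rates))) / len(rates)
        for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return sum(means) / n_boot, (lo, hi)

# Hypothetical (#MI, TOI) pairs for a small group of observed students.
rates = [error_rate(m, t) for m, t in [(1, 47), (11, 47), (0, 50), (2, 51)]]
mean, (low, high) = bootstrap_mean_ci(rates)
print(f"mean={mean:.3f}, 95% CI=({low:.3f}, {high:.3f})")
```

With a real sample of per-student error rates, the same routine yields the mean percent error and confidence intervals of the kind reported in the Results section.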

Electronic Active Window Monitoring for Classroom Pulse

There are a variety of ways active window monitoring could be implemented. In our case, we approached the developer for the interactive learning software used at our study site and asked them to generate the data. They incorporated a visual attention widget into the instructor panel. The widget provided instructors with a visual representation of student attention by monitoring students’ active, top-most window (Figure 5 ). The software assumed that, similar to the measured student attention from the observation protocol, if the active window on a student’s computer was the learning software, then the student was on-task. All other active windows indicated off-task behavior. We created a record of the widget’s output with screen capture software. We then processed the recordings with MATLAB’s image processing toolbox to create a spreadsheet file for analysis.

https://typeset-prod-media-server.s3.amazonaws.com/article_uploads/6eb3fce8-877a-4b34-bcb8-c1ea2508aac7/image/38c58c5f-1dbe-4119-9539-cf604f9faf3d-ufigure5_jean.png
Figure 5: DyKnow Vision’s on/off-task feature

Active windows were measured every 12 seconds for the entire lecture. For each time, average class attention was calculated by dividing the total number of attentive students (e.g., course software as top-most, active window) by the total number of students logged into the course software (Equation 1 ). The average class attention timeline was supplemented with information from observation notes and an audio recording of lecture in order to create a descriptive class timeline (i.e., start of class, start of homework review, start of new lecture material, start of practice problems related to new lecture material).

Average Class Attention = (total students with DyKnow active / total students logged into DyKnow) × 100        (1)
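Equation 1 amounts to an average over per-student active-window flags at each 12-second sample. A minimal sketch (the function and variable names are ours):

```python
def average_class_attention(active_flags):
    """Equation 1: percent of logged-in students whose active window
    is the course software at one sample time.

    active_flags: one boolean per student logged into the course
    software, True when it is the top-most, active window.
    """
    if not active_flags:
        return 0.0  # no students logged in at this sample
    return 100.0 * sum(active_flags) / len(active_flags)

# e.g., 3 of 4 logged-in students have the course software active:
print(average_class_attention([True, True, True, False]))  # 75.0
```

Applying this at every sample time produces the attention timeline plotted later in Figure 8.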

Results

Characteristics of Attentive and Inattentive Students

Thirty-four observation sessions were conducted across eight weeks of FYE lectures. Two students were observed during each session, providing a total of 68 FYE students (6.4% of the total students enrolled in the course). One student was excluded from analysis because their computer battery died before the end of the lecture. The FYE observations averaged 47 instances per observation.

Ten observation sessions were conducted across eight weeks of Statics lectures and six across five weeks of Dynamics lectures. Two students were observed during each session, providing a total of 32 S&D students (10.2% of the total students enrolled in the two courses). Of the 32 S&D students, two were excluded from analysis due to shortened observations (one Statics student left class early and one Dynamics student’s laptop battery died). The S&D observations averaged 70 instances per observation.

The observation “Notes” field was analyzed to determine the characteristics of a student who is paying attention versus a student who is not paying attention. Those characteristics are listed in Table 1. Characteristics are self-explanatory with the exception of Doodling. We note that Doodling (and listening) referred to students drawing simple patterns or sketches while occasionally looking at slides or the instructor. Doodling (and not listening) was used to indicate students who were engaged in intensive or elaborate drawings with body language suggesting deep concentration (e.g., head down and focused on artwork). The characteristics support the reliability of the observers’ determinations of attention, as they aligned with instructor expectations and literature-defined protocols for attentive and non-attentive students.

Table 1: Characteristics of Attentive and Inattentive Students

Paying Attention | Not Paying Attention
Listening to the instructor | Installing software
Looking at instructor | Texting
Taking notes / writing on slide | Talking to neighbor
Participating | Spacing out
Helping neighbor on assignment | Doodling (and not listening)
Submitting a slide | Surfing the web
Doodling (and listening) | Working homework
Looking at handout | Sleeping
Answering Poll/Question | Flipping through previous slides
Looking at projector | Reading newsfeed
Copying instructor’s notes | Checking email
Asking questions | Writing report

Participant Error Rates

Sources of error generated from the “Notes” field of the observation protocol are listed in Table 2 and are ordered from most frequent to least frequent overall. Using a second device to surf the web, email, or play a game (Reason B) was considered separate from texting (Reason C) since the length of activity was different. Not participating (Reason D) included activities such as ignoring the discussion, not advancing the slides, and reviewing past slides in order to “catch up”.

The primary and secondary sources of error were different between the two groups. In FYE sections, the primary source of error was using a second device while course software was open on the primary device (Reason B – 33 occurrences), and the secondary reason was texting (Reason C – 24 occurrences). Both these sources of error produce false positives since the active window data indicates that students are paying attention, but in reality they are not. By far the largest source of error for S&D was students leaving a non-course window open (e.g., a browser window) and looking up at the instructor and lecture slides in the front of the room (Reason A – 98 occurrences). This source of error produces false-negatives since the active window data indicates that students are not paying attention, but in reality they are attentive. The secondary source of error was students with their head down or sleeping (Reason G – 12 occurrences).

Table 2: Reasons for mismatches in OSA and MSA

Label | Reason for Mismatches | Total | FYE | S&D
A. | Student left browser/email open and looked at instructor | 112 | 14 | 98
B. | Using second device (computer/slate/phone) | 34 | 33 | 1
C. | Texting | 30 | 24 | 6
D. | Not participating | 24 | 18 | 6
E. | Student is talking to neighbor with course software open | 23 | 14 | 9
F. | Screensaver on | 20 | 17 | 3
G. | Head is down / appears to be sleeping | 18 | 6 | 12
H. | Student is working homework with course software open | 9 | 5 | 4
I. | Doodling | 4 | 2 | 2
J. | Looking up answers online | 2 | 2 | 0

For each participant, the error rate, primary reason for error, and total mismatches attributed to the primary reason are shown in Table 3. Reasons reference the labels given in Table 2. The student code indicates course (F – FYE, S – Statics, D – Dynamics), the observation week (01 – 11), and then the individual student code. In Table 3, the FYE student code represents the observed section (1 – 6) and then the student (01 or 02). For S&D, we only observed one section of each course, so there is no section code, and the student code indicates that the students were observed on Tuesday (01 and 02) or Thursday (03 and 04). All 97 codes indicate unique participants.

The primary reasons for students’ mismatches are distributed across all observation weeks and all observed sections. For an individual FYE student, the most common source of error was not participating (Reason D – 8 students). However, in many cases this source of error only produced a single mismatch (e.g., F03-201, F05-602). The two students with 10 or more mismatches both had second devices. F02-602 had 11 mismatches with 9 attributed to using a second computer. F06-102 had 20 mismatches with 15 attributed to playing games on a cell phone. For an individual S&D student, the most common source of error was leaving a non-course window open (e.g., a browser window) and looking up (Reason A – 16 students), or “checking in” with the lecture. The four students with more than 10 mismatches all engaged in “checking-in” behavior.

Estimate of Mean Error Rates

Based on the bootstrap analysis of the FYE data (Figure 6, left), the mean percent error is 4.28% with an estimated standard error of 0.82. The 95% confidence interval for FYE percent error is [2.81%, 6.04%]. The bootstrap analysis of the S&D data (Figure 6, right) produced a mean percent error of 6.89% with an estimated standard error of 1.51. The 95% confidence interval for S&D percent error is [4.42%, 10.17%].

https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/35224329-4c3b-4cfc-a899-ead19f2e9208image7.png
Figure 6: Bootstrap results for FYE and S&D

Real-time Electronic Classroom Pulse

Active window records were electronically captured every 12 seconds from 135 students in one 75-minute Statics lecture. The percentage of class time spent in the course software was calculated for each of the students, and the frequency distribution for all students is shown in Figure 7. The average percentage of on-task time varies across the entire frequency range. Twenty-eight students were in the 90-100% category, indicating they remained in the course software for nearly the entire lecture. Fourteen students were in the 0-10% category, indicating they were logged into, but not using, the course software for nearly the entire lecture. The remaining 93 students engaged in multitasking (i.e., switching between application windows).

https://typeset-prod-media-server.s3.amazonaws.com/article_uploads/6eb3fce8-877a-4b34-bcb8-c1ea2508aac7/image/ccc25751-c5cd-4368-b149-f3edb05094ce-ujean_2.png
Figure 7: Frequency Distribution of Percentage On-task for 135 Students in Tuesday’s Week Five Statics Lecture
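The per-student percentages behind Figure 7 can be derived from binary active-window records sampled over the lecture. A sketch with hypothetical records (the function names and data are ours):

```python
def percent_on_task(record):
    """Percent of samples with the course software as active window.

    record: one 0/1 (or False/True) entry per 12-second sample.
    """
    return 100.0 * sum(record) / len(record)

def bin_counts(percents, width=10):
    """Count students per 10% bin: index 0 -> 0-10%, ..., 9 -> 90-100%.
    A student at exactly 100% falls into the top (90-100%) bin."""
    counts = [0] * (100 // width)
    for p in percents:
        counts[min(int(p // width), len(counts) - 1)] += 1
    return counts

# Hypothetical records for three students over five samples each.
records = [[1, 1, 1, 1, 1], [0, 0, 0, 0, 0], [1, 0, 1, 0, 1]]
percents = [percent_on_task(r) for r in records]
print(percents)              # [100.0, 0.0, 60.0]
print(bin_counts(percents))  # [1, 0, 0, 0, 0, 0, 1, 0, 0, 1]
```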

The average class attention for the Statics lecture is plotted in Figure 8. The timeline is annotated based on a recording of the course. The instructor reviewed homework problems starting at 3:43pm and worked practice problems starting at 4:18pm. During the homework review and practice problem sessions, gray shading is used to indicate the start and end of different problems. The annotated average class attention timeline in Figure 8 gives a clear indication that instructors affect student attention. When new material was presented, there were peaks (i.e., local maxima) in attention. Furthermore, instructor statements such as “Pay attention” also promoted attention, but the effect was short-lived. Randomly calling on students while working practice problems may be a method of returning students to lecture, but another method must be used to prolong the increased engagement, as students returned to off-task activities once they realized that they were not selected.

https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/35224329-4c3b-4cfc-a899-ead19f2e9208image8.png
Figure 8: Timeline of Week 5 Activity and Percentage of Class in Interactive Learning Software

Discussion

The primary purpose of this study was to examine the validity of using students’ top-most, active window as a proxy for attention. With an average error of 4.28% or 6.89% depending on course type, this study provides strong evidence that active window can be a valid proxy for average classroom attention. Obviously, the final determination of acceptable error rates for future contexts should be made in consideration of the specific research or pedagogical questions under investigation. However, to give the reader perspective, until now instructors have primarily obtained student computer usage data through surveys. Even with a high response rate, survey data can be inaccurate because students’ memories may not match reality (Brener, Billy, & Grady, 2003) and grade-oriented students may underreport negative behavior. Kraushaar and Novak’s (2010) investigation directly comparing students’ self-reported computer use to computer use monitored by Spyware established that students underreported instant messaging use by 40%. Only 25% of students reported using instant messaging programs during class, but the Spyware record captured instant messaging use by 61% of the class. The error rates established in our investigation for the active window method are significantly less than the error rate for instant messaging self-reports established by Kraushaar and Novak.

While active window error rates are acceptable for an application producing a general classroom pulse, they may not be acceptable for applications requiring less error. For example, the active window method may not be appropriate for assigning participation grades. Data collection in this study occurred across two distinct types of computer-infused classrooms. Student characteristics given in Table 1 and sources of error in Table 2 allow for informed decisions regarding the appropriateness of using the active window technique in classrooms that differ greatly from the study context. For example, if an instructor has observed classroom behavior similar to the activities in Tables 1 and 2, the active window method may be considered appropriate. In the future, researchers and instructors can consider how technology is used in their classroom contexts to estimate the error for their particular situation.

Our decision to treat FYE and S&D as two different groups due to the use of technology was supported by the sources of error observed in the two groups. The primary and secondary reasons for mismatches in one group were not the primary or secondary reasons for the other group. In FYE courses (structured use), students were more likely to leave the course software active and use a secondary device. In S&D (semi-structured use), students were more likely to log into the course software and then switch to off-task activities on the same device. Then, S&D students would “check-in” with lecture by glancing up at the instructor and projector; students appeared to use the projector as a second monitor, and would only switch back to the course software if they decided to re-engage with the lecture by, for example, taking notes on the instructor distributed slides. The distinction in error types between FYE and S&D courses did not appear to be related to content or instructor, as reasons for error were similar across different sections, course timing, instructors, and weeks. Instead, the differences in error appear to be related to the instructor’s use of technology. Our results provide some evidence that instructional methods produce different student behaviors in computer-infused classrooms. Future studies could investigate whether this behavior is a natural change that occurs as students progress through degree programs, or if it is more directly related to technology policies for a course.

To further explore our hypothesis that an active window monitor could serve as a classroom pulse to generate actionable feedback for the instructor, we paired near real-time active window data (i.e., data collected every 12 seconds) with recordings of course lectures. Our results suggest an active-window-based classroom pulse could provide insight for both real-time intervention and post-lecture reflection. For example, a real-time intervention based on the timeline in Figure 8 could address waning attention during the middle of the lecture, approximately 4:07pm to 4:18pm. The pulse could alert the instructor that an active exercise should be executed to reengage students with lecture content. As another example, upon reflecting on the course at the end of the semester, the instructor could be motivated to reduce homework review time (low attention approximately 3:43pm to 3:54pm) in future semesters. The extra time could be dedicated to working practice problems, as students appear to be more engaged during that portion of the lecture. Based on our initial analysis conducted as part of the study and reported in this paper, a classroom pulse generated from active window data can show an instructor how their pedagogical techniques directly affect student engagement. We are conducting additional studies to determine how instructors respond to the data and how that subsequently affects student engagement and learning.
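One plausible way to turn the classroom pulse into a real-time alert is a rolling-average threshold on the attention percentage. The sketch below is hypothetical: the threshold, window size, function name, and sample data are our assumptions, not values or mechanisms from the study or from DyKnow.

```python
from collections import deque

def attention_alerts(pulse, window=5, threshold=50.0):
    """Yield sample indices where the rolling mean of average class
    attention (percent) drops below the threshold, suggesting a
    moment for an instructional intervention."""
    recent = deque(maxlen=window)  # sliding window of recent samples
    for i, value in enumerate(pulse):
        recent.append(value)
        if len(recent) == window and sum(recent) / window < threshold:
            yield i

# Hypothetical pulse: attention wanes mid-lecture, then recovers
# after an active exercise re-engages the class.
pulse = [80, 75, 70, 55, 45, 40, 38, 42, 65, 78]
print(list(attention_alerts(pulse)))  # -> [6, 7, 8]
```

A rolling mean rather than the raw sample avoids alerting on momentary dips (e.g., a single student switching windows), at the cost of a short detection delay.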

There are two primary limitations of the active window method. First, the active window method only provides information on whether a student is in course software or not. Second, the active window method can only be used in courses where there is a clear datum for on-task. Essentially, these two limitations combine to mean that a researcher cannot use the active window method without knowing the context of the computer use. As an example, if a portion of the lecture required students to complete an exercise on paper, then the computer active window would not be an indication of the classroom pulse. However, the instructor would be aware of this activity and surely not consider the pulse as an indication of attention.

Conclusion

As evident from the popularity of studies examining student computer use, instructors want to understand how students are using their computers in classes. We examined the appropriateness and application of monitoring the active window on student computers as a means for providing a pulse of classroom attention. To quantify error for the active window method, we observed students in two course types, structured computer use and semi-structured computer use. We quantified both false-positive and false-negative error for active window monitoring through observations of unmanipulated student behavior. The observations provided a listing of behaviors that observers classified as attentive or inattentive and a listing of behaviors that were associated with error. These listings will provide evidence to inform decisions as to whether the active window method is appropriate for alternate contexts.

In courses where students are required to use interactive learning software, electronically captured active window data has the potential to produce a real-time attention record for every student, as well as the average class attention, essentially creating a pulse for the classroom. By implementing data collection through existing interactive learning software, the method was much less invasive than spyware installations – data were only recorded during class times and no additional software was required. Active window monitoring has the potential to inform the timing of real-time instructional intervention and to help instructors improve their practice through post-lecture reflection.

Appendix

Table 3: Error Rates and Primary Reasons (Count) for Error in First-Year Engineering Lectures

| Student | MI | OI | Err. (%) | Reason | Student | MI | OI | Err. (%) | Reason |
|---|---|---|---|---|---|---|---|---|---|
| F02-601 | 1 | 47 | 2.1 | E (1) | F05-101 | 5 | 50 | 10.0 | C (3) |
| F02-602 | 11 | 47 | 23.4 | B (9) | F05-102 | 0 | 50 | 0.0 | |
| F02-501 | 2 | 51 | 4.0 | J (2) | F06-601 | 2 | 42 | 4.8 | A (2) |
| F02-502 | 0 | 52 | 0.0 | | F06-602 | 0 | 42 | 0.0 | |
| F02-201 | 0 | 41 | 0.0 | | F06-502 | 2 | 36 | 5.6 | E (2) |
| F02-202 | 5 | 42 | 11.9 | D (4) | F06-401 | 0 | 43 | 0.0 | |
| F03-201 | 1 | 39 | 2.6 | D (1) | F06-402 | 1 | 43 | 2.3 | F (1) |
| F03-202 | 1 | 39 | 2.6 | I (1) | F06-301 | 4 | 45 | 8.9 | C (3) |
| F03-401 | 0 | 50 | 0.0 | | F06-302 | 4 | 46 | 8.7 | C (4) |
| F03-402 | 3 | 52 | 5.8 | D (3) | F06-101 | 0 | 49 | 0.0 | |
| F03-301 | 0 | 47 | 0.0 | | F06-102 | 20 | 49 | 40.8 | B (15) |
| F03-302 | 6 | 47 | 12.8 | H (5) | F07-601 | 2 | 48 | 4.2 | F (2) |
| F03-101 | 0 | 51 | 0.0 | | F07-602 | 0 | 49 | 0.0 | |
| F03-102 | 0 | 52 | 0.0 | | F07-501 | 0 | 34 | 0.0 | |
| F04-601 | 0 | 49 | 0.0 | | F07-502 | 0 | 34 | 0.0 | |
| F04-602 | 7 | 49 | 14.3 | G (5) | F07-401 | 2 | 51 | 3.9 | A (2) |
| F04-501 | 0 | 50 | 0.0 | | F07-402 | 1 | 50 | 2.0 | A (1) |
| F04-502 | 1 | 50 | 2.0 | D (1) | F07-301 | 2 | 44 | 4.5 | C (1) E (1) |
| F04-201 | 1 | 51 | 2.0 | G (1) | F07-302 | 4 | 44 | 9.1 | F (3) |
| F04-202 | 0 | 51 | 0.0 | | F07-101 | 0 | 50 | 0.0 | |
| F04-401 | 0 | 52 | 0.0 | | F07-102 | 0 | 49 | 0.0 | |
| F04-402 | 1 | 52 | 1.9 | F (1) | F08-401 | 1 | 52 | 1.9 | C (1) |
| F04-301 | 3 | 48 | 6.3 | F (3) | F08-402 | 7 | 52 | 13.5 | E (6) |
| F04-302 | 1 | 48 | 2.1 | D (1) | F08-301 | 0 | 49 | 0.0 | |
| F04-101 | 0 | 51 | 0.0 | | F08-302 | 0 | 49 | 0.0 | |
| F04-102 | 3 | 51 | 5.9 | D (3) | F08-101 | 1 | 52 | 1.9 | A (1) |
| F05-601 | 1 | 51 | 2.0 | D (1) | F08-102 | 3 | 52 | 5.8 | A (2) |
| F05-602 | 2 | 51 | 4.0 | D (1) | F09-501 | 0 | 44 | 0.0 | |
| F05-501 | 0 | 51 | 0.0 | | F09-502 | 0 | 44 | 0.0 | |
| F05-502 | 0 | 51 | 0.0 | | F09-401 | 1 | 49 | 2.0 | C (1) |
| F05-401 | 1 | 51 | 2.0 | B (1) | F09-402 | 0 | 49 | 0.0 | |
| F05-402 | 0 | 51 | 0.0 | | F09-301 | 8 | 44 | 18.2 | B (5) |
| F05-301 | 7 | 45 | 15.6 | F (7) | F09-302 | 3 | 43 | 7.0 | C (2) |
| F05-302 | 4 | 45 | 8.9 | B (2) C (2) | | | | | |

* MI = total number of mismatched instances, OI = total number of observed instances

Table 4: Error Rates and Primary Reasons (Count) for Error in Statics and Dynamics Lectures

| Student | MI | OI | Err. (%) | Reason | Student | MI | OI | Err. (%) | Reason |
|---|---|---|---|---|---|---|---|---|---|
| S01-01 | 6 | 67 | 9.0 | E (3) | S07-01 | 0 | 72 | 0.0 | |
| S01-02 | 3 | 67 | 4.5 | E (2) | S07-02 | 3 | 72 | 4.2 | A (3) |
| S02-01 | 4 | 74 | 5.4 | I (2) | S11-01 | 10 | 75 | 13.3 | A (4) |
| S02-02 | 1 | 75 | 1.3 | A (1) | S11-02 | 11 | 70 | 15.7 | A (11) |
| S03-03 | 6 | 73 | 8.2 | A (6) | D01-03 | 0 | 72 | 0.0 | |
| S04-01 | 5 | 72 | 6.9 | H (4) | D01-04 | 2 | 72 | 2.8 | D (2) |
| S04-02 | 7 | 72 | 9.7 | A (7) | D02-01 | 9 | 76 | 11.8 | A (9) |
| S05-01 | 7 | 75 | 9.3 | A (7) | D02-02 | 4 | 76 | 5.3 | C (2) |
| S05-02 | 2 | 75 | 2.7 | A (2) | D03-03 | 1 | 75 | 1.3 | A (1) |
| S05-03 | 4 | 72 | 5.6 | C (2) | D05-01 | 0 | 70 | 0.0 | |
| S05-04 | 15 | 72 | 20.8 | A (15) | D05-02 | 5 | 70 | 7.1 | H (4) |
| S06-01 | 0 | 75 | 0.0 | | D05-03 | 4 | 64 | 6.3 | A (4) |
| S06-02 | 0 | 75 | 0.0 | | D05-04 | 1 | 66 | 1.5 | E (1) |
| S06-03 | 5 | 72 | 6.9 | A (5) | D06-01 | 2 | 66 | 3.0 | A (1) D (1) |
| S06-04 | 20 | 73 | 27.4 | A (17) | D06-02 | 4 | 66 | 6.1 | A (4) |

* MI = total number of mismatched instances, OI = total number of observed instances
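The Err.(%) column in Tables 3 and 4 is simply the mismatched instances (MI) expressed as a percentage of the observed instances (OI), rounded to one decimal place. A minimal helper (our own sketch, not the authors' code) reproduces the reported values:

```python
def error_rate(mismatched, observed):
    """Per-student error rate for the active window method: mismatched
    instances (MI) as a percentage of observed instances (OI)."""
    if observed == 0:
        raise ValueError("no observed instances")
    return round(100 * mismatched / observed, 1)
```

For example, student F02-602's 11 mismatches out of 47 observations yield 23.4%, matching Table 3.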



The post Generating a Classroom Pulse from Active Windows on Student Computers appeared first on ASEE Computers in Education Journal.

Mobile Applications to Measure Students’ Engagement in Learning
https://coed-journal.org/2022/12/30/mobile-applications-to-measure-students-engagement-in-learning/
Fri, 30 Dec 2022 21:28:26 +0000

Evidence-based instruction, or active learning, is being more widely implemented in college teaching, and there is a need for instructors, evaluators, and researchers to quantify its implementation in order to, for example, determine the efficacy of a new instructional technique. Here we introduce a new method for measuring students’ level of engagement with their learning. The method relies on an established and research-based theoretical framework and is built in the form of a mobile application for the two most popular smartphone platforms. Five separate studies presented here establish the fidelity of the method, its ability to measure subtle variations among students within the same class, the students’ patterns of learning during out-of-class study periods, and the versatility of the app to make different measurements of learning in different contexts, including an exploratory examination of the impact of the sudden shift to remote learning prompted by the coronavirus pandemic.

The post Mobile Applications to Measure Students’ Engagement in Learning appeared first on ASEE Computers in Education Journal.


Introduction

Over the past several decades, there has been a shift in college teaching, especially in STEM disciplines, toward the use of evidence-based instructional practices (EBIP) 1, 2, 3, 4, 5, which are based on research demonstrating improved student performance when these practices are used. Many of these instructional techniques are aligned with the broad pedagogy of active learning 3, whose primary goal is to increase students’ engagement in their learning. Prince 1 describes active learning as requiring “students to do meaningful learning activities and think about what they are doing” while engaging in the designed activities. Not all active learning is consistently effective 6, however, perhaps because of other factors such as the subject being incompatible with the technique, the instructor’s lack of familiarity with the technique, or a lack of adherence to important aspects of the technique. Even with this caveat in mind, most educational researchers and those engaged with policy making 7, 8, 9 support the use of EBIP and active learning to improve student outcomes. With increased interest in and implementation of EBIP and active learning, there is a need to measure students’ level of engagement with their learning in order to satisfy professional (e.g., teaching evaluation or improvement) or research needs. In this paper, we describe a smartphone-based method for this measurement, compare its salient features to those of other existing methods, and demonstrate its ability to gather information about how students engage with their learning in various engineering contexts.

Background and Context

Brief descriptions of existing methods for measuring learning engagement are provided below, as well as details of the development and implementation of our method. A common characteristic of all the methods is their reliance on measurements made during the learning activity or very soon thereafter. This element is critical since retrospective self-reports (i.e., delayed recall) are known to be highly inaccurate due to recall bias 10, 11. This measurement characteristic is also superior to retrospective recalls because it occurs in real-time, or nearly so, to a specific event of interest and in the subject’s natural ecology, which provides the data with context and ecological validity 12. While the various measurement methods are suitable for a wide range of college subjects, this paper focuses on engineering studies due to the student populations being reported here. Also included is a preliminary examination of data collected during the time of the coronavirus pandemic to investigate the impacts on the students’ patterns and habits of learning.

Existing methods for measuring learning engagement

Several methods for measuring learning engagement already exist in the literature, with several that have appeared in recently published literature and seem well suited for use in classes in which active learning or evidence-based practices are in use. While this brief review of other methods for measuring learning engagement is not meant to be exhaustive, it does present the most salient features of these methods and their advantages and drawbacks.

The Teaching Dimensions Observation Protocol (TDOP) was developed to examine, in a descriptive rather than evaluative way, behaviors and practices that are aligned with “interactive teaching” in a classroom 13, 14. It comprises five categories that represent features of instruction: teaching methods, pedagogical strategies, student-teacher interactions, cognitive engagement, and instructional technology. Two criticisms of TDOP are its reliance on substantial judgement on the part of observers 15 and, as a result, its need for extensive training to reach acceptable levels of interrater reliability 14.

The PORTAAL (Practical Observation Rubric To Assess Active Learning) tool was designed based on a review of the education research literature to identify best practices in active learning 16. It includes 21 elements that have been shown to improve student learning outcomes. PORTAAL’s creators claim that it is easy to learn, is validated, and has high interrater reliability. Its major drawback is that, because so many elements are measured, it requires a video recording for observation and measurement. In addition, the protocol relies solely on observing the instructor, which may not always align with what students are doing.

The Classroom Observation Protocol for Undergraduate STEM (COPUS) 15 was developed to overcome several shortcomings of previous observation protocols and was specifically designed for the modern STEM classroom, in which an instructor might be employing several forms of active learning activities. Its development evolved from the TDOP and, like it, COPUS relies on observing and categorizing what the students and instructor are doing in 2-min. intervals throughout a class meeting. The protocol categorizes these behaviors into 25 codes. Its creators claim that reliability is achieved after a 1.5-hour training period. Importantly, as its authors acknowledged, COPUS cannot judge the cognitive level of the participants since it relies solely on in-class observers for measurements.

New method for measuring learning engagement

As alluded to earlier, our method collects data from individual students rather than either observing the students and/or the instructor, or aggregating data across a cluster of students. This is achieved by building our measurement in the form of a smartphone application (or app), called Actively Learning (ALApp). In this section we describe the theoretical framework on which our measurement method is based, as well as the architecture and technological resources supporting it.

Framework for measurements

Students learn engineering in a variety of contexts and through various activities. They experience various levels of active learning through attending lectures, completing homework assignments, preparing for class, studying for quizzes and examinations, and seeking additional help. To describe these experiences for a complete measure of each student’s quality and quantity of learning engagement, we used a framework developed by Chi and coworkers: the interactive-constructive-active-passive (ICAP) differentiated learning activities 17, 18 .

The ICAP framework classifies learning activities by observable, overt actions of the learner. A passive learning activity is one in which the learner essentially engages in no overt actions. Listening to a lecture, watching a video, and reading text are examples. By contrast, an active learning activity is characterized by overt actions that demonstrate paying attention. Examples include note taking or highlighting of text. (Note that at this point “active” learning has taken on a definition that is quite different than the general use of the term in education, which would classify note taking as a “passive” learning activity. The use of “active” learning here adheres to Chi’s ICAP framework.) If the learner goes one step further and generates additional knowledge or information beyond that which is provided, she is engaging in constructive learning. Solving homework problems alone or resolving questions while reviewing notes alone are examples of this. The final category of interactive learning requires learners to interact with someone (e.g., a peer or expert) or something (e.g., a computer tutor) in order to build on the provided information. There must be an exchange of information between the members, such as defending one’s responses, responding to questions, or correcting noted errors. The conventionally accepted active-learning techniques 1, 2, 3, 4 would be classified as either constructive or interactive in the ICAP framework. Furthermore, based on the possible underlying cognitive mechanisms being activated by each kind of activity, the expected learning gains should increase in the order of passive < active < constructive < interactive; this is supported by the studies cited by Fonseca and Chi 18 .

Our measurement method adopts the ICAP framework to measure the quality of active learning (or engagement level) experienced by study participants, with passive learning being the lowest quality and interactive learning the highest. The quantity of active learning is then simply the amount of total time students expend under each of the four ICAP categories for each course. A smartphone app to capture these data is desirable since it is convenient and familiar to students and facilitates data collection and storage. The app also sends reminders to the student after each scheduled class lecture or study session, as well as a few other times throughout the day to capture other learning experiences (e.g., study or homework time, or office-hour visit). The student then records the quality and quantity of each learning experience within the app, which stores these data locally and uploads them automatically to a server whenever it connects to the Internet.
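A minimal sketch of the kind of record the app captures for each learning event follows; the class and field names are illustrative assumptions on our part, not the app's actual data model.

```python
from dataclasses import dataclass, field
from datetime import datetime

ICAP_LEVELS = ("Interactive", "Constructive", "Active", "Passive")

@dataclass
class ICAPEntry:
    """One learning-event record as described in the text: minutes spent
    at each ICAP level for a single event (class, homework, office hour...).
    Field names are hypothetical."""
    student_id: str
    course: str
    event: str                      # e.g. "lecture", "homework", "office hour"
    minutes: dict                   # ICAP level -> minutes
    recorded_at: datetime = field(default_factory=datetime.now)

    def total_minutes(self):
        """Quantity of learning: total time recorded for this event."""
        return sum(self.minutes.values())

    def engagement_fraction(self):
        """Quality proxy: share of this event spent at the two highest
        ICAP levels (Interactive + Constructive)."""
        hi = self.minutes.get("Interactive", 0) + self.minutes.get("Constructive", 0)
        total = self.total_minutes()
        return hi / total if total else 0.0
```

A 50-minute lecture logged as 10 Interactive, 20 Constructive, 15 Active, and 5 Passive minutes would score an engagement fraction of 0.6.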

App development and architecture

A primary product of this project is the software application that was developed for collecting student data. This section describes ALApp’s software architecture which involves the selection of software technologies and their organization. Software technologies evolve rapidly, with new technologies emerging frequently. As a result, selecting a robust, secure software architecture that will remain stable over time and can evolve to support unforeseen features is both challenging and essential.

a) Technology stack. Figure 1 shows the organization of the primary software technologies used to create ALApp. Parse Platform, a mobile backend-as-a-service, powers ALApp. Parse Platform provides native iOS and Android Software Development Kits (SDKs) and a push notification service, and was chosen over alternative services for having native SDKs, a hosted cloud service, and a generous no-cost tier. The iOS version of ALApp uses the standard iOS SDK and the Swift programming language. The Android version uses the standard Android SDK and Java. Developing native iOS and Android apps was chosen at the time over using a cross-platform tool, such as Xamarin or PhoneGap, based on lower perceived risk and the development team’s existing expertise. Additionally, push notifications to non-native mobile web solutions had significant restrictions compared to native apps. Parse, Inc. was acquired by Facebook in 2013; the hosted service was eventually shut down, but the software was released as the open-source Parse Platform. We hosted our own Parse Platform instance on a Linode server. Parse Platform provides a web Dashboard for convenient administration of the Parse database and, despite the release of several new versions over the lifetime of this study, it has remained stable enough to support additional ALApp features. The remaining pieces of the architecture include a ‘Class Scraper’ script that we use at the start of each academic term to collect the course information (course name, instructor, days and times of class meeting) from the university’s public database and to populate ALApp, and a database (MongoDB) that stores all data supporting and collected by ALApp.

https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/9df3ffb3-5c56-4ff7-995b-76d4f11a44d7image1.png
Figure 1: Actively Learning software architecture

b) Database schema. Figure 2 shows the database schema implemented in Parse for the ALApp. Parse Platform uses MongoDB as its backend database. Arrows represent a Parse Platform reference from one table to another, called a Pointer. The Installation table keeps track of the Universally Unique Identifiers (UUIDs) of mobile devices required to send push notifications. The RegCodes table contains the list of approved codes to log students into the ALApp. The AvailableClasses table contains the list of classes from which students select their target courses. RegisteredClassTimes holds the list of classes students selected, and the association with a particular student. ICAPActivity holds the recorded student activity data.
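To make the schema concrete, the following hypothetical documents illustrate how the Pointer references described above chain an ICAPActivity entry back to its course. Every field name beyond the table names in Figure 2 is an illustrative assumption; Parse stores Pointers natively, shown here as plain id references.

```python
# Hypothetical example documents for three of the Figure 2 tables.
available_class = {                # AvailableClasses: one selectable course
    "objectId": "cls001",
    "courseName": "ME 211",
    "instructor": "Instructor 1",
    "meetings": [{"day": "Mon", "start": "10:10", "end": "11:00"}],
}
registered_class_time = {          # RegisteredClassTimes: student <-> class link
    "objectId": "reg001",
    "studentId": "user42",         # Pointer to the registered student
    "classId": "cls001",           # Pointer to AvailableClasses
}
icap_activity = {                  # ICAPActivity: one recorded entry
    "objectId": "act001",
    "registrationId": "reg001",    # Pointer to RegisteredClassTimes
    "event": "lecture",
    "minutes": {"I": 10, "C": 20, "A": 15, "P": 5},
}

def resolve_course(activity, registrations, classes):
    """Follow the Pointer chain ICAPActivity -> RegisteredClassTimes ->
    AvailableClasses to find the course an entry belongs to."""
    reg = next(r for r in registrations if r["objectId"] == activity["registrationId"])
    return next(c for c in classes if c["objectId"] == reg["classId"])["courseName"]
```

Resolving `icap_activity` through the two reference hops returns "ME 211".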

https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/9df3ffb3-5c56-4ff7-995b-76d4f11a44d7image2.png
Figure 2: Actively Learning database schema

c) Cloud code. ALApp uses Parse Cloud Code’s Parse JavaScript SDK to interact with the database on the server side. Cloud Code powers the push notification service, email reminder system, and a Python-based web scraper to automatically populate class data for the AvailableClasses table at the start of each academic term. Cloud Code functions ensure push notifications are sent accurately and according to the specified schedule. Functions also monitor the database to send automatic email reminders to students when expected data entries are missed, and ingest the class data from the Python scraper.
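The production reminder logic runs as Parse Cloud Code in JavaScript; the following Python sketch shows only the decision rule one might use for the missed-entry emails. The data layout and the 24-hour grace period are illustrative assumptions, not the study's implementation.

```python
from datetime import datetime, timedelta

def missed_entry_reminders(expected_events, recorded_entries, now, grace_hours=24):
    """Return (student, event_id) pairs that should receive an email
    reminder: an entry was expected after a scheduled event, but none
    was recorded within the grace period.

    expected_events: dicts with "student", "event_id", "ends_at" (datetime)
    recorded_entries: dicts with "student", "event_id"
    """
    recorded = {(e["student"], e["event_id"]) for e in recorded_entries}
    reminders = []
    for ev in expected_events:
        overdue = now - ev["ends_at"] > timedelta(hours=grace_hours)
        if overdue and (ev["student"], ev["event_id"]) not in recorded:
            reminders.append((ev["student"], ev["event_id"]))
    return reminders
```

A scheduled function running this check periodically would email only students whose expected entries are both missing and past the grace window.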

Summary of app and comparison with other methods

ALApp sends reminders (notifications) to students to record both the quantity (measured by time) and quality (based on ICAP scale) of learning engagement throughout a day in order to minimize errors due to memory recall. Notifications are sent immediately after a class meeting (Figure 3 ) and otherwise every three hours, from 10 am to 10 pm (Figure 4 ). Figure 5 shows the user interface for making a data entry in the app. Note that, from this screen, the user can tap on each of the I, C, A or P letters to pop-up a brief definition of that level of the scale as a reminder of the ICAP framework. The app is connected via the cellular network or Wi-Fi to a server that stores all of the data (and also displays prior data, which can be edited or deleted if necessary). What is stored on the server, therefore, is a database containing all users, the course or courses being tracked for each participant, and the ICAP data for both in- and out-of-class learning periods. The latter contains time spent under each of the I, C, A or P levels, the learning event (e.g., class or homework or office hour) being recorded, and the date and time of each set of ICAP entries. The database can be exported from the server and imported into common software for analysis. ALApp differs from other methods to measure student engagement in learning in three important ways: (1) It collects data from the students’ viewpoint instead of observing what the instructor does; (2) it measures learning during and outside of class meetings; and (3) data are collected from each student rather than aggregated across all students or a cluster of students in a class. A comparison of the most salient features among the various measurements methods is presented in Table 1 .

https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/9df3ffb3-5c56-4ff7-995b-76d4f11a44d7image5.png
Figure 3: Screenshot from an iOS-based smartphone of post-class notifications
https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/9df3ffb3-5c56-4ff7-995b-76d4f11a44d7image4.png
Figure 4: Screenshot from an iOS-based smartphone of non-class notifications
https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/9df3ffb3-5c56-4ff7-995b-76d4f11a44d7image3.png
Figure 5: Screenshot from an iOS-based smartphone of the user interface for data entry. Android-based smartphones are similar.
Table 1: Comparison of various methods of measurement for learning engagement

| Measurement method | Data recorder | Data source | Data type | Training required | In-/out-of-class data |
|---|---|---|---|---|---|
| Teaching Dimensions Observation Protocol (TDOP) | External observer | Instructor and students | Qualitative | Extensive | In-class |
| Practical Observation Rubric To Assess Active Learning (PORTAAL) | External observer or instructor | Instructor | Quantitative | 4-5 hours | In-class |
| Classroom Observation Protocol for Undergraduate STEM (COPUS) | External observer | Instructor and students | Quantitative | 1.5 hours | In-class |
| Actively Learning app (ALApp) | Student | Student | Quantitative | ~1 hour | In-class and out-of-class |

Study Methods

All five studies reported here took place at a large, western-U.S., state-supported university. The participants were a convenience sample of compensated volunteers drawn from the courses that were the focus of each study, although the subject of each course was not itself salient to the study. Participation in the study did not have any effect on a participant’s grade.

Study 1 comprised 14 students taking an introductory thermodynamics course during the same academic term. The participants were a mix of engineering majors and years of study. The participants were trained on the ICAP framework and use of the ALApp during a ~50 min. training session conducted in person approximately one week prior to the start of the study. Study 2 involved 42 mechanical engineering students who were at approximately the same point in their academic careers: the start of their second year of studies. The participants were recruited from students taking the first mechanics course (engineering statics) within a sequence of five mechanics courses in the curriculum. The participants were trained on the ICAP framework and use of the ALApp through two online modules created by the investigators and hosted on the university’s learning management system (Moodle). The training was estimated to take approximately 45 min. Three participants from the Study 2 sample were selected at random and their learning patterns examined in detail for Study 3. Study 4 comprised 29 students in the Software Engineering Capstone course, which had a total enrollment of 68 students. Finally, Study 5 took place during the 2020 coronavirus pandemic (while the previous four studies took place prior to it) and compared the study patterns of students prior to and during the forced shift to remote (online) learning.

Results and Discussion

Results from five studies are presented and discussed below. Study 1 and Study 2 have been presented in a prior conference 19 but are summarized here to provide context and validity for the remaining three studies that are the foci of this paper to demonstrate the types of data gathered and provide insights into students’ engagement with learning.

Study 1

The primary purpose of Study 1 was to validate the fidelity of the data recorded by the students through ALApp. Fourteen students taking an introductory thermodynamics course from one of two instructors were the participants. Instructor A relied almost exclusively on lecturing during classes while Instructor B used an active learning pedagogy that requires students to do individual work before each class meeting and, during class, to work in long-term groups to solve problems or complete quizzes. Instructor B used brief lectures (< 10 min.) to set the context for each day’s activity.

The findings from the study 19 showed the difference in pedagogy between the two instructors was clearly seen in the data: Students in the lecture class recorded nearly all Active and Passive engagement, while the active learning class recorded a mix of all four engagement levels, with a majority being at the Constructive or Interactive levels. The data also showed that variations existed between participants within each class, and these individual variations were confirmed by two investigators who attended a randomly selected class to make direct observations of students (recall that the ICAP framework relies on overt, observable actions). The agreements between the investigators and between each investigator and the participant were very good and, importantly, the data showed that variations in level of engagement did indeed exist between students within the same class. This finding points to the importance of tracking student engagement individually as opposed to an average across all or a cluster of students.

Study 2

The study period was the final four weeks of the 10-week quarter during which the students were taking the first of a sequence of mechanics course (engineering statics). The students were enrolled in one of 14 possible sections of the course, taught by six different instructors. Most instructors relied on traditional lecturing, but one instructor used an informal active learning method in which a topic was briefly introduced and the students were formed into ad hoc teams of two or three people to work through problems as the instructor roamed the class to observe and assist. The objectives of Study 2 were to confirm the variations in levels of engagement among students within the same class and to examine the students’ learning habits outside of class.

The data 19 confirmed again that, even within the same class and regardless of the pedagogy used, students cognitively experience each class differently, as exhibited by their reported ICAP time distributions. These relatively small variations between students, however, did not mask the demonstrable difference between the instructors’ pedagogical styles (i.e., a more active learning class vs. a purely lecture class). The students’ out-of-class study habits were surprisingly varied, as measured by the number of out-of-class study events, ranging from an average of just under one such event per week to over 10 per week. The vast majority of these events were for homework, but significant numbers were also recorded for office-hour visits, group study, or reviewing of notes. The variation in the students’ frequency of out-of-class entries did not, however, result in a large variation in the total amount of time spent in out-of-class study. This finding was independent of the instructor (and therefore the instructional mode) and likely reflects differences between students’ study strategies.

Study 3

The objective of this study is to examine more closely the learning habits of three participants, randomly drawn from Study 2, as they progress through the mechanics sequence within the curriculum. While these students are not necessarily representative of the entire study population, nor is this study trying to draw conclusions about particular learning habits or patterns, this initial examination provides a first glimpse of how students navigate a complex curriculum while learning increasingly challenging content under various instructional methods. The sole criterion used to select these three participants for comparison is that all three completed each course in the mechanics sequence at the intended time designated by the curriculum, and therefore all three completed each course concurrently. The majority of the 42 participants from Study 2 met this criterion and these three were randomly selected. All three students are high achieving, with current overall grade point averages above 3.65 (out of 4.00).

Table 2 examines the quality of the in-class learning experiences of the participants and shows the percent of total class meeting time for each course that was spent at the Interactive + Constructive levels of cognitive engagement. Similar to Study 1, the pedagogical style of various instructors can easily be discerned from the data. For example, all three students had the same instructor for ME 211 and it is clear that they were actively engaged for much of these class meetings. A similar conclusion can be drawn for ME 212, where two different instructors were involved.

Interestingly, the data seem to suggest that Student B was able to self-motivate and cognitively engage at the Interactive and Constructive levels during class, regardless of the instructor’s teaching style. This can be seen when comparing the data from Table 2 for CE 204, CE 207 Tutorial, CE 207 and ME 326 Tutorial. Student B always reported a high level of cognitive engagement while one or both of the other students, who had the same instructor, did not. This suggests, perhaps, that some students are able to motivate themselves to engage with the class regardless of the pedagogical style of the instructor. This finding again highlights the importance of measuring learning engagement for individual students instead of an aggregate of them.

Table 2: Percent of total class meeting time spent at the I+C levels

| Course (b) | Student A | Student B | Student C |
|---|---|---|---|
| ME 211 | 45.0 (a,1) | 59.2 (1) | 49.0 (1) |
| CE 204 Tutorial | 61.3 | 77.3 | 23.8 |
| CE 204 | 2.9 (2) | 43.7 (2) | 0.0 (2) |
| ME 212 | 43.9 (3) | 66.0 (3) | 79.6 |
| CE 207 Tutorial | 36.6 (4) | 71.6 (4) | 32.8 |
| CE 207 | 11.1 (5) | 69.0 (5) | 0.0 |
| ME 326 Tutorial | 24.2 (6) | 51.4 (6) | 97.5 |
| ME 326 | 10.0 (7) | 58.6 | 12.8 (7) |

a. Identical numerical superscripts denote the same instructor for that course

b. ME 211 = statics; CE 204 = mechanics of materials I; ME 212 = dynamics; CE 207 = mechanics of materials II; ME 326 = intermediate dynamics
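The percentages reported in Table 2 can be derived from the raw ICAP entries by summing the Interactive and Constructive minutes over all meetings of a course. A minimal sketch, assuming each entry is a dict of minutes per ICAP level (a data layout of our own choosing):

```python
def ic_percent(entries):
    """Percent of total class-meeting time spent at the Interactive or
    Constructive levels, aggregated over all meetings of one course --
    the quantity reported in Table 2. Each entry maps an ICAP level
    ("I", "C", "A", "P") to minutes."""
    ic = sum(e.get("I", 0) + e.get("C", 0) for e in entries)
    total = sum(sum(e.values()) for e in entries)
    return round(100 * ic / total, 1) if total else 0.0
```

Two 50-minute meetings with 20 and 5 I+C minutes, respectively, would yield 25.0%.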

The learning patterns and habits of these three students during out-of-class time are examined in Table 3. For this comparison, only the lecture portion of each of the five courses is included (tutorials were excluded). The values shown in Table 3 represent the averages per student for all five mechanics courses.

As expected, doing homework was the most common out-of-class activity for all three students. These students, to varying degrees, also attended office hours, reviewed their textbook, practiced with additional problems and attended study groups. Generally, all three students used a variety of study strategies (Table 3, first row), but the frequency of these uses varied widely (second row). While Students B and C averaged approximately 30 out-of-class study entries per course, Student A averaged 56.4 entries. Student A also averaged the most time spent per course in out-of-class studying, while the other two students had similar study times (third row). Interestingly, of the total out-of-class study times, the average time per course devoted to completing homework was roughly the same for all three students, ranging from 1471 to 1700 min. per course in a 10-week term (fourth row). This finding suggests that Student A spends a majority share (51.8%) of out-of-class studying time on activities that are not mandatory. Such a finding might suggest, perhaps, better self-regulatory behavior for Student A in comparison to another student who spends the vast majority of out-of-class time on completing homework only. It should be pointed out that Students B and C, while spending roughly one-third of their out-of-class time on non-mandatory studies, are also academically very strong, which suggests that their approach may be sufficient and more efficient than Student A’s.

Table 3: Out-of-class study habits and patterns (c)

| | Student A | Student B | Student C |
|---|---|---|---|
| Number of types of study strategies | 4.8 | 4.4 | 2.8 |
| Number of entries for all strategies | 56.4 | 30.3 | 27.6 |
| Total time of all entries (min.) | 3190 | 2527 | 2341 |
| Total time of all entries spent on homework (min.) | 1538 | 1700 | 1471 |
| Percent of time not doing homework | 51.8% | 32.7% | 37.2% |

c. Values shown represent averages for all five mechanics courses

Study 4

The purpose of this study is to demonstrate the versatility of the ALApp for conducting studies with other objectives. One such possibility is measuring the effort and teamwork that team members contribute while working on a group project. This study was conducted in a single-term, senior capstone project in software engineering, in which students form teams and design a software solution for an actual client. In such a learning environment, the ICAP framework of learning engagement would not be relevant; what is valuable instead is a measurement of the students' efforts toward the project and the mode of work involved. Twenty-nine of 68 students in the course were compensated volunteers for this study. Participation in the study had no effect on the students' grades, and the course instructor was blinded as to which and how many students were participating.

For this study, the participants were presented not with the ICAP categories but with four different modes of work on their project: TPIR – Team, Partial team, Individual, and Remote (online or at a distance, regardless of whether the work was done alone, with part of the team, or with the entire team). As in the previous studies, students were prompted by notifications immediately after a class meeting, or at fixed intervals otherwise, to enter the amount of time they had worked on their project in each of the four possible modes. The modification of the ALApp to accommodate this study was simple and merely involved flagging the participants in this study differently from the other participants in the server's database (two additional studies using the ICAP framework were being undertaken simultaneously). This flag triggered the ALApp to fetch the TPIR categories and present these to the participants instead of the ICAP categories. The notifications did not need to be modified for this study.
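The flag-based dispatch described here might look like the following simplified server-side sketch. All names (categories_for, the flag value "TPIR") are hypothetical illustrations, not ALApp's actual code.

```python
# Hypothetical sketch of the per-participant category dispatch described above.
ICAP_CATEGORIES = ["Interactive", "Constructive", "Active", "Passive"]
TPIR_CATEGORIES = ["Team", "Partial team", "Individual", "Remote"]

def categories_for(participant_flags: dict, participant_id: str) -> list:
    """Return the entry categories the app should present, based on the
    flag stored for this participant in the server's database."""
    if participant_flags.get(participant_id) == "TPIR":
        return TPIR_CATEGORIES       # capstone-project study participants
    return ICAP_CATEGORIES           # default: engagement-framework categories
```

Keeping the dispatch server-side, as the text describes, means the mobile app itself needs no modification beyond fetching whichever category list the server returns.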

Figure 6 presents the time distributions of all 29 participants for the 10-week project. Each participant’s total time spent on the project is shown as a stacked column that includes time working remotely, individually, as a partial team, or with the entire team. The number above each column is the total number of entries by that participant in ALApp. Each separated cluster in Figure 6 denotes a separate project team; no team had all members participate in this study.

The variation in effort (as measured by total time spent) among the participants is striking, varying from 12.8 to 100.8 hours over the 10-week project. Although there was some relationship of this time between team members of the same project (e.g., members 1, 2, and 3; total times of 70.0, 76.7, and 87.3 hours, respectively), there was also evidence of large disparities within a team (e.g., members 11-15; total times of 69.2, 66.3, 32.3, 24.7, and 79.0 hours). Finally, the differences in how each team member worked on the project is also apparent in the data. For example, members 1-3 worked almost exclusively either individually or with the whole team, while members 26 and 27 worked roughly equally independently, in a partial team, or with the whole team. These differences may reflect the preferences of team members or could have been necessitated by the nature of the project, though we cannot make this distinction from the data.

Figure 6 also shows the different patterns or habits by which the participants worked on the project, as indicated by the number of entries that each participant made through the app. There was a large variation in this measure, from 8 total entries (an average of less than one entry per week) to 59 (nearly six entries per week). While there is some relationship between the number of entries and the total time devoted to the project, there were exceptions. For example, participants 11 and 12, who were members of the same team, made 32 and 18 total app entries. Their total times spent on the project, however, were similar at 69.2 and 66.3 hours, demonstrating different preferences for frequent, shorter work periods versus less frequent, longer work periods; alternatively, such differences may signal the different requirements of each team member to complete their respective tasks for the project.

While ALApp can make measurements of effort toward a project, it cannot, by itself, speak to the efficiency of this work, the value of the effort, or the effects of working individually vs. collaboratively and remotely vs. face-to-face. To do that, measures of quality, such as individual or team project scores or peer evaluations of team members need to also be considered. In a future publication we will explore how measurements made through ALApp are correlated with or support such measures of performance, both individually and as a team, and what they tell us about the efficiency of the participants in achieving their individual performance.

https://typeset-prod-media-server.s3.amazonaws.com/article_uploads/f31051ba-e2fb-45ac-942f-f0654219371c/image/1bf6da8a-95d5-4df1-a2c2-cf50561e93b7-ufigure4chen.png
Figure 6: Time distributions among all 29 study participants for a senior software-engineering capstone project. Each separated cluster represents members of the same team. The number above each participant is the total number of entries by that participant during the 10-week-long project.

Study 5

On March 14, 2020, just prior to final examinations for the winter term (the university operates on a three-term, September to June, academic year), the university shifted all further instructional activities to remote, online learning due to the coronavirus pandemic. This resulted in the spring term being completely online, with little time for students and faculty to prepare for the transition. The multitude of changes and adaptations presented an opportunity to use ALApp to examine how the pandemic affected the students' patterns of learning. We did this by looking at the same students' study patterns before and during the pandemic term. (We also recruited and trained a new cohort of students at the start of the shift to online instruction with the goal of looking at different students' study patterns in the same course before and during the pandemic. We plan to describe the full research findings in a separate publication since the focus of this paper is on the capabilities of the ALApp as a research tool.)

In the academic term prior to the pandemic, 16 mechanical engineering students formed the final cohort that had been tracked for five prior consecutive terms through the ALApp to learn about their study patterns and habits in the first two years of engineering studies. Here we compare their study patterns immediately before and during the shift to online learning. Figure 7 and Figure 8 show, for each student, the in-class distribution of learning engagement at the Interactive plus Constructive (I+C) levels during the two comparison terms. These two levels of engagement are highlighted since they are the most cognitively engaging 17, 18 and likely the most challenging to achieve in an online format. This analysis includes only lecture courses (as opposed to laboratories or tutorials) in science, math, and engineering, and the number of courses ranged from two to five depending on each student's schedule.

https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/9df3ffb3-5c56-4ff7-995b-76d4f11a44d7image6.png
Figure 7: Per-class average levels of engagement at the I+C levels, expressed as a percentage of the average for each class meeting (i.e., I+C+A+P = 100% for every class) to account for the variation in class duration. Data derived from the pre-pandemic, Winter ’20 term. For each of the 16 students tracked, the number of classes for that student is shown above the student identifier (e.g., “Stdnt 1” had three classes in the Winter ’20 term and five classes in the Spring ’20 term). For the classes in each term, same numbers or letters along the x-axis indicate the identical class during that term.
https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/9df3ffb3-5c56-4ff7-995b-76d4f11a44d7image7.png
Figure 8: Per-class average levels of engagement at the I+C levels, expressed as a percentage of the average for each class meeting (i.e., I+C+A+P = 100% for every class) to account for the variation in class duration. Data derived from the pandemic, Spring ’20 term. For each of the 16 students tracked, the number of classes for that student is shown above the student identifier (e.g., “Stdnt 1” had three classes in the Winter ’20 term and five classes in the Spring ’20 term). For the classes in each term, same numbers or letters along the x-axis indicate the identical class during that term.

Several features are prominent from a visual inspection of Figure 7 and Figure 8 . First, the frequency of engagement at the Interactive level was much higher in the pre-pandemic term than in the pandemic term. This is not surprising given the difficulty of having students interact with one another during online classes, although it is worth noting that several classes still achieved this to a significant extent on average (Figure 8 ). Second, it is mildly surprising that during the pandemic term, many classes still engaged students at the I+C levels when it might be expected that instructors would rely on pure lecturing in online classes, which would have led to mostly Passive and Active (i.e., listening and notetaking) levels of engagement. Still, the frequency of classes achieving I or C levels of engagement was substantially lower, as shown in Table 4 . Of the total classes tracked by the 16 students during each term, the percentages of class meetings with no engagement at the I+C, I, or C levels were all substantially higher in the pandemic term than in the pre-pandemic term. Surprisingly, however, the average percent of class time spent at the A+P levels was nearly identical between the two terms (last row of Table 4), suggesting that those pandemic-term classes which did engage students at the I or C levels did so in substantially higher amounts compared to the pre-pandemic term. This suggests two possible explanations: (1) these instructors invested great effort in designing class meetings that engaged students in substantial amounts of learning requiring the construction of new knowledge, or (2) the students adapted during this period of challenging learning.

Table 4: Variation in engagement levels during the two comparison terms

                                                     Winter '20        Spring '20
                                                     (pre-pandemic)    (pandemic)
Number of classes tracked                                 50                59
Percent of class meetings with no I+C engagement           6.0              11.9
Percent of class meetings with no I engagement            34.0              62.7
Percent of class meetings with no C engagement             8.0              15.3
Average percent of class time spent at A+P levels         38.9              39.1

For the same set of courses represented by the data of Figure 7 and Figure 8 , Table 5 examines the students' use of out-of-class learning time, averaged over all courses for the 16 students during the two comparison terms. The results show that during the pandemic term the students, on average, used somewhat less variety in their study strategies (i.e., fewer types of entries), studied less frequently (i.e., fewer entries), and spent less total time per class outside of class meetings. Note that the results from both terms show high variation (as indicated by the large standard deviations), demonstrating the widely varying ways that students studied for each class. In addition, the average percent of out-of-class time spent on homework decreased slightly during the pandemic term, which, when combined with the lower total time devoted to each class, means that time devoted to homework was substantially reduced. What is not clear, and perhaps more important, is whether these changes occurred because of lower levels of motivation or engagement brought on by the shift to online learning, because the courses were fundamentally changed to require less engagement under the circumstances of the pandemic, or because of stress-induced distractions caused by the pandemic.

Table 5: Out-of-class study habits and patterns (d)

                                        Winter '20 (pre-pandemic)   Spring '20 (pandemic)
                                        Average (Std Dev)           Average (Std Dev)
Number of types of study strategies     2.9 (1.2)                   2.4 (1.2)
Number of entries                       17.7 (12.0)                 15 (14.5)
Total time of all entries               1767 min. (1225)            1488 min. (1093)
Percent of time spent on homework       69.4% (28.3)                67.6% (30.4)

d. Values shown represent averages for all 16 students across all courses

Summary and Conclusions

We introduced here a new method for measuring the level of student engagement with their learning. This method was developed within an engineering-learning context, but we believe it is applicable to most college-level disciplines. Furthermore, it is compatible with nearly all pedagogies currently in use in higher education. The method, called ALApp, is built in the form of mobile applications for the smartphone and is based on a well-researched educational framework designed to evaluate student engagement.

ALApp shares many common features with other modern methods of measuring student engagement. These include data recording at or near the time of each learning event to eliminate recall bias, quantitative measures of both the quality and quantity of student engagement, accommodations for active learning or evidence-based instructional practices, and a reasonable level of training by the user for accurate measurements. ALApp differs from the other measurement methods in three ways: (1) measurements are made by individual students rather than relying on observing the instructor or representative students; (2) measurements made at the individual-student level capture differences between students instead of averaging over a cluster of observed students; and (3) both in- and out-of-class learning are measured.

We described two studies that support the measurement accuracy of ALApp and its ability, through examination of the data, to discern both the type of pedagogy in use and the sometimes subtle differences between students in the same class. A third study demonstrated how ALApp is able to reveal the learning patterns and habits of engineering students in a single course or a sequence of courses, and how these patterns shed light on the complex ways that students approach learning. A fourth study demonstrated the versatility of ALApp: rather than measuring student engagement at a cognitive level, it was adapted to measure individual students' contributions to a group project. The final study, which took place during the academic term that was forced online by the coronavirus pandemic, revealed the subtle yet substantial differences in students' learning patterns that resulted.

This project had its start in 2014, when we decided to build a tool for gathering student learning engagement data during both in- and out-of-class times. Based on the features and functions that we required, and to address our concerns with ease-of-use and security around data and user-privacy, we believed that building a native mobile application was the best solution. In fact, we saw no other option. Admittedly, we could have gone in many directions with the app’s design and architecture, but with our team’s background and skillset, ALApp was created. Today, there are options beyond a native app for accomplishing the original goals of this project, but we still believe that a native app is the optimal solution.

While we do not anticipate that ALApp would be suitable for instructor use in assigning grades or participation points, since it could be easily manipulated by the participant, we do foresee its use as a tool for educational research, as an evaluative tool for measuring classroom practices and/or instructional efficacy, and perhaps even as a student tool to evaluate a course and to provide course reviews to prospective students.

The post Mobile Applications to Measure Students’ Engagement in Learning appeared first on ASEE Computers in Education Journal.

Use of Open-source Software in Mechatronics and Robotics Engineering Education – Part II : Controller Implementation https://coed-journal.org/2022/12/30/use-of-open-source-software-in-mechatronics-and-robotics-engineering-education-part-ii-controller-implementation/ Fri, 30 Dec 2022 20:54:13 +0000 https://coed-journal.org/?p=4280 This paper is the second part of a two-part study on promoting the use of Open-Source Software (OSS) in Mechatronics and Robotics Engineering (MRE) education. Part I demonstrated the capabilities and limitations of several popular OSS, namely, Python, Java, Modelica, and GNU Octave, in model simulation and analysis of dynamic systems, through a DC motor example. The DC motor was chosen as a representative of a large class of dynamic systems described by linear differential equations. The perceptions of MRE community members about the OSS and their applications, gathered through an online survey, were also presented in Part I. In this paper, another fundamental pillar of MRE systems development, i.e., controller implementation, is considered.


Embedded code is currently not displaying correctly in html view. Please view PDF while we address this issue. – COED Editorial Team

This article is the second of a two-part series discussing the use of Open-Source Software in Mechatronics and Robotics Engineering. View Part I in Volume 12 Issue 3

Introduction

The field of Mechatronics and Robotics Engineering (MRE) has experienced organic and rapid growth in the past few decades, mainly thanks to technological advancements in control systems, electronics, computers, and connectivity, and to increased demand for robotics and automation in industry. This ongoing progress has increasingly resulted in the development of new job roles such as mechatronics or robotics engineers and specialists. To prepare the next generation of engineers to fulfill these responsibilities, various stand-alone courses have been offered in Mechanical Engineering, Electrical Engineering, and Computer Science departments. In recent years, there has been a transition in higher-education institutions to develop minors and majors in Mechatronics and/or Robotics Engineering to meet industry demands. The authors in 1 share their experiences and the lessons they learned during the 10 years since they started one of the first Robotics Engineering programs in the United States.

An essential part of any MRE educational program should be to provide its students with an interdisciplinary knowledge of mechanical, electrical, computer, software, and systems engineering. Robotics courses have traditionally provided an opportunity to educate students with such interdisciplinary knowledge. The entertaining nature of robots has further established them as attractive learning and motivational platforms for K-12 and freshman students 2, 3, 4. In the past few decades, numerous efforts have been undertaken to develop different robotics courses, some of which are reported in 5, 6, 7, 8, 9. Lessons and experiences gained through these valuable efforts have also been published in the literature to provide a roadmap for community members who plan to offer robotics courses or to develop new ones 10, 11, 12, 13, 14. Although some of these courses use a commercial robot platform such as Lego Robotics 9, 14, VEX Robotics 15, Turtlebot 16, 17, etc., others employ a custom-built robot platform using open-source hardware such as Arduino 18, 19, 20 and Raspberry Pi 15, 21. Open-Source Software (OSS) in robotics courses has mainly taken the form of software for microcontroller programming or hardware interfacing, such as C++ for Arduino, Python for Raspberry Pi, and Robot Operating System (ROS) as an overall robot software framework 17, 22. In this work, as the second part of a two-part paper, OSS such as Python, Java, Modelica, GNU Octave, and Gazebo are used to implement a PID controller for 2-DOF robot arm simulations. While this robot arm is simple enough to be implemented in an undergraduate-level course, it is complex enough to expose students to more advanced topics in simulation and control design for nonlinear dynamic systems.

Control systems have been the cornerstone of many technological advancements since the early 20th century. They play a fundamental role in industrial automation, transportation, the energy industry, and other emerging areas such as robotics, manufacturing, IoT applications, and cyber-physical systems. Therefore, the majority of related engineering disciplines include several control courses in their curricula. The MRE field, in particular, relies heavily on control systems, as controls can be thought of as the joints linking the various disciplines involved in a system. Consequently, MRE students need to master the design and implementation of control systems to be successful in their careers and to be able to design smart and autonomous systems and processes that will improve human life and welfare.

Commercial products such as Matlab provide extensive and convenient tools for the design and implementation of control systems. Although Matlab and its related toolboxes are commonly available to students in higher-education institutions, students typically lose access to the complete suite of Matlab products once they graduate. Moreover, many industries are migrating toward OSS due to its numerous advantages, such as lower ownership costs, higher flexibility and customizability, improved reliability and accessibility, and wider community support. Applying OSS to develop and implement control algorithms can further expose students to the development details of control systems, an aspect that is typically hidden when using advanced tools such as Matlab and its toolboxes. Therefore, familiarizing students with the application of OSS for control implementation can equip them with skills they will need in the future.

As mentioned earlier, the 2-DOF robot manipulator is considered in this work as a showcase to demonstrate the application of OSS in control implementation and closed-loop simulation. The dynamics of robot manipulators are governed by Euler-Lagrange equations, which result in nonlinear differential equations; the robot manipulator is therefore representative of a larger class of dynamic systems. The robot manipulator is assumed to be controlled by a discrete-time PID controller to follow a pre-defined reference trajectory. PID controllers are extensively used in various applications and constitute the majority of controllers in industry. Although most MRE students are exposed to PID controllers and their design, they seldom get to implement and tune a PID controller from the ground up. Implementing PID controllers using OSS can familiarize students with the entire design and implementation process. Furthermore, they would also be prepared for practical implementation of PID controllers, as OSS can be used onboard MRE hardware. The ultimate goal of this work is to promote the use of OSS in MRE education and help with its widespread adoption. The OSS in this work can be introduced or used in a wide range of MRE-related courses, from freshman introductory to senior and graduate-level advanced courses and even senior design projects, offered in Mechanical Engineering, Electrical Engineering, Mechatronics and Robotics Engineering, and Computer Science programs. Such courses include introduction to computing/programming, dynamic system modeling, and introductory and advanced courses on controls, mechatronics, and robotics.

This paper is organized as follows: Section 2 outlines the dynamics of the 2-DOF robot manipulator including the governing Euler-Lagrange equations and the parameters used in simulations. Section 3 details the controller implementation and the desired closed-loop behavior. Furthermore, important code snippets demonstrating the controller implementation using each of the OSS are given in this section. Finally, Section 4 provides an overview of the capabilities, limitations, and the potentials of each of the OSS to be used in MRE education.

Robot Manipulator Case Study

The robot considered in this work is a 2-DOF planar arm, shown in Figure 1 below.

https://typeset-prod-media-server.s3.amazonaws.com/article_uploads/9f3a72fe-559f-4ce4-9a89-d268049c2c15/image/87e75aa5-d94d-459b-8d99-770fdec04ced-up2f1.png
Figure 1: Two-link planar robot arm schematic, as illustrated in 23, Fig. 4.4

The Euler-Lagrange equations describing the dynamics of this robot can be written as

$$ M(\theta)\,\ddot{\theta} + C(\theta,\dot{\theta})\,\dot{\theta} + g(\theta) = \tau $$

where $\theta = [\theta_1, \theta_2]^T$ is the vector of joint angles, and the inertia matrix $M$, Christoffel matrix $C$, and gravity vector $g$ are

$$
M(\theta) = \begin{bmatrix}
I_1 + I_2 + m_1 r_1^2 + m_2 \left(L_1^2 + r_2^2\right) + 2 m_2 L_1 r_2 \cos\theta_2 & I_2 + m_2 r_2^2 + m_2 L_1 r_2 \cos\theta_2 \\
I_2 + m_2 r_2^2 + m_2 L_1 r_2 \cos\theta_2 & I_2 + m_2 r_2^2
\end{bmatrix}
$$

$$
C(\theta,\dot{\theta}) = \begin{bmatrix}
-m_2 L_1 r_2 \sin(\theta_2)\,\dot{\theta}_2 + b_1 & -m_2 L_1 r_2 \sin(\theta_2)\,(\dot{\theta}_1 + \dot{\theta}_2) \\
m_2 L_1 r_2 \sin(\theta_2)\,\dot{\theta}_1 & b_2
\end{bmatrix}
$$

$$
g(\theta) = \begin{bmatrix}
(m_1 r_1 + m_2 L_1)\, g \cos(\theta_1) + m_2 r_2\, g \cos(\theta_1 + \theta_2) \\
m_2 r_2\, g \cos(\theta_1 + \theta_2)
\end{bmatrix}
$$

The robot links are assumed to be slender rods, and the parameters chosen for the simulations are summarized in Table 1 .

Table 1: Robot arm parameters used in simulations.

Parameter    Value
L1, L2       0.25 m
r1, r2       0.125 m
m1, m2       0.5 kg
g            9.81 m/s^2
I1           m1 L1^2 / 12
I2           m2 L2^2 / 12
b1, b2       10^-1 N·m·s/rad

Using the Denavit-Hartenberg convention, the forward kinematic equations describing the Cartesian coordinates of the end-effector as functions of the individual joint angles can be written as

$$ x_e = L_1 \cos\theta_1 + L_2 \cos(\theta_1 + \theta_2) \qquad y_e = L_1 \sin\theta_1 + L_2 \sin(\theta_1 + \theta_2) $$

Finally, using a geometric approach, the inverse kinematic equations for this robot will be

$$ \theta_2 = \operatorname{atan2}\!\left(-\sqrt{1 - D^2},\; D\right) \qquad \theta_1 = \operatorname{atan2}\!\left(y_e,\, x_e\right) - \operatorname{atan2}\!\left(L_2 \sin\theta_2,\; L_1 + L_2 \cos\theta_2\right) $$

It should be noted that Equation 4 corresponds to the elbow-down solution of the inverse kinematic problem. The elbow-up solution can be obtained using

$$ \theta_2 = \operatorname{atan2}\!\left(\sqrt{1 - D^2},\; D\right) \qquad \theta_1 = \operatorname{atan2}\!\left(y_e,\, x_e\right) - \operatorname{atan2}\!\left(L_2 \sin\theta_2,\; L_1 + L_2 \cos\theta_2\right) $$

where the variable D is defined as

$$ D = \frac{x_e^2 + y_e^2 - L_1^2 - L_2^2}{2 L_1 L_2} $$

and can be used to investigate the reachability of the given end-effector coordinates: a target is reachable only if |D| ≤ 1. The atan2 function in Equation 4 and Equation 5 is used to account for the quadrant of the result. Note that the notation used for this function, i.e., atan2(y, x), complies with numerical software packages and differs from the notation used in some robotics textbooks 24.
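As a concrete check of these kinematic relations, the sketch below implements the elbow-up inverse kinematics (Equation 5) and the forward kinematics (Equation 3) using the link lengths of Table 1. This is illustrative code, not the authors' implementation, and the function names are ours.

```python
import numpy as np

L1, L2 = 0.25, 0.25  # link lengths from Table 1

def ik_elbow_up(xe, ye):
    """Elbow-up inverse kinematics (Equation 5); atan2(y, x) resolves the quadrant."""
    D = (xe**2 + ye**2 - L1**2 - L2**2) / (2 * L1 * L2)
    if abs(D) > 1:
        raise ValueError("target outside the workspace")  # reachability check on D
    th2 = np.arctan2(np.sqrt(1 - D**2), D)
    th1 = np.arctan2(ye, xe) - np.arctan2(L2 * np.sin(th2), L1 + L2 * np.cos(th2))
    return th1, th2

def fk(th1, th2):
    """Forward kinematics (Equation 3)."""
    xe = L1 * np.cos(th1) + L2 * np.cos(th1 + th2)
    ye = L1 * np.sin(th1) + L2 * np.sin(th1 + th2)
    return xe, ye

th1, th2 = ik_elbow_up(0.3, 0.2)  # a reachable target point
xr, yr = fk(th1, th2)             # round trip recovers the target
```

Running the forward kinematics on the inverse-kinematics output recovers the original target, a useful sanity check students can apply to either elbow solution.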

Closed-loop Simulation of the Robot Arm

In this section, the OSS are used to simulate the closed-loop performance of the 2-DOF robot arm introduced in Section 2. The robot is assumed to start from an initial state of $\theta(0) = [\theta_1(0),\, \theta_2(0),\, \dot{\theta}_1(0),\, \dot{\theta}_2(0)]^T = [\pi/2,\, 0,\, 0,\, 0]^T$. The forward kinematics in Equation 3 is then used to calculate the Cartesian location of the end-effector. Furthermore, it is assumed that the robot end-effector is to track a reference trajectory. Figure 2 shows the implementation flowchart of the closed-loop robot simulations used to track the reference trajectory. This structure, with some minor variations, is followed in all of the introduced software.

The implemented algorithm begins with an initialization stage where memory allocation occurs and the robot properties, solver options, desired path characteristics, and controller properties are specified. Next, the algorithm executes a “for” loop structure that spans the entire simulation duration. The desired robot trajectory is generated inside the loop such that the end-effector follows a linear path from its initial position until it reaches a previously defined circle, where it dwells for a certain period of time and then continues on to track the circle. Once the desired position of the end-effector on the reference trajectory is determined, the required joint angles are calculated using the robot inverse kinematics in Equation 4 . For simplicity, the actuator dynamics are ignored and the joint torques (assumed to be bounded between -10 and 10 N.m) are considered as the control inputs. A discrete PID controller is then used to calculate the joint torques needed to track the desired joint angles and, consequently, the desired end-effector position. Once the actuating torques have been obtained, the robot dynamics are simulated with an ODE solver, and the joint torques, state history, and simulation times are stored.
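The loop just described can be sketched in Python as follows. This is an illustrative skeleton only: the trajectory generation, inverse kinematics, and ODE solve are replaced by placeholders (a constant setpoint, a proportional-only stand-in for the PID law, and a forward-Euler update of toy double-integrator dynamics), and the step count and gain are assumed values, not the authors'.

```python
import numpy as np

# Illustrative skeleton of the closed-loop simulation flowchart (Figure 2).
dt = 0.01                                  # step time (assumed)
n = 100                                    # number of simulation steps (assumed)
state = np.zeros((4, n + 1))               # [th1, th2, th1dot, th2dot] history
state[:, 0] = [np.pi / 2, 0.0, 0.0, 0.0]   # initial state from Section 3
tau = np.zeros((2, n))                     # joint-torque history

for i in range(n):
    # trajectory generation + inverse kinematics would produce thd here
    thd = np.array([1.2, 0.3])             # placeholder desired joint angles
    e = thd - state[:2, i]                 # joint-angle tracking error
    tau[:, i] = np.clip(20.0 * e, -10.0, 10.0)  # P-only stand-in, saturated
    # stand-in for the ODE solve of the robot dynamics (double integrator)
    state[2:, i + 1] = state[2:, i] + dt * tau[:, i]
    state[:2, i + 1] = state[:2, i] + dt * state[2:, i]
```

The same init/loop/store skeleton carries over to each of the OSS discussed below; only the syntax of the array handling and the ODE call changes.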

The code structure here closely follows how a discrete controller is implemented in practice. Furthermore, PID controllers are extensively used in industry and academia. Therefore, familiarity with PID controller implementation using various OSS can be extremely beneficial for MRE students and professionals.

https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/23c985fb-3890-404d-a685-b75d990e3ab1image8.png
Figure 2: The flowchart of the closed-loop robot simulations.

Python

Considering the nonlinear and coupled nature of the robot dynamics in Equation 1 , they are solved directly using the solve_ivp command from the scipy.integrate library of Python. One possible way to use this command in simulating the robot dynamics is:

Figure 3: Python for Equation 1
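As the editorial note above explains, the embedded snippets are not rendered in this view. One plausible form of the solve_ivp call described here is sketched below, with a toy stand-in for the dynamics function (the actual model evaluates the matrices of Equation 1) and assumed values for the step time and torques.

```python
import numpy as np
from scipy.integrate import solve_ivp

def model(t, x, tau):
    """Toy stand-in for the robot state-space function; the actual model
    computes M, C, and g from Equation 1."""
    th, thdot = x[:2], x[2:]
    return np.concatenate((thdot, tau - thdot))  # placeholder dynamics

dt = 0.01                      # simulation step time (assumed value)
t, tNext = 0.0, dt             # start and end of one loop iteration
state0 = np.array([np.pi / 2, 0.0, 0.0, 0.0])  # 1-D initial state
tau = np.array([1.0, -1.0])    # joint torques from the controller
sol = solve_ivp(model, (t, tNext), state0, args=(tau,))
state_next = sol.y[:, -1]      # robot state at the end of this step
```

Calling solve_ivp over one short interval per loop iteration, as here, is what lets the torque be held constant across the step, mimicking a zero-order-hold discrete controller.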

It should be noted that the simulation of the robot dynamics is performed iteratively within a for loop over the entire duration of the simulation, i.e., time seconds, to facilitate the controller implementation. Therefore, t and tNext are the beginning and end of one iteration of the simulation with a step time of dt seconds. The variable state is used to denote the entire state vector of the robot, i.e., $[\theta_1,\, \theta_2,\, \dot{\theta}_1,\, \dot{\theta}_2]^T$. It is initialized as below before the beginning of the loop:

Figure 4:
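The initialization snippet is not rendered here; a plausible reconstruction, based on the description in the surrounding text (np.array for the individual vectors, np.concatenate for the state, and np.squeeze to obtain a 1-D initial condition), is:

```python
import numpy as np

# Plausible reconstruction of the state-vector initialization; variable
# names follow the text (th, thdot, state, state0).
th = np.array([[np.pi / 2], [0.0]])      # initial joint angles
thdot = np.array([[0.0], [0.0]])         # initial joint velocities
state = np.concatenate((th, thdot))      # 4x1 initial state vector

i = 0                                    # loop index
state0 = np.squeeze(state[:, i])         # 1-D initial condition for solve_ivp
```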

The variables θ (denoted by th) and dθ/dt (denoted by thdot) are initialized individually using the np.array command and then concatenated with the np.concatenate command to form the initial state vector. It should be noted that the initial state vector should be updated within each iteration of the loop, hence the command state0 = np.squeeze(state[:, i]). The squeeze command is needed to generate the initial state vector for integration at each step because the syntax of the solve_ivp command only accepts 1-D arrays.

The callable function model includes the Euler Lagrange equations in Equation 1 represented in a state-space form:

Figure 5:
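Since the snippet itself is not shown, one way the callable model might look is sketched below. The placeholder M_mat, C_mat, and g_mat (identity inertia, zero Coriolis and gravity) stand in for the functions described next; only the state-space structure is the point here.

```python
import numpy as np

def M_mat(th):
    return np.eye(2)                     # placeholder inertia matrix

def C_mat(th, thdot):
    return np.zeros((2, 2))              # placeholder Christoffel matrix

def g_mat(th):
    return np.zeros(2)                   # placeholder gravity vector

def model(t, x, tau):
    """State-space form of Equation 1:
    xdot = [thdot, M^-1 (tau - C thdot - g)]."""
    th, thdot = x[:2], x[2:]
    thddot = np.linalg.solve(M_mat(th), tau - C_mat(th, thdot) @ thdot - g_mat(th))
    return np.concatenate((thdot, thddot))
```

Note the use of np.linalg.solve rather than an explicit matrix inverse, the usual idiom for solving M(θ) θ̈ = τ − C θ̇ − g.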

where M_mat, C_mat, and g_mat are the system matrices defined in Section 2, implemented in separate functions. As an example, the Christoffel matrix C_mat is defined as:

Figure 6:
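With the snippet not rendered, a hedged reconstruction of C_mat is given below, using the parameter values of Table 1. The sign pattern follows the standard two-link Christoffel matrix, with the viscous joint damping b1, b2 on the diagonal as in the matrices of Section 2.

```python
import numpy as np

# Parameters from Table 1 (assumed hard-coded here for brevity).
m2, L1, r2 = 0.5, 0.25, 0.125
b1 = b2 = 0.1                            # joint damping, N·m·s/rad

def C_mat(th, thdot):
    """Christoffel matrix of the 2-DOF arm, including joint damping."""
    h = m2 * L1 * r2 * np.sin(th[1])
    return np.array([[-h * thdot[1] + b1, -h * (thdot[0] + thdot[1])],
                     [ h * thdot[0],       b2]])
```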

One of the main challenges for MRE professionals, especially those more familiar with Matlab, when using Python for dynamic system simulations is dealing with indices, array indexing, and array generation within loops. Python lists and arrays start from an index of 0, as opposed to Matlab, which starts from 1. Another challenge is that Python arrays typically lose a dimension when indexed. This can be observed when choosing the ith column of the input vector tau to pass as the input to the system at each iteration. The vector tau has a dimension (size) of 2 × (i+1) at the ith iteration. Therefore, the expression tau[:, i] should have a dimension of 2 × 1, whereas upon closer investigation, it can be seen that this expression is a 1-D variable with a size of (2, ). This discrepancy can be problematic in subsequent vector and matrix operations. Therefore, in this code, the custom function exp_dim, defined below, is used to expand the dimensions of the array.

Figure 7:
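A minimal sketch of what such a dimension-expanding helper does (the version in the paper's repository may differ in detail):

```python
import numpy as np

def exp_dim(arr):
    """Promote a 1-D array of shape (n,) to a 2-D column vector of shape (n, 1)."""
    return np.expand_dims(np.asarray(arr), axis=1)

tau = np.zeros((2, 5))
col = tau[:, 2]        # indexing drops a dimension: shape (2,)
col2d = exp_dim(col)   # restored to a proper column vector: shape (2, 1)
```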

Another challenge with Python is appending columns to a matrix within each loop iteration. Despite being very straightforward in Matlab, this functionality in Python requires commands such as np.column_stack. Once the state variables are updated at each iteration, the Cartesian position of the end-effector can be obtained using the forward kinematics (FK_fun):

Figure 8:

Note that in this code snippet, the expressions xe_act += [xe_new] and ye_act += [ye_new] are used to append to a list storing the actual end-effector position, another less-straightforward feature of Python. To use this syntax, the lists must be initialized as xe_act = [xe0] and ye_act = [ye0], where xe0 and ye0 are the initial coordinates of the robot end-effector.
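The column-appending idiom mentioned above can be sketched as follows (a small illustrative snippet, not taken from the paper's scripts):

```python
import numpy as np

A = np.array([[0.0], [1.0]])              # start from a single 2x1 column
for k in range(1, 4):
    new_col = np.array([float(k), k + 1.0])
    A = np.column_stack((A, new_col))     # the Matlab idiom A = [A, new_col]
```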

Finally, as mentioned earlier, the control input needed to ensure trajectory tracking is obtained using a PID controller. This controller is implemented within a for loop as below:

Figure 9:

where e is the joint angle tracking error, thd is the desired joint angles, E is an approximation of the error integral, edot is an approximation of the error derivative, tau is the control input, and tau_max and tau_min are the maximum (+10 N.m) and minimum (-10 N.m) bounds on the control input, respectively. The controller parameters, tuned to track the reference trajectory for the 2-DOF robot considered in this work, are:

Kp = [20, 0; 0, 20],   Kd = [2, 0; 0, 0.1],   Ki = [40, 0; 0, 40]
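A per-step sketch of this controller, using the gains and ±10 N.m torque bounds stated in the text (the paper's full loop is in the repository; this is an outline, not the paper's exact code):

```python
import numpy as np

# Controller gains from the text; torque limits are +/-10 N.m
Kp = np.diag([20.0, 20.0])
Kd = np.diag([2.0, 0.1])
Ki = np.diag([40.0, 40.0])
tau_min, tau_max = -10.0, 10.0

def pid_step(th, thd, e_prev, E, dt):
    """One discrete PID update for the joint torques, with saturation."""
    e = thd - th                          # joint angle tracking error
    E = E + e * dt                        # rectangular approximation of the error integral
    edot = (e - e_prev) / dt              # finite-difference approximation of the derivative
    tau = Kp @ e + Ki @ E + Kd @ edot
    tau = np.clip(tau, tau_min, tau_max)  # enforce actuator torque bounds
    return tau, e, E
```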
https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/23c985fb-3890-404d-a685-b75d990e3ab1image11.png
Figure 10: Desired versus actual joint angles for the 2-DOF robot.
https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/23c985fb-3890-404d-a685-b75d990e3ab1image10.png
Figure 11: Control input for trajectory tracking of the 2-DOF robot.
https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/23c985fb-3890-404d-a685-b75d990e3ab1image12.png
Figure 12: Desired versus actual trajectory for the 2-DOF robot.

The simulation results of the controller implementation can be seen in Figure 10, Figure 11, and Figure 12. The red dotted lines represent the desired signals, whereas the blue lines are the actual values. The initial transient is caused by the instantaneous, rather than gradual, change in velocity at the start and end of the profile. The errors along the path result from a combination of the PID gain tuning and how fast the profile is traversed.

GNU Octave

In Octave, the for loop needed for the simulations is implemented as below:

Figure 13:

The variable state is used to denote the entire state vector of the robot, i.e. [θ, dθ/dt]T, where θ = [θ1, θ2]T and dθ/dt = [dθ1/dt, dθ2/dt]T, for all simulation times. The variables θ, θd, and dθ/dt are denoted by th, thd, and thdot, respectively.

The instance method model from the user-defined TwoLink object named RR implements the Euler Lagrange equations in Equation 1 represented in the state-space form. The model syntax is:

Figure 14:

where obj refers to an instance of the TwoLink class, and M_mat, C_mat, and g_mat are the system matrices in Eq. , defined in separate functions. As an example, the Christoffel matrix C_mat is defined as:

Figure 15:

Finally, as previously mentioned, the control input needed to ensure trajectory tracking is obtained using a PID controller on each of the robot joints as below:

Figure 16:

where the PIDController objects theta1_Cntrl and theta2_Cntrl evaluate the instance method PIDStep to compute the torque needed at the start of the simulation interval to ensure good tracking. The PIDStep method is implemented as:

Figure 17:

where obj is an instance of the PIDController class, procVar is the process variable, setPoint is the desired set point, dt is the sample time, err is the tracking error, Err is an approximation of the error integral, errDot is an approximation of the error derivative, u is the control input, and uMin and uMax are the lower and upper bounds on the control input. The controller bounds and parameters are chosen as before. In the Octave implementation, an object-oriented programming approach was used to keep the main program simple and modular. The simulation results are similar to those presented in Subsection 3.1.
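The structure of such a PIDController class, transliterated to Python for readers following the Python section (a sketch of the object-oriented pattern described; the Octave version in the repository is the reference):

```python
class PIDController:
    """Discrete PID with output saturation, mirroring the PIDStep structure."""

    def __init__(self, kp, ki, kd, u_min, u_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.u_min, self.u_max = u_min, u_max
        self.Err = 0.0        # approximation of the error integral
        self.err_prev = 0.0   # previous error, for the finite-difference derivative

    def pid_step(self, proc_var, set_point, dt):
        err = set_point - proc_var
        self.Err += err * dt
        err_dot = (err - self.err_prev) / dt
        self.err_prev = err
        u = self.kp * err + self.ki * self.Err + self.kd * err_dot
        return max(self.u_min, min(self.u_max, u))  # saturate to [uMin, uMax]
```

One such object per joint, each called once per sample time, reproduces the structure used in the Octave main program.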

Modelica

Equation Mode

The equations for the two-link robot are in matrix form so this problem takes advantage of Modelica’s matrix facilities. The state variables are the two joint angles and the two joint angular velocities which are defined as:

Figure 18:

where D2R is the conversion factor from degrees to radians. As before, the start values are the system initial conditions. The M, C and G matrices are defined as:

Figure 19:

The equation section includes the basic matrix equations and the term-by-term computation of the elements of the matrices (these depend on the angles and angular velocities, so they cannot be treated as constants):

Figure 20:

Note that the first equation makes very clear that these are equations and not computing statements.

Two different methods of implementing feedback control are used in Modelica: one case uses an external C function for the PID control and the other an internal Modelica class. All feedback control was implemented as discrete-time control, corresponding to typical computer-based control.

The control in this model was implemented using an external C language module. This shows how straightforward it is to interface C code with Modelica models.

To connect to a function in C, a function declaration is made in the first (definitions) section of the Modelica code:

Figure 21:

and

Figure 22:

for the initialization function and the function that operates each sample time. The downside of this facility is that the link to the C-file must be done using a fully qualified path, making the program no longer portable.

The control function can be called explicitly in the 'algorithm' section of the Modelica program. This section allows for algorithmic statements (that is, ordinary computing statements, conditionals, etc.). In this case, sample-time, discrete control is set up using a 'when' loop with a sampling algorithm:

Figure 23:

The use of := for these algorithmic statements distinguishes them from equation statements which use the = sign.

As mentioned earlier, the path for the robot end-effector to take is broken into three parts:

  • move from the initial robot position (normally straight up) to the beginning of the “production” path,

  • hold (dwell) briefly at that position, and

  • follow a circular path.

The code to do this is in the algorithm section where the three dots (…) are, above. The inverse kinematics is coded, also as an algorithm, in a separate function, InvKin().

The path following code is:

Figure 24:
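In outline, the three-phase supervisory logic can be sketched in Python as follows (the timing and geometry values are illustrative assumptions, not the paper's parameters):

```python
import math

def setpoint(t, t_move=1.0, t_dwell=0.5, T_circle=4.0,
             start=(0.0, 1.0), xc=0.4, yc=0.4, r=0.1):
    """Three-phase end-effector setpoint: approach, dwell, circle.
    All numeric values here are illustrative placeholders."""
    x0, y0 = xc + r, yc                       # entry point on the circle
    if t < t_move:                            # phase 1: move from the initial pose
        s = t / t_move
        return (start[0] + s * (x0 - start[0]),
                start[1] + s * (y0 - start[1]))
    elif t < t_move + t_dwell:                # phase 2: dwell at the entry point
        return (x0, y0)
    else:                                     # phase 3: follow the circular path
        phi = 2 * math.pi * (t - t_move - t_dwell) / T_circle
        return (xc + r * math.cos(phi), yc + r * math.sin(phi))
```

Each Cartesian setpoint is then converted to joint angles through the inverse kinematics, as in InvKin().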

These simulation results are identical to the results presented in Subsection 3.1.

Graphical Modeling Mode

The double pendulum/two-link robot is made up of revolute joints and rigid-body objects. Although this problem is posed as a two-dimensional problem (the equation-mode solutions use the two-dimensional solution), this model is actually a full three-dimensional model. By fixing the first revolute joint to a mechanical ground, it can only move in two dimensions; but if it were instead attached to a moving object, such as a turntable, the three-dimensional dynamics would be fully accounted for. The model for the two-link robot is shown in Figure 25.

https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/23c985fb-3890-404d-a685-b75d990e3ab1image13.png
Figure 25: The 2-DOF robot (double pendulum) model in Modelica.

The initial condition for the arm is pointing straight up, i.e. joint 1 at 90° and joint 2 at 0°, as can be seen in Figure 26 .

https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/23c985fb-3890-404d-a685-b75d990e3ab1image14.png
Figure 26: Initial configuration of the 2-DOF robot arm drawn in Modelica.

OpenModelica includes animation capabilities for mechanical system simulations. The animation for this problem can be found at the GitHub repository for the paper [25].

The Modelica model done in graphical mode uses a different means to implement the PID control. In this case, a Modelica class is defined for the PID. The algorithm is implemented in the same algorithm-when-sample structure used above:

Figure 27:

The wrinkle here is that this executes autonomously under internal control of Modelica. The ‘sample’ function sets up an internal event that is controlled by the Modelica execution module. The PID objects are defined in the ordinary manner:

Figure 28:

But the question is how to get data to and from them at the proper times. Modelica does have some synchronization facilities, but a simpler, although probably less efficient, approach was chosen here: put all of the interactions with the controller into the equation section, which operates "continuously":

Figure 29:

This assures that the controller has the most up-to-date input data (setpoint and process value) and the system simulation has the most up-to-date controller output (torque command).

The supervisory code for path generation is almost the same as the code used above, except that it too operates in the ‘equation’ section.

This case produces the same result as above, but also generates an animation, which can be found at [25].

Java

As noted above, the two-link robot equations are best re-organized for use by conventional ODE solvers. That version of the equations isolates the derivatives by inverting the 'M' matrix (which is 2×2, so easily inverted explicitly). This program is structured in the same way as the DC motor program, so the only section of interest is the ComputeDerivates() section. Java does not have any built-in support for matrices. Although matrix packages do exist, for this problem the matrix manipulations were written out explicitly (again, the maximum matrix size is 2×2). WARNING: in viewing this code, note that in the matrix computations Java uses base-0 indexing, while Modelica (and standard matrix notation) uses base-1 indexing. The computation of derivatives thus looks like:

Figure 30:
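Independent of language, this re-organization amounts to solving M(θ)·ddθ = τ − C·dθ/dt − G with an explicit 2×2 inverse. A Python sketch of that step (the matrix entries and right-hand side are assumed to have been computed from the current state):

```python
def solve_2x2(m11, m12, m21, m22, rhs1, rhs2):
    """Explicitly invert a 2x2 matrix M to isolate the accelerations in M @ ddth = rhs."""
    det = m11 * m22 - m12 * m21   # assumed nonsingular (M is an inertia matrix)
    ddth1 = ( m22 * rhs1 - m12 * rhs2) / det
    ddth2 = (-m21 * rhs1 + m11 * rhs2) / det
    return ddth1, ddth2
```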

The Java solution uses almost the same code as the Modelica solutions for the PID control and the path (supervisory) setpoint generation. The PID control is implemented as a full Java class, which makes it easier to work with than the equivalent Modelica solutions. Otherwise, examination of the Java code shows very similar code sections to the Modelica code and, of course, produces the same result. Gnuplot is used for plotting, and the corresponding script is included in the GitHub repository [25].

Gazebo/ROS

Gazebo is the default simulator used by Robot Operating System (ROS) developers in both academia and industry for simulation-based prototyping and evaluation. This is because Gazebo was designed with robust integration into the ROS framework, enabling an easy communication interface using standard ROS methods such as topics and services [26]. Within the ROS framework, Gazebo can be used as a node which handles the physics-based interaction between rigid bodies, the environment, sensors, etc. In this section, the robot modeling, control, and ROS communication specifics to achieve the desired task for the two-DOF robot are discussed.

Robot Modeling

Robot models in Gazebo are defined by a tree structure of interconnected rigid bodies. The rigid body parameters are defined using XML-based formats such as Simulation Description Format (SDF) or Unified Robot Description Format (URDF). The URDF is native to ROS and thus is the more prominent of the two formats when operating in the ROS framework. The URDF provides definitions for the robot links (inertial properties, collision and visual properties), joints (kinematic and dynamic properties), transmission and, with added Gazebo tags, control plugins and geometric materials.

To create a new model, Gazebo provides a model editor which offers simple geometric shapes such as cylinders, spheres, and cubes. One may compose the model graphically using the model editor or programmatically using the URDF. Gazebo also allows custom 3D meshes to be imported. In this work, the two-link robot model (with a stand) was created in URDF using Gazebo-defined cylindrical shapes (see Figure 31), following the previously defined robot parameters.

https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/23c985fb-3890-404d-a685-b75d990e3ab1image15.jpg
Figure 31: Two-DOF robot model spawned in Gazebo.

Robot Control

The most common way of implementing closed-loop control of a robot model in the Gazebo-ROS environment is via the ROS Control package [27]. The ROS Control package is a set of controller plugins which process joint state data and desired input data to determine the expected actuation output. The package provides various types of controllers: effort-based, position-based, velocity-based, state controllers, etc. In this work, two types of effort-based controllers and one state controller are used for the task:

  • effort_controllers/JointPositionController: tracks individually commanded joint positions.

  • effort_controllers/JointTrajectoryController: tracks commanded joint trajectories.

  • joint_state_controller/JointStateController: publishes the states of all joints in the model.

The effort-based controllers compute the desired forces/torques for the joints based on the state of the model and the desired behavior. These controllers are PID-based, and their parameters are defined in a control configuration file.

Framework for simulating the trajectory tracking task

As described above, ROS operates using a node framework where each node is a distinct software program or process, communicating information (messages) with other nodes via topics. In this paper, five active nodes are adopted, as graphically illustrated in Figure 32. These nodes include:

  • Gazebo simulator (/gazebo): In ROS, the Gazebo simulator is spawned as a node which handles all the processes of physics rendering, visualization, etc. This is an existing node in this framework. The created model (defined in a URDF file) is spawned in this simulator and interacts with its environment. The /gazebo node subscribes to controller commands to actuate the robot model and then publishes the state of the entire simulation, especially the robot states (/joint_states), continuously in simulation time.

  • Circle-drawing program (/draw_circle): This is a custom node written by the authors which follows the pseudo code in Subsection 3.5.4. The /draw_circle node is written in Python using the rospy package. It essentially initializes the circle parameters, computes the joint position or joint trajectory to track (using inverse kinematics), and publishes this as desired input to the respective controller command topics.

  • Robot state publisher (/robot_state_publisher): This is an existing ROS node which processes robot joint states to determine robot link/joint frame transformations. Thus, it publishes the robot frame transformation data on a topic called /tf [28].

  • Data recording program (/data_recorder): This is a program written to read robot data (joint states from /joint_states and robot link pose/transform from /tf) and arrange them into a convenient array for post-simulation analysis and storage. This node then publishes the arranged data onto a custom topic called /data_log.

  • Record data (/rosbag_record): This is a common existing tool which enables convenient recording and storage of data available in the ROS communication pipeline in a unique file called a rosbag [29].

Draw_circle pseudo code

  • Initialize the ROS node, subscribers and publisher objects

  • Initialize the circle parameters (radius, starting position, durations: N1, N2)

  • Compute the joint configuration (q_init) for the circle starting position

  • Publish q_init to the joint_position_controller_command topic

  • Wait N1 seconds

  • Switch the controller from a joint_position_controller to a joint_trajectory_controller

  • Compute the joint trajectory (q_traj) for completing the circular path

  • Publish q_traj to the joint_trajectory_controller

  • Wait N2 seconds

  • End
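The "compute the joint configuration/trajectory" steps above rely on the closed-form inverse kinematics of the planar two-link arm. A Python sketch of those computations (the link lengths and circle parameters are illustrative, not the paper's values):

```python
import math

def two_link_ik(x, y, l1=0.5, l2=0.5, elbow_up=True):
    """Closed-form inverse kinematics for a planar 2-link arm (illustrative link lengths)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))   # guard against round-off leaving [-1, 1]
    s2 = math.sqrt(1.0 - c2 * c2) if elbow_up else -math.sqrt(1.0 - c2 * c2)
    th2 = math.atan2(s2, c2)
    th1 = math.atan2(y, x) - math.atan2(l2 * s2, l1 + l2 * c2)
    return th1, th2

def circle_q_traj(xc, yc, r, n):
    """Joint-space waypoints (q_traj) for a circular end-effector path."""
    return [two_link_ik(xc + r * math.cos(2 * math.pi * k / n),
                        yc + r * math.sin(2 * math.pi * k / n)) for k in range(n)]
```

In the actual node, q_init would be two_link_ik evaluated at the circle starting position, and q_traj a list like the one returned by circle_q_traj, published as a joint trajectory message.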

https://s3-us-west-2.amazonaws.com/typeset-prod-media-server/23c985fb-3890-404d-a685-b75d990e3ab1image16.jpg
Figure 32: Schematic of the interconnections between ROS nodes and topics.

Discussions

This two-part paper demonstrates the use of OSS such as Python, GNU Octave, Modelica, Java, and Gazebo in the context of MRE education through several simulation showcases. These software packages offer numerous advantages, such as free accessibility, customizability, and wide online community support. However, each of these platforms also has its own limitations and challenges. This section reviews some of the potentials and challenges of these packages in the context of the problems introduced in this paper. Other challenges facing the adoption of OSS are listed in Part I of this paper, as documented through feedback from community members. The goal of the paper is that the application showcases and the review of the potentials and limitations of each OSS will allow MRE educators to make an informed judgment about the choice of suitable software for their courses and, consequently, facilitate a wider adoption of these tools.

Python is a general-purpose programming language which is increasingly being used in various applications and industries; therefore, familiarizing students with Python programming can open up many future opportunities for them. Python also has a large online support community which can be helpful for troubleshooting and debugging purposes. Despite the community shift towards Python 3, there are still some packages and applications written for or compatible with Python 2, which can cause confusion. As for developing and executing Python programs, although a combination of a text editor and command prompt can be used, freely available Integrated Development Environments (IDEs) such as Spyder [30] provide a more straightforward interface for Python beginners. Installing Python packages can also be challenging at times. As an example, the easiest way to install and integrate the Python Control Systems library with Spyder on Windows is through 'pip', despite the various methods proposed online and outlined in [31]. The choice of other operating systems can further complicate these issues. As for the application of Python in the context of MRE, the Python Control Systems library and its Matlab compatibility module can ease entry into using Python; however, there are still some discrepancies when it comes to generating, indexing, and slicing arrays and matrices and working with them within loops. Online tutorials and articles written by MRE professionals, such as this paper, can help bridge the gap between Python and Matlab. As for data visualization, although Python plots generated with matplotlib might not be as interactive as Matlab plots, the matplotlib library provides numerous options for customizing plots. In summary, Python and the vast collection of its packages are a feasible and beneficial solution to integrate into MRE education.

The main advantage of using Octave is its strong compatibility with MATLAB, which allows for greater portability and sharing of programs between the two platforms and nearly eliminates the time required to learn Octave for users with prior MATLAB experience. The open-source nature and availability of the source code also allow tinkerers to experiment, customize, and develop different features. Another advantage of Octave is that it allows for C-style auto-increment and assignment operators like i++ and ++i. It also allows exponentiation using both ^ and **. However, since Octave is not a commercial product, it does not yet have all the built-in functionality and toolbox capabilities of MATLAB, due to resource and funding limitations. An example is the lack of LaTeX support for displaying equations on plots. Octave does, however, support a subset of TeX functionality which allows for the display of Greek characters, special glyphs, and mathematical symbols. Despite the existence of the Octave Control package, it lacks functionality such as controlSystemDesigner (previously known as the SISO tool) for control system design and analysis. Furthermore, Octave's user interface and debugging tools are not as mature as MATLAB's. Another limitation of Octave is that it does not have a Simulink-like graphical programming environment for simulating and analyzing multi-domain dynamical systems.

The enormous advantage in using Modelica is its focus on physical systems including a large number of libraries for simulating systems containing several energy media. For the MRE world, the core library is the Multibody Mechanics library. This library does three-dimensional, rigid-body dynamics, which, when combined with the Rotational Mechanics and Translational Mechanics libraries, can be used to model a wide variety of mechanical systems. The Electrical, Magnetic, Fluid, and Thermal libraries can interact with the Mechanics libraries to allow simulation of complete systems. In the examples above, the DC motor uses a combination of the Electrical and Rotational Mechanics libraries and the two-link robot uses the Multibody and Rotational libraries. The major disadvantages of Modelica are that it is an entirely different syntax and methodology to learn and its execution times can be slow. For example, there is a noticeable compile time for a small problem in OpenModelica as compared to a compile time that is too short to notice for Java. An institutional advantage to using Modelica is that while educational institutions can very effectively use the open-source OpenModelica version, students entering industry and research labs will often be able to easily transfer their skills to one of the commercial versions.

Java offers the opportunity of a well-structured object-oriented language with efficient execution and very good portability properties. These properties can be used to advantage when dealing with large, complex problems. The obvious disadvantage is that Java is a full-blown programming language so is most useful to people who spend a good part of their lives doing programming. Java does not have native support for advanced numerical mathematics, but packages such as Hipparchus, as used in the above examples, fill that gap nicely.

Compared to the other platforms described above, Gazebo provides MRE instructors and students an avenue to evaluate the integration of a full-fledged robot system in simulation: from prototyping controllers to simulating virtual actuators and sensors on a realistic virtual model of the physical robot. This level of abstraction enables students to learn how integrated robot systems are designed in the real world. Gazebo is particularly well suited for this because of its native compatibility with ROS, which is the most widely used middleware for robotics in research and industry. However, instructors and students new to robotics may struggle with the high technical overhead required to use the software effectively. For instance, Gazebo (integrally operated with ROS) most commonly runs on Linux (although Gazebo supports Windows), which may be unfamiliar, especially to students. Also, although Gazebo provides model editors to create and modify simple robot models using a GUI, users often require knowledge of SDF and URDF to work with more complex robot models. This may also be a challenge for novice students and instructors.

Summary, Conclusions, and Future Work

This paper is the second part of a study focused on promoting the application of OSS in MRE education. In this paper, a 2-DOF robot arm is used as a showcase to demonstrate the application of OSS in the implementation of a PID controller to achieve trajectory tracking for the robot end-effector. The design and implementation of PID controllers are skills that every MRE graduate should master, which, as shown in this paper, can readily be achieved with OSS. Furthermore, such implementation can also expose students to the development details of closed-loop control systems. Important code snippets are given and discussed in the paper, and the full scripts are made available in the GitHub repository of the paper, along with Matlab scripts intended to serve as a point of comparison. This two-part paper can provide a comprehensive guide for the utilization of various OSS in the simulation, analysis, and control design and implementation of MRE systems. MRE students, instructors, and professionals can choose one or more sections of this paper to learn the application of their software of choice in the design and development of MRE systems. This paper and similar works from MRE professionals can further promote the use of these tools and enable the MRE community to reap the numerous benefits of OSS.

References

The post Use of Open-source Software in Mechatronics and Robotics Engineering Education – Part II : Controller Implementation appeared first on ASEE Computers in Education Journal.
