Qualifying Exams

Ever since I found out what qualifying exams were, I have been absolutely terrified of them. I remember being an undergrad, listening to the grad students from my research group and my TA sections talk about “that test you have to take after the first couple years where you can be tested on literally anything in your field and if you fail, you get kicked out of grad school lol,” and, as someone with low-to-medium-key test anxiety, it sounded like my personal kind of hell. Even after making it through the grad school application process, my entire future rested on a few hours and a few pieces of paper?

Our written quals are subject-based. We have five core courses: Deterministic Models in Biology; Modeling in Biology: Structure, Function, and Evolution; Stochastic Modeling in Biology; Biomedical Data Analysis; and Computational Algorithms. The qualifying exams for those subjects are offered at the end of August each year. Each subject exam can be assigned a PhD pass, a Master’s pass (a slightly lower level), or no pass. In order to pass the overall comprehensive exams and remain in the program, a student must get at least three PhD-level passes and one Master’s-level pass. Each student gets two tries to get the required number of passes.

It might seem like these qualifying exams are just like final exams, since, after all, they are single exams covering just five 10-week courses, right? Wrong! What I quickly learned when I entered grad school was how much all of these courses built on years and years of knowledge from high school and undergrad mathematics, and how much of that was assumed background required to even begin to comprehend the lectures. I realized how kind my undergrad professors and TAs had been in taking the time to rehash material from basic algebra 2, trigonometry, and differential equations in office hours to help us understand more difficult material. I missed the warm embrace of assumed ignorance, as my graduate school professors were surprised, disappointed, and in some cases even mortally offended if students showed the slightest sign of rustiness with material we should have learned in our undergrad probability theory, numerical linear algebra, and complex analysis courses. It was intimidating, to say the very least, and I certainly did not pick up all of the material from the lectures the first time around. On top of reviewing all my undergrad course notes and textbooks and completing the core course problem sets on time, there was so much to do and so much to learn during the quarter: research, preparing for group meetings, and neuroscience electives. Throughout the year, the prospect of qualifying exams loomed over my head. To put it in the most graceful and delicate way possible, I was terrified because I didn’t know s***.

During the first part of the summer, before my San Diego Pride trip, I spent my days in the lab, partly working on research and partly reviewing and rewriting all my notes from the lectures and textbooks. After the trip, and after recovering for a few days, I collected myself (physically and emotionally) and gathered all the books and notes from undergrad that I thought would be useful for decoding the notes I had spent the first half of the summer writing. First, I went through all the problem sets I had already done during the quarter. Looking at the solutions I had written up (most of which I had forgotten by this point), I tried to recall the theorems from undergrad courses I had used, and the corresponding textbooks with more detailed information I could review. After finding these textbooks scattered around the various bookshelves in the house, I went through the sections I thought would be useful. Below is a stack of the textbooks I used during this process.

IMG_2345

Personally, I found that the biggest gaps in my knowledge were in probability and linear algebra, as my Stochastic Modeling and Computational Algorithms classes (both taught by the same professor) took a lot of the theorems and proofs I learned in those courses for granted.

Something I really came to appreciate through studying for these exams was the sheer intellectual brilliance of the professor who taught my Stochastic Modeling and Computational Algorithms courses. He had written textbooks for these courses, and I am ashamed to admit that during the classes, I had skipped over many of the proofs and examples in the books. A fourth-year student in my department and in my lab, one of the few people who entered my program with more of a biology background than a math background, shared some advice on passing this professor’s exams as someone with less confidence in their mathematical abilities: “Read all the examples and proofs in the textbook. Make sure you can understand how he got to the conclusions. His books are very dense and compact, and he skips a lot of steps. Make sure you know how to fill in the gaps.” This seemed like a daunting task, but this older student (bless his soul) also provided me with a 75-page stack of notes on the textbook examples from when he was studying for quals three years ago, in which he had filled in the gaps, and I could use them as a reference in case I got stuck. For example, my professor used the binomial theorem and Taylor expansion approximations to condense a lot of the equations, steps that weren’t immediately obvious at first glance. I didn’t get through the textbooks cover to cover, but I did get through a significant portion of the chapters that were emphasized in the courses, and in the end, I felt like a stronger applied mathematician. My eye had gotten better at recognizing when to use these little tricks to simplify and approximate expressions.
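To give a flavor of the kind of condensation I mean (my own illustrative example, not one from his textbooks): the probability of zero events in n independent trials is often collapsed by rewriting the binomial expression as an exponential, Taylor-expanding the logarithm, and dropping higher-order terms when p is small:

```latex
(1-p)^n \;=\; e^{\,n \ln(1-p)}
        \;=\; e^{\,-n\left(p \,+\, \frac{p^2}{2} \,+\, \cdots\right)}
        \;\approx\; e^{-np}, \qquad p \ll 1 .
```

Spotting when a line in the book silently makes a jump like this was exactly the “filling in the gaps” skill the older student was describing.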

One of the courses, Structure, Function, and Evolution, was taught by my own PI, which meant it would be especially important for me to pass this particular subject, since I do want to remain in his lab. The interesting thing about his course is that it was not the most mathematically challenging, although there were some complicated PDEs once we started talking about diffusion and population genetics. During the quarter, when we had problem sets due, his office hours would be completely full of students from the course probing him about what exactly he was trying to ask and trying to decode his convoluted wording. According to the older students in our program, the main difficulty of his exam was interpreting the questions. After looking at some past exams, I noticed some general themes: he tended to ask questions that bridged concepts from earlier in the class, like network theory and geometry, with later concepts in population genetics.

Biomedical Data Analysis was the exam I felt least prepared for: during the class, we had focused a lot on using R to extract statistical parameters from datasets and fit models to the data, but not much on deriving statistical results. Since I did not have much of a statistics background in undergrad, I felt even more overwhelmed by the unfamiliar vocabulary that my professor assumed we had learned in kindergarten. I studied for this one by making heavy use of statistics videos on YouTube, which proved more useful for my conceptual understanding than our textbook. In addition, our professor was kind enough to host a review session at the beginning of August, which cleared up some of my confusion.

The older students in our department had told us that very few students had ever obtained PhD-level passes on one of the exams, the Deterministic Models in Biology course. It was taught by a notoriously tough professor with a background in physics and a joint appointment in the Mathematics department, and many of the homework problems he assigned didn’t even have analytical solutions. Since my main goal was to stay in the program and I only had a couple of months to study for these exams, I followed the older students’ advice and spent the majority of my time on the other subjects, leaving only a couple of days for that one.

Before August, I had spent most of my time studying at home, since I did not want to lug all my undergrad textbooks around campus. However, for exams as intense as these, I believe it can be very helpful to study with others to get feedback and test your understanding. During August, I spent a lot of time studying with my classmate Janet* (name changed for privacy). She is the only other PhD student in my year; she already has a medical degree and studied in China, where early math education is much more advanced, so I definitely worried that our study groups would end up being her incredible brain carrying a lot of my dead weight. I had declared my math major late, and didn’t know what proof by induction was until the latter half of my second-to-last year of college. Meanwhile, she already knew how to apply proof by induction while crawling out of the womb (okay, this *might* be a *slight* exaggeration, but truly not that far off). However, I think we got a really good, productive, mutually beneficial flow going when we studied together in August.

At first, I spent whole days studying in the office, but after a while, I realized that being around the older students stressed me out more than it helped. One of my pet peeves was when they would try to quiz me on random facts from some of the courses, shouting things like “Hey, quick, under what conditions can you add the exponents when multiplying matrix exponentials? When the matrices commute, duh! Those are easy points you’re missing!” Of course, I couldn’t conjure these facts on the spot, but I felt that I knew more than my blank looks suggested, because after thinking about it for a moment, I could even produce a proof of that fact. Although these students were well-intentioned, I knew what worked best for me, and it was not being holed up in the office all day, subject to stressful banter that left me feeling discouraged about my prospects for the exams.
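That matrix exponential fact is easy to check numerically, by the way. Here is a minimal sketch using NumPy and SciPy (my own toy matrices, not anything from our course materials):

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

# Two commuting matrices: B is a polynomial in A, so AB = BA.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
B = 2.0 * A + np.eye(2)
assert np.allclose(A @ B, B @ A)  # they commute

# When AB = BA, exp(A) exp(B) = exp(A + B).
print(np.allclose(expm(A) @ expm(B), expm(A + B)))  # True

# A non-commuting pair: the identity generally fails.
C = np.array([[0.0, 1.0], [0.0, 0.0]])
D = np.array([[0.0, 0.0], [1.0, 0.0]])
print(np.allclose(expm(C) @ expm(D), expm(C + D)))  # False
```

Not that I could have run SciPy in the exam room, but seeing the counterexample fail made the condition stick in my memory.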

This roadblock turned out to be a blessing in disguise, because I soon fell into an easy routine: arriving on campus around 7 am and reviewing past exams in the Biomedical Library until 3 in the afternoon. From around 3 to 5 pm, I would go up to the office to discuss these exams with Janet. Though she helped a lot with the more probability-theory-heavy problems, I was also able to help her decode my PI’s convoluted wording on his past exams, given the language barrier. Plus, after working for my PI for almost a year, I had a sense of how his brain worked and the kinds of questions he was asking. I was glad that I could contribute to these study sessions as well as gain from them. I think this process improved my work ethic and anxiety management, forced me to review individual undergrad courses and bring them together in ways I didn’t know existed, and improved my confidence in problem-solving. Something I think about a lot is how in undergrad, I took a variety of applied math courses but learned mathematics mostly from a theoretical perspective, without truly understanding how to apply what I learned to research. Studying these five courses in depth helped me understand not only what mathematical tools are available, but how to use them in real biomedical problems and why they’re important.

Finally, at the end of August, the exams began. We had three spaced-out exam days, with two exams on each of the first two days. The first day was Stochastic Modeling in the morning and Computational Algorithms in the afternoon; the second day was Structure, Function, and Evolution in the morning and Biomedical Data Analysis in the afternoon; the last day was just Deterministic Models in the afternoon. For each session, we got a 30-minute reading period, during which we could read the exam questions and ask the professors for clarification about the wording. Then, the three of us were split off into three separate rooms on the floor. I was assigned the classroom where most of our courses had taken place, which was encouraging because I had read research claiming that recall during exams can be enhanced if the exam takes place in the same room where learning occurred (to be fair, though, most of my learning had occurred over the summer at home, in the Biomedical Library, and in the office rather than in the classroom). We were allowed to eat and drink during the exams, and the older students were very nice and brought us chocolates and water on the day of our first exam.

I will admit that after every single exam, I felt terrible and slightly violated, none more so than after the last exam, for which I didn’t even finish half of the questions. The good thing is that for a lot of the exams, it was not necessary to answer every question to completion to get a PhD pass; it was more important to show how we were thinking – something I had been trained to do since elementary school math (“show your work!” is permanently etched in my brain).

After the exams, I took a yoga class with one of my college friends, ran a lot, swam a lot, bought all my textbooks, binders, and notebook paper and replenished my pencils for fall classes, worked on my poster for a quantitative and computational biology retreat where I’m presenting at the end of September, and went to a Diversity in STEM Conference in Irvine, where I got to catch up with a friend who is a PhD student there. It was busy, but I needed to keep busy so I wouldn’t dwell on my anxiety about the results.

A week later, much earlier than I was expecting, I got the results: a PhD-level pass in every subject except Deterministic Models in Biology, where I got no pass. I learned that Janet also got no pass on that exam, and since she’s one of the smartest people I know, in a twisted way it made me feel a little validated that it’s not like only dumb people get a “no pass” or something! (I’m saying this slightly in jest, as I do recognize it as a toxic thought, but it will take some more time to train myself out of these thoughts.) I will be taking a two-quarter sequence in Mathematical Physics in the Physics department this coming year, so hopefully I will fill some of the gaps in my knowledge on that side of biomathematics. Overall, I’m pretty happy with my results in all the other subjects, thrilled that I get to stay in the program and continue the project I’ve been working on, looking forward to my last year of courses – all very interesting electives I chose for their relevance to my research – and very much looking forward to meeting all the new grad students in my department (there are five, mostly other women, by the way, which makes me happy).

This coming Monday is our department orientation for the new students, and at noon there is a potluck where everyone from the department meets them. I remember last year, when I was a first-year coming into the department, the Vice Chair announced that both second-years had passed their qualifying exams. It might seem silly, but amid my pre-exam anxiety and habitual catastrophic thinking, I remember worrying that if I didn’t pass my quals, it would be announced to all the new entering students, and then I would have to go through this same process again with the first-years next summer. I’m very relieved this will not be the case.

Overall, although I know I will probably forget most of what I learned this summer, it was helpful to get a broad view of the vast breadth of tools in applied mathematics – knowledge that I will be building on this year in my applied math and physics electives. I might not remember the details of how to solve every type of problem by hand, but knowing in general what kinds of tools are available will, I believe, make me better informed and better able to come up with ideas for tackling new problems in my research in the years to come. The details are things I can learn on the fly, as needed.

April 2019 Grad School Life Updates

I originally planned to update this blog every week or so during school, but as soon as the quarter started, things got super busy and it was easy to put this off. Hopefully, I will be better about it this quarter!

To give some background, ever since I started thinking about applying to grad programs, I knew that I wanted to come to my school and program, Biomathematics. I did a lot of research on different aspects of the programs, and even more after I was invited to the interview weekends. I chose this place based on a lot of factors, including academic fit, future goals, advisors, general feel of the program, location, and LGBTQ+ friendliness of the campus.

The program has been wonderful so far and has even surpassed my expectations. It is a pretty tiny program, only 15 grad students total, so the classes are very small and everyone in the program knows each other. Every Thursday, the grad students, some of the students who work for our professors but are from neighboring departments such as Math and Biostatistics, and postdocs all go to a nearby bar, Barney’s, for “pub night”, where they basically drink beer, spill (metaphorical) tea, and relieve stress. In my experience with the students, they have all been incredibly helpful, friendly, and inclusive. I have been careful about sharing personal information with them and thus have only come out to one person in my program so far. I hope that I can make closer friendships with the other students over time.

IMG_0054

Grad school classes have been an adjustment in a lot of ways. For one, there is a lot more material covered per class – there have been times when the entirety of a math course I took in undergrad was covered in just two lectures. Moreover, it is impossible to get all the required background simply from attending class; it’s necessary to do a lot of extra reading. One thing that has surprised me is that in my program’s core courses, as well as the neuroscience course I took, it didn’t seem as difficult as in undergrad to get good grades (despite the material being a lot more daunting). I think this is probably because in undergrad, exams had more tricks designed to weed people out, whereas now the focus is on learning, asking questions that may or may not have answers, and being self-motivated enough to seek out extra references – and we aren’t being directly or comparatively evaluated on those things.

Another difference is that there is a lot more emphasis on reading papers and critical thinking, such as proposing potential experiments or critically examining the presentation of data and results in published papers. Some of my core biomathematics courses had homework problems that had no analytic solutions, or that admitted multiple possible approaches, and the professors just wanted to see us come up with ideas, defend our assumptions, and solve as far as analytically (or numerically) possible. This is obviously quite different from undergraduate mathematics or chemistry classes, where there are standard solutions to most classical problems, either in the back of the book or somewhere on the internet! But I suppose it is more reflective of problems in research that have not been solved before.

I have particularly enjoyed the courses that involve choosing papers to review for final presentations; they have allowed me to explore applications of mathematics and computation to neuroscience and have made me more excited about research. When I was in undergrad, although I studied in a theoretical physics group that looked at neuron dynamics, I wasn’t sure if I was doing it only because that was the main opportunity that came my way, rather than out of real passion. I think I was too stressed about the prospect of grad school at the time to really develop my passion for research. However, I have always found myself drawn to related topics for class projects and during our department seminars. Biomathematics is a broad field, and I originally considered exploring the statistical genetics route that is popular in my department, but after starting here, I think my interests truly lie in neuroscience and mathematical physics, and I am now much more certain in choosing my research focus and courses.

My department has many course requirements (4 core biomath courses, 2 biomath electives, 6 applied math courses, and 6 biology courses), so unlike some of the more experimentally focused departments like biology and engineering, we are encouraged to focus on coursework and passing the qualifying exams during the first year. We don’t have official research rotations, and we don’t have to decide on an advisor until the end of the second year. However, all of my classmates have already started working with potential advisors.

Although I unofficially attended research meetings in fall quarter, this winter quarter was my first official quarter of directed research. At the same time, one of my core courses was taught by my potential advisor (or PI – my friends who are not in science keep thinking I mean “private investigator” when I use that term). He was an amazing lecturer; he wasn’t the kind of professor who continuously spews information while students furiously scribble everything down, but rather led us to ideas by asking questions. One thing I really like about working with him, both through the course and during research meetings and updates, is that although his work is clearly mathematically oriented (his background is in particle physics – interestingly, just like my PI in undergrad), unlike a lot of mathematicians and physicists, he takes a very conceptual and biologically relevant approach. Some people in our program prefer more mathematical rigor, but for me, it seemed to be a perfect blend.

My advisor has done a lot of previous work on cardiovascular networks and the scaling of the radius and length of individual vessels across levels of the network. I visited him before applying to the program, and when I told him that I was interested in neuroscience, he said he could imagine applying the same methods of analysis to study neuronal networks. Since joining the group, I have been working on formulating this problem, solving for scaling ratios using Lagrange multipliers (more details about this method in my First Quarter Research Progress post), and analyzing data, both from images and from quantitative 3D reconstructions of neurons. I have reformulated the problem so that instead of minimizing the power lost to dissipation, I am minimizing conduction time. For neurons, one of the major evolutionary driving factors is the speed of signal conduction. For example, if you touch something hot like a stove, it is helpful to have that sensory information relayed as quickly as possible so you can pull your hand away before burning it! I have also been reading papers from the fifties about conduction velocity in neurons and the effects of myelination (fatty layers that insulate nerve fibers) on this speed, and have recently incorporated the degree of myelination as a parameter. I am also looking to modify the space-filling constraint to fit neuronal systems, but I am not quite sure how to do this yet. Taking neuroscience courses concurrently with this project is helpful, because sometimes I get random ideas from class that I might be able to translate into math and incorporate into my model. Sometimes, I watch talks by biology researchers on dendritic morphology and structural neuroscience and feel somewhat overwhelmed, because I am obviously making a lot of simplifying assumptions and not taking into account factors such as genetic influences.
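For the curious, here is a very rough sketch of the general shape of this kind of constrained optimization (a generic toy setup with made-up symbols, not my actual model): minimize total conduction time over the radii $r_k$ and lengths $l_k$ at each network level $k$, subject to a fixed total volume $V$:

```latex
\mathcal{L} \;=\; \underbrace{\sum_k \frac{l_k}{v(r_k)}}_{\text{conduction time}}
\;+\; \lambda \Bigl( \sum_k N_k \pi r_k^2 l_k \;-\; V \Bigr),
\qquad
\frac{\partial \mathcal{L}}{\partial r_k} = 0, \quad
\frac{\partial \mathcal{L}}{\partial l_k} = 0,
```

where $v(r)$ is conduction velocity as a function of fiber radius (empirically roughly $\propto \sqrt{r}$ for unmyelinated axons and $\propto r$ for myelinated ones) and $N_k$ is the number of branches at level $k$. Setting the partial derivatives to zero links the radii of successive levels, which is where the scaling ratios come from.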

Overall, although research is messy and involves a lot of seeking information from various fields, as well as catching up on basic electrodynamics, fluid mechanics, and neuroscience that I never learned in a class, I am enjoying it a lot. This is my first time having my own project, as in undergrad I was for the most part working as a minion, completing menial coding tasks for grad students’ projects. My office mate in my undergrad research group, now a fourth-year grad student in the same group, came to visit me over spring break and told me I seemed a lot more confident than I was last year. Which is strange to me because I feel more overwhelmed and confused the more I learn! I suppose the “confidence” might come from accepting that I don’t know everything, or even a lot, and I’m more comfortable with being uncomfortable, if that makes any sense at all.

As I anticipated, making friends has been quite difficult for me in grad school. It was especially difficult in fall quarter, when I avoided LGBTQ+-specific events, mostly out of fear of the unknown, went to the weekly department pub nights only every now and then, and spent the rest of my time shut up in my room. My department mates are wonderful and lovely, but aside from the fact that I am not hugely into drinking, the conversations centered around heteronormative romantic experiences, and I found myself feeling isolated a lot of the time – especially since I’m not out to most of them. When I talked to my mom about it over winter break, she suggested that I rigidly add queer org meetings to my schedule, with the same priority as classes, just so I could feel more of a sense of community. I decided this was a good idea, as mental health is an important thing to commit to.

In winter quarter, I regularly attended two queer orgs. One of these is called QSTEM, or Queers in STEM. It was founded by a second-year PhD student in Geochemistry who identifies as a gay man. This org is mostly other graduate students, and the vast majority of them are men, which is not entirely unexpected. I have enjoyed participating in social events such as board game nights and ice cream socials. They also have a lot of outreach opportunities, which I hope to get more involved in as my courses wrap up and free up some time.

The second org I attended, called Queer Girl, is open only to women and non-binary people. I was the only one there who wasn’t an undergrad, but it was a nice social space to discuss things like queer representation in media (or the lack thereof, especially when it comes to women) – it gave me the opportunity to talk about Shay Mitchell in Pretty Little Liars and a random Korean webtoon I found called “Fluttering Feelings.” There’s definitely a lot I could learn from these women, as they talk about their sexuality openly, which is something I’ve never been comfortable doing. Being around other women like me helped normalize my experiences a little. One of the coordinators of the group was a fellow Asian woman from San Diego (where I went to undergrad), and it was nice to meet someone I could vent to about missing San Diego and about people always assuming we’re straight (being Asian/South Asian and having long hair is a surefire way to convince everyone you’re straight).

One of the social events in this club was a trip to Cuties Coffee, a queer-owned and -themed coffee shop in East Los Angeles that is designed to be a daytime, sober space for queer socialization and an alternative to the gay bars in West Hollywood. I loved visiting this place so much that I have now made it part of my weekend routine – I go there from around noon to four almost every Saturday to either study for classes or work on coding for research. I have included a picture from that day, with rainbow pride flag emojis covering faces for the privacy of the other org members.

IMG_0079

I can’t stress enough how important it has been for me to have a queer sober space to go to, as I’m pretty far on the introverted side of the spectrum and never quite feel comfortable meeting new people in bars or nightclubs. (I still mostly keep to myself, drink my coffee/tea, and study during my trips to Cuties, but I hope to cross the barrier of talking to strangers soon!) At the beginning of winter quarter, I went to West Hollywood a few times to check out the gay bars and nightclubs. Although I love walking along the main strip in West Hollywood, and enjoyed the experience to some extent, it’s not ideal for me because 1) the bars and clubs largely cater to gay men – Wednesdays are the only nights specifically for women, and there are no clubs specifically for women – and 2) for some reason, being in spaces where I’m (theoretically) approaching random strangers who are making snap judgments about me based solely on my physical appearance spiked some of my body insecurities, and to be honest, that’s not a headspace I want (or need) to be in. Right now, my focus is on meeting new queer friends and building community, and I’m grateful for the multiple sober spaces I have had access to this quarter.

Another extracurricular activity I participated in this winter was a club that does educational outreach by presenting posters on various neuroscience topics to elementary through high school students, to get them excited about learning about the brain. I was part of a committee called Project Glia, which is responsible for designing and creating the posters. I really wanted a way to keep in touch with my art – it can be extremely cathartic and rewarding – and I also wanted to catch up on the neuroscience background I never got in undergrad, so this was the perfect opportunity for me. I designed a poster on “Music and the Brain,” working with two undergrads who did a lot of the neat typography and shading. The director of Project Glia is a senior undergrad who happens to be taking one of my current graduate neuroscience classes with me, The Biology of Learning and Memory.

0-2

One thing I found strange in participating in these activities is that sometimes the undergrads I interact with seem to look up to me, or think that I know things, because I am a grad student. One of the students was talking to me the other day about imposter syndrome and comparing yourself to other people, and I ended up saying something along the lines of, “Oh, I totally understand that feeling because I used to do that too. But honestly, you can drive yourself crazy comparing yourself to other people – I know because I have done it – and I realized it was no longer serving me, and that I don’t have to be this ‘star student’ to still enjoy what I’m doing.” “That’s SO true,” she responded sincerely, while I internally panicked through the entire interaction. It was different from talking with a friend, someone who considers me a peer; I was suddenly aware of the power dynamic and how much responsibility I had. I think because I’m a woman in a grad program in a related field, some of these women with goals of grad or med school see me as a safe person to vent to, someone who knows what it’s like to go through this kind of application process and how demoralizing it can be. I was quite nervous about saying the right thing, and striking the right mix of relatability and encouragement – all without sounding too preachy or pretentious. When I talked about this at pub night later with a sixth-year in my program, someone with significant teaching experience, he reiterated that I have the power to reduce these young women’s imposter syndrome in STEM simply by listening to them and encouraging them. Which is exciting, but also intimidating, because just a year ago, I was that undergrad.

Anyways, that is the (long-winded) gist of the updates from my grad school life over the past quarter. I have some ideas for future, more focused posts, and hope to update more often as these topics come up! Until then, I have an exam in my cell neurobiology course, a data analysis assignment, and a research presentation coming up. Wish me luck!

First Quarter Research Progress and Ideas

To be honest, I spent most of my first quarter of graduate school on classes, seminars, and getting adjusted to the new environment. However, I did start attending research meetings in a group I am interested in, and I have some ideas for a potential project. I am very excited about beginning this project, and I hope that this coming quarter, I will be able to make more progress. Luckily, there is a postdoc in the group who is also excited about it, and he has been very thorough in providing me with papers to read and feedback on my work. Below, I will briefly describe my progress so far.

The group I have been working in studies a wide range of systems, such as predator-prey dynamics, multi-drug interactions, the relationship between sleep and metabolic rate, and cardiovascular networks. Since there are so many diverse projects happening in our group, our group meetings are split by topic. The sub-group I joined focuses on networks – so far, mostly cardiovascular networks. They develop models that describe these networks, such as scaling laws for how the radius and length of vessels change across levels of the network. Then, they test these models against data extracted from 3D images.

IMG_9207

Since my primary interest in biology is in neuroscience, I approached the group to find out if there were any projects in neuroscience. The PI told me that although there are currently no projects in neuroscience in this group, there are mathematical similarities between neuronal networks and cardiovascular networks, and he saw a future in extending the image analysis of cardiovascular networks to neurons.

We can think of a network of neurons, like the cardiovascular system, as a resource distribution network that is subject to biological and physical constraints. Deriving a power-law relationship between the radius and length of successive levels of a vascular network relies on minimizing the power lost due to dissipation while imposing the assumptions that the network is of a fixed size, occupies a fixed volume, and is space filling. This calculation is carried out using the method of Lagrange multipliers, assuming that the flow rate is constant. The power loss due to dissipation in the cardiovascular network is P = \dot{Q_0}^2 Z_{net}, where \dot{Q_0} is the volume flow rate of blood and Z_{net} is the resistance to blood flow in the network. For a neuronal network, we will use an analogous equation, P = I_0^2 R_{net}, where I_0 is the current and R_{net} is the resistance to current flow in the network. We will carry out the Lagrange multiplier calculations in a similar fashion to those for cardiovascular networks.

For cardiovascular networks, we use the Poiseuille formula for resistance, the hydrodynamic resistance to blood flow in the network. According to this formula, the impedance at a level k in the network is given by Z_k = \frac{8 \mu l_k}{\pi r_k^4}. We can reduce \frac{8 \mu}{\pi} to a single constant C, so this is equivalent to Cl_k r_k^{-4}. Thus, the resistance is proportional to the product of powers of the length and the radius. If we want to consider a general formula for the resistance, we can consider a formula with powers p and q of length and radius, respectively. That is, our resistance formula at level k is R_k = \Tilde{C} l_k^p r_k^q.

We define the objective function as follows:

    \[P = I_0^2 R_{net} + \lambda V + \lambda_M M + \sum_{k=0}^{N} \lambda_k n^k l_k^3 \]

This objective function arises from the fact that we want to minimize power loss, the first term, while imposing the three constraints that correspond to the last three terms: size, volume, and space filling. Each constraint corresponds to a Lagrange multiplier. The last constraint comes from the fact that a resource distribution network must feed every cell in the body. Thus, each branch at the end of the network feeds a group of cells called the service volume, v_N, where N is the terminal level; since the number of vessels at that level is N_N, the total volume of living tissue is V_{tot} = N_N v_N. If we assume that this argument holds over all network levels, we have N_N v_N = N_{N-1} v_{N-1} = ... = N_0 v_0. We assume that the service volumes vary in proportion to l_k^3, so the total volume is proportional to N_k l_k^3. Our objective function has N terms related to space filling, since the space-filling constraint must be satisfied at each level k. We assume that the branching ratio is constant, so the number of vessels at level k is n^k. We can define the volume as \sum_{k=0}^N N_k \pi r_k^2 l_k.

Note that we are defining the constraints the same way we did for vascular networks, but it is unclear whether these assumptions are accurate for neuronal networks. However, for the sake of arriving at a preliminary theoretical result for the scaling of neuronal networks, we will keep these constraints.
The total resistance at each level is the resistance for a single vessel divided by the total number of vessels, that is, R_{k, tot} = \frac{\Tilde{C} l_k^p r_k^q}{n^k}. The net resistance of the network is the sum of the resistances at each level, so R_{net} = \sum_{k = 0}^N \frac{\Tilde{C} l_k^p r_k^q}{n^k}. If we define new Lagrange multipliers, \lambda' = \pi \lambda, we can rewrite the objective function as follows:

    \[P = I_0^2 \sum_{k = 0}^N \frac{\Tilde{C} l_k^p r_k^q}{n^k} + \lambda' \sum_{k=0}^N n^k r_k^2 l_k + \lambda'_M M + \sum_{k=0}^{N} \lambda'_k n^k l_k^3 \]

To normalize further, we can divide by the constant I_0^2\Tilde{C}, since the current is constant; absorbing this constant into new definitions of the Lagrange multipliers, we get:

    \[P = \sum_{k = 0}^N \frac{l_k^p r_k^q}{n^k} + \Tilde{\lambda} \sum_{k=0}^N n^k r_k^2 l_k + \Tilde{\lambda}_M M + \sum_{k=0}^{N} \Tilde{\lambda}_k n^k l_k^3 \]

To find the radius scaling ratio, we will differentiate P with respect to r_k, at an arbitrary level k, and set the result to 0. Thus, we can find a formula for a Lagrange multiplier and derive the scaling law.

So we have:

    \[\frac{dP}{dr_k} = \frac{l_k^p qr_k^{q-1}}{n^k} + 2 \Tilde{\lambda} n^k r_k l_k = 0 \]

Solving for the Lagrange multiplier, we have:

    \[\Tilde{\lambda} = -\frac{qr_k^{q-1}l_k^p}{2n^{2k} r_k l_k} = \frac{\frac{-q}{2}}{n^{2k}l_k^{1-p}r_k^{2-q}}\]

Since this is a constant, the denominator must be constant across levels. So

    \[\frac{n^{2(k+1)}l_{k+1}^{1-p}r_{k+1}^{2-q}}{n^{2k}l_{k}^{1-p}r_{k}^{2-q}} = 1\]

It is useful to consider the case where the resistance is related to the length linearly, that is, p = 1. Thus, we obtain the scaling ratio:

    \[\frac{n^{2(k+1)}r_{k+1}^{2-q}} {n^{2k}r_{k}^{2-q}} = 1 \rightarrow \frac{r_{k+1}}{r_k} = n^{\frac{-2}{2-q}}\]

To find the length scaling ratio, we will differentiate P with respect to l_k, at an arbitrary level k, and set the result to 0. Thus, we can find a formula for a Lagrange multiplier, using the formula above, and derive the scaling law.

So we have:

    \[\frac{dP}{dl_k} = \frac{pl_k^{p-1}r_k^{q}}{n^k} + \Tilde{\lambda} n^k r_k^2 + 3\Tilde{\lambda_k} n^k l_k^2 = 0 \]

Solving for the Lagrange multiplier, we have:

    \[\Tilde{\lambda_k} = \frac{-\frac{pl_k^{p-1}r_k^{q}}{n^k} - \Tilde{\lambda} n^k r_k^2}{3n^k l_k^2}\]

Substituting \Tilde{\lambda}, as calculated before:

    \[\Tilde{\lambda_k} = \frac{-\frac{pl_k^{p-1}r_k^{q}}{n^k} + \frac{q r_k^2}{2n^{k}l_k^{1-p}r_k^{2-q}} }{3n^k l_k^2} = \frac{(\frac{q}{2} - p)r_k^q l_k^{p-1}}{3n^{2k} l_k^2} = \frac{q-2p}{6} \frac{1}{n^{2k}l_k^{3-p}r_k^{-q}}\]

Since this is a constant, the denominator must be constant across levels. So

    \[\frac{n^{2(k+1)}l_{k+1}^{3-p}r_{k+1}^{-q}}{n^{2k}l_{k}^{3-p}r_{k}^{-q}} = 1\]

In the case where p=1, we have

    \[ \frac{n^{2(k+1)}l_{k+1}^{2}r_{k+1}^{-q}}{n^{2k}l_{k}^{2}r_{k}^{-q}} = 1\rightarrow (\frac{l_{k+1}}{l_k})^2 = n^{-2} (\frac{r_{k+1}}{r_k})^q\]

Substituting the scaling law for radius, we have:

    \[ (\frac{l_{k+1}}{l_k})^2 = n^{-2} (n^{\frac{-2}{2-q}})^q \rightarrow \frac{l_{k+1}}{l_k} = n^{-1 - \frac{q}{2-q}} \rightarrow \frac{l_{k+1}}{l_k} = n^{\frac{-2}{2-q}} \]

We can test these calculations for our vascular networks calculation, where q = -4. Our scaling laws for radius and length are \frac{r_{k+1}}{r_k} = \frac{l_{k+1}}{l_k} = n^{-1/3}, as expected.
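As a quick sanity check on the algebra, the general scaling ratio can be coded up directly. This is a small illustrative script of my own, not part of any analysis pipeline:

```python
def scaling_ratio(n, q):
    """Predicted ratio r_{k+1}/r_k = l_{k+1}/l_k = n^(-2/(2-q)),
    for a resistance R_k proportional to l_k^p r_k^q with p = 1."""
    return n ** (-2.0 / (2.0 - q))

# Vascular (Poiseuille) case: q = -4 should recover the classic n^(-1/3) law.
n = 2
vascular = scaling_ratio(n, -4)   # equals 2^(-1/3)
```

For a branching ratio n = 2, this gives 2^{-1/3} \approx 0.794, matching the expected vascular result.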

We will now attempt to repeat these calculations using a resistance formula specific to neuronal networks.

We think of the resistance to blood flow as the resistance due to the viscosity of the fluid. For neuronal networks, we can think of axons and dendrites as wires through which current is flowing. The resistance is then the resistance to current flow through the “wire” due to its intrinsic properties. The resistance is given by R_k = \frac{\rho l_k }{A}, where A is the cross-sectional area of the wire, and l_k is the length of the segment at that level. \rho is the intrinsic resistivity of the axon or dendrite, and we assume that \rho is constant, meaning that the material is uniform. If we assume that the axons or dendrites are cylindrical, we can define the cross-sectional area as \pi r_k^2 for level k, so the resistance at level k is given by R_k = \frac{\rho l_k }{\pi r_k^2}.

Assuming that the branching ratio is constant, the number of branches at each level is n^k, and the total resistance at each level is R_{k,tot} = \frac{\rho l_k }{\pi r_k^2 n^k}. The net resistance is the sum across all levels, that is R_{net} = \sum_{k=0}^N\frac{\rho l_k }{\pi r_k^2 n^k}.

Our objective function for this case can be derived in the same manner as in the general case, setting \Tilde{C} = \frac{\rho}{\pi}, p = 1, and q = -2, based on the constants and powers in our specific resistance equation. Thus, we have the objective function

    \[P = \sum_{k = 0}^N \frac{l_k}{r_k^2 n^k} + \Tilde{\lambda} \sum_{k=0}^N n^k r_k^2 l_k + \Tilde{\lambda}_M M + \sum_{k=0}^{N} \Tilde{\lambda}_k n^k l_k^3 \]

To find the radius scaling ratio, we will differentiate P with respect to r_k, at an arbitrary level k, and set the result to 0. Thus, we can find a formula for a Lagrange multiplier and derive the scaling law.

So we have:

    \[\frac{dP}{dr_k} = \frac{-2l_k}{n^k r_k^3} + 2 \Tilde{\lambda} n^k r_k l_k = 0 \]

Solving for the Lagrange multiplier, we have:

    \[\Tilde{\lambda} = \frac{1}{n^{2k}r_k^{4}}\]

Since this is a constant, the denominator must be constant across levels. So

    \[\frac{n^{2(k+1)}r_{k+1}^{4}}{n^{2k}r_{k}^{4}} = 1\]

Thus, we can solve for the scaling ratio:

    \[ \frac{r_{k+1}}{r_k} = (n^{-2})^{1/4} = n^{-1/2}\]

To find the length scaling ratio, we will differentiate P with respect to l_k, at an arbitrary level k, and set the result to 0. Thus, we can find a formula for a Lagrange multiplier, using the formula above, and derive the scaling law.

So we have:

    \[\frac{dP}{dl_k} = \frac{1}{n^k r_k^2} + \Tilde{\lambda} n^k r_k^2 + 3\Tilde{\lambda_k} n^k l_k^2 = 0 \]

Solving for the Lagrange multiplier, we have:

    \[\Tilde{\lambda_k} = \frac{-\frac{1}{n^k r_k^2} - \Tilde{\lambda} n^k r_k^2}{3n^k l_k^2}\]

Substituting \Tilde{\lambda}, as calculated before:

    \[\Tilde{\lambda_k} = \frac{-\frac{1}{n^k r_k^2} - \frac{1}{n^{k}r_k^{2}} }{3n^k l_k^2} = - \frac{2}{3n^{2k}l_k^2 r_k^2}\]

Since this is a constant, the denominator must be constant across levels. So

    \[\frac{n^{2(k+1)}l_{k+1}^{2}r_{k+1}^{2}}{n^{2k}l_{k}^{2}r_{k}^{2}} = 1\]

Thus, substituting in the scaling ratio for radius, we can solve for the scaling ratio for length:

    \[(\frac{l_{k+1}}{l_k})^2 = n^{-2} (\frac{r_{k+1}}{r_k})^{-2} = n^{-2} (n^{-1/2})^{-2} = n^{-1} \rightarrow \frac{l_{k+1}}{l_k} = n^{-1/2}\]

Note that these scaling laws are consistent with the theoretical predictions of our general formulas for q = -2.
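To see what the n^{-1/2} scalings imply for the net resistance, here is a small illustrative script (all parameter values are arbitrary) that sums R_{net} level by level and checks it against the geometric series it reduces to, R_{net} = \frac{\rho l_0}{\pi r_0^2} \sum_k n^{-k/2}:

```python
import math

rho, r0, l0 = 1.0, 1.0, 1.0   # resistivity and level-0 geometry (arbitrary units)
n, N = 2, 10                  # branching ratio and number of levels

def net_resistance(rho, r0, l0, n, N):
    """R_net = sum_k rho*l_k / (pi * r_k^2 * n^k), with the derived
    scalings r_k = r0 * n^(-k/2) and l_k = l0 * n^(-k/2)."""
    total = 0.0
    for k in range(N + 1):
        r_k = r0 * n ** (-k / 2)
        l_k = l0 * n ** (-k / 2)
        total += rho * l_k / (math.pi * r_k**2 * n**k)
    return total

# Under these scalings, level k contributes (rho*l0/(pi*r0^2)) * n^(-k/2),
# so the sum is a geometric series with ratio n^(-1/2).
base = rho * l0 / (math.pi * r0**2)
x = n ** -0.5
closed_form = base * (1 - x ** (N + 1)) / (1 - x)
```

The level-by-level sum and the closed-form series agree, which is a useful consistency check before trying the same bookkeeping on measured radii and lengths.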

Some of the assumptions we have made for the purpose of these calculations are as follows:

  • The current flow is constant across all levels of the network
  • The axons and dendrites are cylindrical
  • The material of the axons and dendrites is uniform and can be linked to a constant of specific resistivity
  • The network has a fixed size
  • The network is contained within a fixed volume
  • The network is space filling
  • The branching ratio is constant

Particularly in the case of the volume and space-filling constraints and the constant branching ratio, it is unclear whether a neuronal network has the same properties that we assume hold for vascular networks. In addition, it is unclear whether it is reasonable to assume that the current flow is constant. Thus, it might be worth reexamining these constraints and assumptions to add more biologically realistic and relevant ones.

Moreover, instead of focusing on this optimization problem of minimizing power loss, it might be more fruitful to examine a different optimization problem, such as minimizing the time for a signal to travel from one end to another end of the network.

These scaling laws give us some preliminary ideas to work with. We can try using image analysis techniques to measure the lengths and radii of segments of axons and dendrites across levels in images, and see whether the information extracted from the data supports our theoretical conclusions.

 

References

Savage, Van M., Deeds, Eric J., Fontana, Walter. (2008). Sizing up Allometric Scaling Theory. PLOS Computational Biology.

Johnston, Daniel, Wu, Samuel Miao-Sin. (2001). Foundations of Cellular Neurophysiology. MIT Press.

Network Dynamics, Biophysics, and Mental Illness

This past fall was my first quarter of graduate school, and one of our core courses was Deterministic Models in Biology. For our final project, we each chose a quantitative biology paper on a topic of interest and presented it to the class. The paper I chose was a review paper, Psychiatric Illnesses as Disorders of Network Dynamics by Daniel Durstewitz, Quentin J.M. Huys, and Georgia Koppe. My undergraduate research focused on the dynamics of neurons at the molecular level, and this paper helped me connect it to specific characteristics of mental illnesses.

This paper proposes that since observable cognitive and emotional states rely on the underlying dynamics of neuronal networks, we should use Dynamical Systems Theory (DST) to characterize, diagnose, and develop therapeutic strategies for mental illness.

The central idea of DST is that there is a set of differential equations that evolve in time. A set of dynamical equations could look as follows:

    \[\frac{dx_1}{dt} = \dot{x_1} = f_1(x_1, ... , x_M, t; \boldsymbol{\theta} )\]

    \[\frac{dx_2}{dt} = \dot{x_2} = f_2(x_1, ... , x_M, t; \boldsymbol{\theta})\]

    \[\vdots \]

    \[\frac{dx_M}{dt} = \dot{x_M} = f_M(x_1, ... , x_M, t; \boldsymbol{\theta})\]

The variables x_1, x_2, ... x_M are the dynamical variables, such as voltage or neural firing rate. These equations describe how each of these variables changes over time. \boldsymbol{\theta} represents the parameters: fixed values that are properties of the system and do not change over time.

We define a fixed point as the point at which the derivatives of all of the variables are equal to 0. Fixed points are stable if activity converges towards them, and unstable if activity diverges from them. Stable fixed points are called attractors. We can define the basin of attraction as the set of points from which activity converges towards the attractor.
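As a toy illustration of these definitions (my own example, not from the paper), consider the one-variable system \dot{x} = x - x^3, which has stable fixed points (attractors) at x = \pm 1 and an unstable fixed point at x = 0; the basins of attraction are the positive and negative half-lines:

```python
def simulate(x0, f, dt=0.01, steps=2000):
    """Forward-Euler integration of dx/dt = f(x) from initial condition x0."""
    x = x0
    for _ in range(steps):
        x += dt * f(x)
    return x

f = lambda x: x - x**3   # fixed points at -1, 0, +1

# Trajectories starting on either side of the unstable fixed point at 0
# converge to the attractor of the basin they start in.
right = simulate(0.2, f)    # converges to +1
left = simulate(-0.2, f)    # converges to -1
```

Even this one-dimensional caricature shows the key idea: which state the system settles into depends only on which basin the initial condition lies in.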

The figure below shows an example of a phase plane, a representation of the space spanned by the two variables of a system. Note that it is possible to use dimensionality reduction methods to obtain visual representations for higher-dimensional systems. The arrows show the activity of the system. The blue and orange curves represent nullclines; along each of these curves, the derivative of one of the variables is 0. The green line represents the barrier between the two basins of attraction. It is possible to cross over this barrier as a result of either external influences or random fluctuations.

IMG_9208

I will discuss some basic neuroscience before going into the dynamics of mental illnesses. There are many ionic currents that pass through a neuron’s membrane, such as sodium, potassium, and calcium currents. The dynamics of these ions are driven by electrochemical gradients. Spiking activity occurs when a rapid influx of sodium ions produces the spike, followed by an efflux of potassium ions that returns the membrane potential toward its resting value.

We can think of a neuron membrane as a capacitor, where positive and negative charges are accumulated on either side. The current is the rate of charge flowing per time, I = \frac{dq}{dt}, and the charge of a capacitor is defined as q = CV. The current through the membrane is thus I_m = C_m \frac{dV_m}{dt}. We can think of this system as the circuit shown below:

IMG_9210

Because of charge conservation, the sum of the currents across the capacitor and each of the resistors must be 0. In mathematical terms, this is C_m \frac{dV_m}{dt} = -\sum_i I_i.

If we approximate each of these currents as ohmic, they satisfy Ohm’s law, V = IR, meaning that each current is proportional to the difference between the membrane voltage and that ion’s reversal potential by a factor of 1/R, or in other words, the conductance.
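A minimal sketch of this RC picture, with illustrative parameter values: for a single ohmic leak current, C_m \frac{dV}{dt} = g_L(E_L - V) + I_{inj}, and the voltage relaxes exponentially to the steady state V_\infty = E_L + I_{inj}/g_L:

```python
# Passive membrane with one ohmic leak conductance; values are illustrative.
C_m = 1.0      # membrane capacitance (uF/cm^2)
g_L = 0.1      # leak conductance (mS/cm^2)
E_L = -65.0    # leak reversal potential (mV)
I_inj = 1.0    # injected current (uA/cm^2)

def simulate_passive(V0, dt=0.01, steps=50000):
    """Forward-Euler integration of C_m dV/dt = g_L*(E_L - V) + I_inj."""
    V = V0
    for _ in range(steps):
        V += dt * (g_L * (E_L - V) + I_inj) / C_m
    return V

V_inf = E_L + I_inj / g_L       # analytic steady state: -55 mV here
V_final = simulate_passive(-65.0)
```

With these numbers the time constant is \tau = C_m / g_L = 10 ms, and the simulated voltage settles to the analytic steady state.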

If the conductances were constant over time, these equations would be linear. However, each conductance depends on the proportion of ion channels that are open, described by quantities called gating variables. For example, a sodium current can be described as

I_{Na} = g_{max}m^3h(V_m - E_{Na})

In this system, m and h are the gating variables, and they vary from 0 to 1, and g_{max} is the maximal conductance.

We can think of the dynamical equations for the gating variables as the result of mass-action kinetics. Consider the reaction

Closed \rightleftharpoons Open

Suppose \alpha is the rate of opening of a channel, the forward reaction above, and \beta is the rate of closing, the reverse reaction above; both of these rates depend on the voltage. If m represents the proportion of channels that are open, its derivative over time is equal to the forward rate times the concentration of reactants minus the reverse rate times the concentration of products. In other words:

\frac{dm}{dt} = \alpha(V_m)(1-m) - \beta (V_m)m

Another form of this dynamical equation commonly seen in the literature is:

\frac{dm}{dt} = \frac{m_{\infty}(V_m) - m}{\tau_{Na}(V_m)}

\tau_{Na} is the voltage-dependent time constant, and m_{\infty} is the steady-state proportion of open channels as a function of voltage.
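The two forms are equivalent at a fixed (clamped) voltage, with m_{\infty} = \frac{\alpha}{\alpha + \beta} and \tau = \frac{1}{\alpha + \beta}. A short sketch with arbitrary illustrative rates (not fitted to any real channel) makes this concrete:

```python
# Voltage-clamped gating kinetics: dm/dt = alpha*(1-m) - beta*m.
# alpha and beta are arbitrary illustrative opening/closing rates (1/ms).
alpha, beta = 0.5, 0.125

m_inf = alpha / (alpha + beta)   # steady-state open fraction: 0.8 here
tau = 1.0 / (alpha + beta)       # relaxation time constant: 1.6 ms here

def simulate_gate(m0, dt=0.001, steps=100000):
    """Forward-Euler integration of dm/dt = alpha*(1-m) - beta*m."""
    m = m0
    for _ in range(steps):
        m += dt * (alpha * (1 - m) - beta * m)
    return m

m_final = simulate_gate(0.0)     # relaxes to m_inf = 0.8
```

Integrating the mass-action form reproduces the steady state m_{\infty} predicted by the second form, which is why both appear interchangeably in the literature.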

The dynamical equation for voltage for the simple NaKL model is as follows:

    \[\frac{dV}{dt} = \frac{1}{C}\left[g_{Na}m^3h(E_{Na}-V) + g_K n^4 (E_K -V) + g_L (E_L - V) + I_{inj}\right]\]

Neuronal networks are the result of multiple neurons connected to one another through synapses. Pre-synaptic neurons deliver chemicals, called neurotransmitters, to post-synaptic neurons. Some synaptic receptors are excitatory, such as NMDA (N-Methyl-D-aspartic acid) receptors, meaning they increase the likelihood of spiking activity, and others are inhibitory, such as GABA (gamma-aminobutyric acid) receptors, meaning that they decrease the likelihood of spiking activity. To describe the dynamics of neuronal networks, each individual neuron has a voltage equation as illustrated above, with additional terms for its synaptic currents. These currents depend on the synaptic conductance, the difference between the membrane voltage and the synaptic reversal potential, the strengths of the synaptic connections, and the fraction of open channels for each receptor. The dynamical equation for the fraction of open channels usually depends on properties of the presynaptic neuron.

So far, the variables we have been considering have been the voltage and the gating variables. In order to discuss the dynamics of mental illness, we must think about another important variable: firing rate. This simply describes the rate of voltage spikes over time. Below is an example of a phase plane, where the vertical axis is the average firing rate of inhibitory neurons, and the horizontal axis is the average firing rate of excitatory neurons.

IMG_9209

In this system, the fixed points can be thought of as memories or goal-states, and we can use this system to consider the effects of the underlying dynamics on working memory or decision making. Increasing the depth of the basin of attraction can have the effect of increasing the stability of the state, while flattening the basin of attraction reduces the stability of the state.
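One simple way to make this quantitative (my own toy illustration, not from the paper): in the one-dimensional system \dot{x} = ax - x^3, increasing the parameter a deepens the basins around the attractors at \pm\sqrt{a}, and the local relaxation rate |f'(\pm\sqrt{a})| = 2a grows, so perturbations decay faster:

```python
import math

def attractors(a):
    """Stable fixed points of dx/dt = a*x - x**3 for a > 0."""
    return (-math.sqrt(a), math.sqrt(a))

def relaxation_rate(a):
    """|f'(x*)| at the attractors: f'(x) = a - 3x^2, so |a - 3a| = 2a."""
    x_star = math.sqrt(a)
    return abs(a - 3 * x_star**2)

# Increasing a plays the role of deepening the basin: the local decay
# rate of perturbations doubles when a doubles, so the state is more
# robust to noise or external input.
shallow = relaxation_rate(0.5)   # 2a = 1.0
deep = relaxation_rate(2.0)      # 2a = 4.0
```

Here a is a stand-in for whatever effective parameter (for instance, synaptic strength) controls basin depth; a deeper basin means a larger relaxation rate and hence a more stable state.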

This paper highlights the key role of dopamine in altering these attractor dynamics. Stimulating D1 dopamine receptors has the effect of increasing both NMDA-mediated (excitatory) and GABA-mediated (inhibitory) currents. This alters the parameters of the system, in particular the strengths of synaptic connections, over time. As a result, the basins of attraction are deepened, and the state is more stable and robust to external perturbations or noise fluctuations.

Stimulation of the D2 dopamine receptors has the opposite effect, flattening the basins of attraction. These flat attractor landscapes could lead to disorganized or spontaneous thoughts that can be experienced as the hallucinations characteristic of schizophrenia. This can also explain the high distractibility in attention-deficit/hyperactivity disorder (ADHD). On the other hand, Obsessive-Compulsive Disorder (OCD), a disorder characterized by rumination and invasive, recurrent obsessions and compulsions, can be linked to deep basins of attraction that are robust to potential distractors. Major Depressive Disorder is characterized by the coexistence of rumination and a negative mood with lack of concentration and distractibility, and one can think of it as an imbalance between multiple attractor states.

The main point this review paper aims to illustrate is that in order to characterize and develop treatments for mental illnesses, one must consider the underlying network dynamics. The proposed role of dopamine in altering the depth of basins of attraction suggests that we might try to target the dynamics of schizophrenia patients, for example, through dopaminergic drugs.

I found the process of reading this review paper and the sources it cited extremely helpful in improving my understanding of neurons, neuronal networks, biophysics, and nonlinear dynamics, and in linking my previous understanding of neurons to cognitive processes, something I had not fully grasped before. Because the review paper covers the material at a general level, I read many of the papers it cited to find the basis for some of its claims. However, I still do not clearly understand the mechanism behind the changes in the attractor dynamics. I would like to learn more about how the parameters are changed, and how these changes, in turn, alter the attractor landscapes.

At this point, I believe that the connection between these dynamics and mental illnesses as presented in this paper seems rather speculative. However, I think that as more data is collected and analyzed, and further models are developed to understand the dynamics of neuronal networks, we can glean more insight towards understanding and developing treatments for mental illnesses.

References:

Durstewitz, D., Huys, Quentin J. M., Koppe, Georgia. (2018). Psychiatric Illnesses as Disorders of Network Dynamics. arXiv preprint: https://arxiv.org/pdf/1809.06303.pdf

Durstewitz, D. (2009). Implications of synaptic biophysics for recurrent network dynamics and active memory. Neural Networks, 22(8), 1189-1200.

Durstewitz, D., Seamans, J. K. (2008). The dual-state theory of prefrontal cortex dopamine function with relevance to catechol-o-methyltransferase genotypes and schizophrenia. Biological Psychiatry, 64(9), 739-749.

Durstewitz, D. (2006). A few important points about dopamine’s role in neural network dynamics. Pharmacopsychiatry, 39(S 1), 72-75.

Izhikevich, E. M. (2007). Dynamical Systems in Neuroscience: MIT Press.

Johnston, Daniel, Wu, Samuel Miao-Sin. (2001). Foundations of Cellular Neurophysiology. MIT Press.

Rolls, E. T., Loh, M., Deco, G. (2008). An attractor hypothesis of obsessive-compulsive disorder. European Journal of Neuroscience, 28(4), 782-793. doi: 10.1111/j.1460-9568.2008.06379.x

Strogatz, S. H. (2018). Nonlinear dynamics and chaos: with applications to physics, biology, chemistry, and engineering: CRC Press.