Tuesday, March 23, 2010

Why I’m at Ypsi every week – and how I spend my time

This is a question I’ve been asked by several people: my advisor, my fiancee (both driven by a shared desire for me to finish up and get a job already), my parents and siblings, even my students on occasion.

I feel like everyone who takes a TF position does it for different reasons – I did it because:

  • I felt like I had a lot of teachers growing up who were kind of crap, and thought I could do a better job
  • I felt ambivalent about engineering going into college, basically only doing it because I wanted a job when I graduated and because my sister was an engineer.  Only later did I discover that some of the stuff engineers do is pretty cool, and began to think that it’s a shame that most high school students don’t understand that.
  • I was curious what the dynamic looks like in a failing public school, because I’m a wannabe public policy wonk and because it’s such a waste to have so much of our population educated so poorly

As a result, I spend 6 hours each week: 2 in prep and 4 actually in the high school, hands-on.  Now, I do graduate research in plasma physics, and I could show a lot of really cool examples of stuff you can do as an engineer in class, but unfortunately I don’t get to do that stuff all the time.  In addition to the points above about why I’m at Ypsi, I also want to address what I do at Ypsi, i.e., to quantify exactly how much time I spend impacting the class, in what manner, and what that means in this context.

On average, of my 4 hours in class each week, perhaps 50% of my time is spent impacting the classroom in some way, i.e., helping a student individually with a problem or talking to the class as a whole about something.  The other 50% is listening to the daily lecture, since ultimately I am supplementing a high school math class and they need to cover some sort of daily lesson from the textbook.

Of the 2 hours each week that I spend interacting with all or part of the class, most of the impact is in teaching individual students or small groups of them how to think about different problems.  Notice that I don’t really like to think of it as teaching how to “do” problems, because that makes it feel like I am a server and the student is just a terminal downloading an algorithm from me.  My teaching style is usually to ask probing questions that force students to clarify their own thinking and understand a problem.  This is easily the most rewarding part of what I do as a TF – I really enjoy showing someone else how to think about math, how it makes sense and fits together, developing their intuition rather than just their rote memorization skills.  I spend 90% of those 2 hours doing this sort of thing.

The remaining ~10 minutes each week is spent doing something I really don’t think anyone who wasn’t a STEM grad student (or even more technically proficient) could do: giving presentations on neat technical topics, like computer graphics or rocket science, or coming up with really neat applications to illustrate a particular concept.  For example, today I talked with one of my calculus students about how fast a personal jetpack, if one existed, might be likely to travel.  So, in a given week, I spend

  • 2 hours listening to the daily lectures
  • 1 hr 50 min teaching problem-solving
  • 10 min doing unique stuff

Note that this is all in averages, so really that 10 min / week is more like a 30-min presentation once a month.  Incidentally, that occasional presentation is actually a fair bit of work, and is where much of the additional 2 hours of prep goes, again on average and in aggregate.  Other outside time expenditures are things like giving college guidance, writing the blog before you, etc.
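Since the jetpack chat came up: the kind of back-of-envelope estimate I mean looks like this, balancing an assumed horizontal thrust against aerodynamic drag.  Every number below is an assumption for illustration, not a spec for any real device:

```python
import math

# Back-of-envelope top speed for a hypothetical jetpack: it accelerates
# until its horizontal thrust balances aerodynamic drag,
#   F = (1/2) * rho * Cd * A * v**2   =>   v = sqrt(2*F / (rho * Cd * A))
F = 300.0   # horizontal thrust component, N (assumed)
rho = 1.2   # sea-level air density, kg/m^3
Cd = 1.0    # drag coefficient of an upright human (assumed)
A = 0.8     # frontal area, m^2 (assumed)

v = math.sqrt(2 * F / (rho * Cd * A))  # 25 m/s, roughly 90 km/h
```

Change any of the assumed inputs and the answer moves, but the point of the exercise is the reasoning chain, not the final number.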

I’ve made these two comparisons, the 50-50 ratio of impact/non-impact and the 90-10 ratio of tutoring / neat engineering, because there are two points of view where these comparisons get really important.

From my point of view, the 50-50 breakdown is frustrating because I can see students who aren’t getting what’s being “uploaded” to them during the lecture, but I’m powerless to really interact with them in the upload-download environment during the lecture except through whispers.  Even this is sometimes distracting to the rest of the class, especially with the many students who are apparently physically unable to whisper.  The effect snowballs, where students who don’t get it don’t pay attention, and then they continue to fall behind, and you reach a point where all but 6 of your students have F’s and you can’t help them catch up in the 50% of time you have left.

From the point of view of someone paying me to be there, the 90-10 balance is more troubling.  As far as tutoring goes, I am a gold-plated tutor – I’m a PhD physicist and engineer tutoring algebra I.  That’s like paying a Formula 1 racing mechanic to tune up a Honda sedan.  If 90% of my interaction with students could just as well be accomplished by an undergrad for less money, why send me?  Rather than one grad student in the class 4 hours a week, with 2 hours prep, you could pay a sophomore 1/3 the money as work-study or a scholarship, ditch the prep for the engineering presentations and get 18 hours of total face time per week.
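The arithmetic behind that 18-hour figure, spelled out (my reading of it: the same stipend funds three undergrads, each putting in all six weekly hours as face time):

```python
grad_cost = 4000   # one TF stipend per year, $ (assumed round number)
grad_face = 4      # weekly classroom hours (plus 2 hours of prep, 6 paid total)

n_undergrads = 3       # at "1/3 the money" each, one stipend funds three
undergrad_face = 6     # no presentation prep: every paid hour is face time

total_face = n_undergrads * undergrad_face  # 18 hours/week vs. the grad's 4
```

That is a 4.5x increase in face time for the same money, which is exactly why the 90-10 split deserves scrutiny.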

On that note, John Scalzi is a sci-fi author whose work I enjoy.  Recently he responded to a question on his blog about why he doesn’t publish his own books but instead goes through a publishing house (Macmillan):

“What I genuinely have a hard time understanding is why people don’t seem to grasp that becoming my own publisher is an inefficient use of my time. It’s like telling a surgeon how much better his life will be if he’d just lathe his own surgical tools and cook the meals for the patients in the recovery room.”

To paraphrase Scalzi, I suspect that the TF balance right now is not an efficient use of either my time or UM’s money.  An endeavor with peak efficiency <50% probably needs some more design work.

In another recent post, I mentioned how we need to draw lessons learned from the TF program to improve it in the future.  That means not only the Debbie-downer stuff above, what the TF program isn’t doing well, but also looking at what it has done really well.  On that side of the balance, there are serious pluses to having the gold-plated tutor, because I am also a highly trained spy, making (semi-)regular reports back on my findings behind enemy lines in the strange land of high school.

This blog and this feedback are something that I can provide that I doubt an undergrad could.  I’ve had real jobs doing engineering, I’ve done research and actually used some of that stuff you see in calculus books for real problems, and I have gone through enough education to have an idea of what good teaching and good students look like.  As the spy sent into enemy territory to figure out what the heck is going on in there, I’m in a good spot to think about how you fix it, or at least how to make as big a difference as possible with limited resources.  While it’s easy to say with 20/20 hindsight that the current TF program is not as efficient as it could be, that’s true of most things on version 1.0.  It’s how we adapt to that information that is really important – that’s what this redesign process is all about.  Sometimes (a lot of times), inefficient data collection followed by careful sifting is the only way to get from 1.0 to 2.0.

Return on Investment

The TF program for the last two years has funded about 10 TFs at $4k / year, so around $40,000 a year.  Of necessity, that’s $40,000 that wasn’t spent doing something else, so it’s reasonable to ask whether we are getting maximum bang for those bucks.  That’s what a few recent posts, like the “Here’s $40k, design a K-12 intervention” series are about.

In this post, though, I want to ask a slightly different question: what constitutes a good bang for the buck?  I know every TF takes the position for different reasons (and in another post I’ll talk about my motivations), but I want to focus on the other side of the equation: is this a good use of UM’s money?  In trying to answer that question, I came to a rather startling realization: I’m not certain what UM’s goals are with this program.  So I’ll make a reasonable estimate of what I think they are:

  1. Increase the number of Ypsi students enrolled at UM, esp. in CoE.
    • Currently this number is ~5 across all years.  For contrast, my high school in Plymouth sent around 50 students to UM my year, maybe 20-30 to CoE that year alone.
  2. Be able to point to the TF outreach program in good faith when writing the ‘broader impacts’ section of NSF grant proposals
  3. Make a connection to a K-12 district, and figure out what the best way is for universities to integrate into the K-12 community

The first two are the clearly numeric parts, where you can probably draw out some dollars and cents analysis.  The third may feel like vague bureaucratese, but there is a pretty big disconnect between the public K-12 school districts and the state universities.  I remember being shocked at how hard you had to work in undergrad to succeed, and I came from a pretty good high school.  I sure didn’t know what I was going to major in, and if not for older siblings who’d already been to college I might not have known which classes to take in high school to prepare myself.

So, are these good goals?  Let’s start with increasing YHS enrollment in CoE.  If UM spends $40k a year on TFs, and as an optimistic result let’s say they get an average of 4 YHS students a year into CoE, that’s $10k per student, not counting any scholarships in the mix. 

On NSF funding, numbers pulled from NSF’s website show $373 million in currently active awards to University of Michigan PIs (nicely done, Ken Wise, with $34M to date!).  If, in bulk, this $40,000 effort translates into even a 1% increase in that overall funding level due to increased acceptance rates for CoE proposals, the $10k number above pales in comparison, and we would call this a very good bang for the buck.
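The two estimates above, as quick arithmetic:

```python
budget = 40_000                       # annual TF program cost, $
students = 4                          # optimistic YHS-to-CoE yield per year
cost_per_student = budget / students  # $10,000, scholarships not counted

nsf_active = 373_000_000              # active NSF awards to UM PIs, $
bump = 0.01 * nsf_active              # a hypothetical 1% increase: $3.73M
leverage = bump / budget              # ~93x the program's cost
```

Even if that 1% figure is off by an order of magnitude, the leverage argument survives, which is the point.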

Of course, NSF likes to see quantifiable metrics of success in the broader impact area, and they are sadly unlikely to consider their own previous funding as evidence of success to increase funding further.  Alas, circular funding logic fails – you need concrete results to sustain the momentum.  This is where I should note that I think goal #1 springs from goal #2 as sort of a proxy metric, lacking anything better, but it’s kind of cheating when you consider the scholarships I mentioned (and then promptly neglected) before.  See, the last few Ypsi High students at UM have basically been on full rides, which literally means we’re paying them to come here.  So does that mean we’ve made an impact and broadened the pool of college-bound Ypsi students, or just convinced proportionally more of the same size pool to come here?  No good answer here, I don’t have enough data. 

So, goal 3, the K-12 connection.  UM has indeed built a connection to YHS.  What passes over that bridge?  YPSD is currently in their last year of NCLB probation, and at the end of this year, barring a serious course change, will enter the total reboot phase where they break the teachers’ union, fire just about everybody, rehire some, and start fresh.  My father is a teacher and president of his AFT union local, so understand that I am more than a bit perplexed by the heavy-handedness of that action.  And yet, having spent two years up close and personal at Ypsi High, I am also deeply skeptical that the status quo there will produce meaningful results without this sort of massive boot-meets-rear action.

Anyway, the point is that if these are the metrics of success, probably goal 1 is coming along pretty well, since  I know of at least 3 students who are entering UM this year from Ypsi High.  Goal 2 is going great if my totally out-of-nowhere numbers are correct to within a few orders of magnitude, but is not sustainable without much clearer observed performance improvements, since NSF likes return on their investment (hence the title of the post).  Goal 3 is an area where I think we’ve made good progress, not necessarily in finding the solution but in identifying the relative effectiveness of a couple of ideas like the TF program, which will be undergoing substantial refinement between now and next year.

In closing, we’ve kind of skirted the issue of whether this is the “best bang for the buck.”  I think that’s fair, because a) I lack good numbers, and b) this is an open question without a good answer yet anywhere, despite lots of really smart people working on it across the country, so give me a break already.  But in terms of using our money wisely and in line with the goals above, esp. #2, I think the course of action is

  1. Draw out lessons learned from the TF program and present a plan to improve it next year
  2. Clarify what the metrics of success are that are driving that refinement and how we’re going to measure them
  3. Do some legwork and make sure the brainstormed metrics we come up with are good ones according to the research that’s out there right now

Now, apparently somebody somewhere agrees with me, because #1 and #2 are what tomorrow’s TF meeting is directed toward.  Of course, #3 is probably above my pay grade, falling squarely in the (OE)^2 office and the CoE leadership.  Stay tuned. 

Gates Foundation Report

This post draws heavily from a report issued by the Gates Foundation last month.  That report is a distillation of proposals for 10 different school districts (sites) across the nation, developed with some very high-flying names in the consulting industry – McKinsey and BCG among them – whom the Gates Foundation hired to analyze the districts and suggest methods for improvement.

Not surprisingly, a heavy component of what these firms do, and what they suggested the school districts do, is to crunch numbers.  Schools are ready-made for number crunching, because they generate so much easily quantifiable data – grades, attendance records, standardized test scores, it’s a quant’s dream – but they don’t do anything with it most of the time.  Some excerpts from the report:

“Our site had never used data for anything other than compliance purposes prior to this teacher effectiveness planning process. Suffice to say, the way in which we’ve used data to understand our site over the last few months has fundamentally changed how we operate as an organization.”

“Data without analysis are nothing more than a collection of numbers.”

  • “One site cited its limited strategic use of data as both an advantage and disadvantage. On the one hand, the site has a clean slate to build a new mindset around data and their use. On the other, the challenge to transform the site’s historically compliance-driven data culture requires a significant departure from past practices and remains the focus of an intense change process.”
  • “Another site noted the advantage of having a ‘culture of data orientation that pervades all levels of our site and our schools.’ While it took a long time to achieve this culture, the site then had a huge head start in pursuing more sophisticated uses of data to improve teacher effectiveness.”
  • “One site is investing in systems improvements to access new types of human resources data that will enhance its pre-existing value-added measures.”
  • “One site’s teachers not only receive regular student data reports, but they also are trained in how to use such data to make adjustments to instructional strategies.”
  • “One site credited the formal and informal use of data in conversations with principals and school leaders for its success in defining and evaluating accountability targets linked to school performance bonuses.”

Data is a terrible thing to waste, and that’s a big chunk of what the report is all about.  In particular, I like this quote:

“Clearly, the ability to link student and teacher data is a necessary prerequisite—if not the linchpin—to define and measure teacher effectiveness.”

That’s the ultimate goal of the data analysis – identify which inputs produce measurable changes in your outputs (and in the right direction!), then figure out which ones do it for the least money, then do as many as you can with your limited budget.  Go figure, eh?
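That loop can be sketched in a few lines: rank candidate interventions by measured effect per dollar, then fund greedily until the budget runs out.  Every program name, cost, and effect size below is invented purely for illustration:

```python
# Toy cost-effectiveness ranking: (name, annual cost $, measured effect size).
# All numbers are made up to show the mechanics, not real data.
interventions = [
    ("small-group tutoring", 12_000, 0.30),
    ("data dashboards",       5_000, 0.10),
    ("teacher coaching",     20_000, 0.25),
    ("guest lectures",        3_000, 0.02),
]

budget = 25_000
# Sort by effect per dollar, best value first.
ranked = sorted(interventions, key=lambda t: t[2] / t[1], reverse=True)

funded = []
for name, cost, effect in ranked:
    if cost <= budget:          # fund the best value-per-dollar options first
        funded.append(name)
        budget -= cost
# Tutoring and dashboards fit; coaching is priced out; lectures mop up the rest.
```

Of course, the hard part in a real district is the effect-size column, which is exactly what the report says nobody measures.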

A note on x’s, y’s and pedagogy

Sometimes, variable names just make fall-over good sense.  Like using v for velocity, or d for distance, h for height, t for time, these are all fantastic.  Other times, variable names just get grandfathered in for no good reason.  Like, say, s for displacement (which is what, again, students ask?  oh, how far it went, ok) or m for slope, which we all use because that’s how we learned it and byGodifitwasgoodenoughformeit’sgoodenoughforyouwhippersnappers bah!  Get off my lawn!

But top of the list on bad variable names: x and y.  Oh yes, my venerable variables, for teaching you are atrocious.  Know why?  Because nothing starts with x or y! X-ray machines?  Xylophones?  Xtra clean socks?  Yellow submarines?  Yearly physicals?  Youtube videos?  None of these are any good at all for trying to explain to someone why 3x+2y is neither 5x, 5y, nor simply 5, or why 3-x is not 2, or 2x, or any combination in between. 

I know x and y have firmly entrenched themselves in the math psyche, and that’s likely to change about the same time you see a snowball fight in hell.  But for the love, couldn’t the first introduction to letters as variables be something simple, like a and b?  How much easier would it be to talk about apples and bananas than xylophones and yams, xeroxes and yachts, xenophobes and yurts,  xenon atoms and yeomen?
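To see why the like-terms confusion matters, here’s a toy sketch in Python: track each term as a (coefficient, variable) pair and merge only terms whose variables match.  The representation is mine, just for illustration:

```python
from collections import Counter

def combine(terms):
    """Combine like terms. Each term is a (coefficient, variable) pair,
    so 3x + 2y is [(3, 'x'), (2, 'y')]. Only matching variables merge."""
    combined = Counter()
    for coeff, var in terms:
        combined[var] += coeff  # coefficients add only within the same variable
    return dict(combined)

combine([(3, 'x'), (2, 'y')])  # {'x': 3, 'y': 2}: still two terms, not 5-anything
combine([(3, 'a'), (2, 'a')])  # {'a': 5}: like terms really do add
```

Three apples plus two bananas is not five of anything, which is much easier to say about apples and bananas than about x’s and y’s.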

Monday, February 15, 2010

Design Problem 2: Brainstorming Options

The goal this time around is to brainstorm ideas that seem promising.  We’re not going to worry yet about whether they fail one of the design criteria from before; right now we’re more focused on generating ideas that meet at least one criterion than on throwing out ideas that don’t meet all of them.

To organize the brainstorming, let’s categorize efforts by age group.  Based on class readings like Whatever It Takes (the Paul Tough book about the Harlem Children’s Zone that all the TFs read earlier this year), by the time a student in a bad district reaches high school, they’re already several grade levels behind on average.  This suggests we target broad catch-everyone programs at younger ages, where the achievement gap is narrower and hasn’t had a chance to fester, and focus our initial efforts at the high school level on the few odds-beaters who have shown promise and achievement in spite of their environment.  The goal should be to build a good base of participants at the younger age group and follow them as they get older.

We’ll start with some ideas (bullet-pointed) for program-based earlier interventions.  Given our limited resources, a good approach may be to find existing program infrastructures where we can lend technical expertise.  Ideally this setup limits planning efforts and other start-up costs and gets the ball rolling in a short timeframe.

  1. FIRST Robotics
    • FIRST is a program that exposes kids to building robots to meet a design challenge.  The challenge is different each year.  http://www.usfirst.org/.
    • Pros:
      • Direct exposure to hands-on technical activities
        • Design, build, test ties in especially well to engineering
        • Opportunities to enrich hands-on work and designs with basic math to solve small sub-problems – relevant application of math
      • Comes bracketed into different age-appropriate segments
        • Can start targeted to younger segments and expand into later years with same students
    • Cons:
      • Expensive
  2. MathCounts
    • MathCounts is a middle school extracurricular program that revolves around preparation for a spring multi-school competition in mathematics.  Schools can field multiple “teams” of 4;  judging is based on individual and team problem-solving.    Local individual and team winners can proceed to state and national level competitions. 
    • Pros:
      • Junior high age group, more receptive
      • Teaches high-school level math in a one-off problem format, emphasizing creativity over formulaic skills
      • Can be treated as a contest or game as motivation, but without adverse consequences for failure
    • Cons:
      • Probably better geared to undergrads (or even high school students?) as mentors than PhD’s (maybe a PhD as overall admin to lead the undergrad / HS student effort?)

Self-started lower age-group options: I’m not coming up with any good ones here yet.  Opinions?  Leave a comment.

High School Options:

4.   Semester-long Seminar Electives (Seminars)

  • Science electives on topics of interest to grad students who will serve as primary teachers
    • plasma physics
    • biomedical topics – artificial joints, etc.
  • courses open to top-performing, interested seniors and juniors (by application?)
  • classes taught by the grad student as a full GSI-supported position
  • Classes heavily focused on labs and demos
  • Include a design component to incorporate what the students learn
  • Pros:
    • great opportunity to continue with lessons learned from current TFs
    • Focuses on likely future UM applicants
  • Cons
    • Very heavily dependent on the skills and emphasis of the PhD who is teaching.  Probably best to do a team-teach scenario to share the load, have a partner to keep each other motivated / prevent being discouraged.
    • Likely teacher backlash for “stealing” their best students.  Wah.  This is why it needs to be marketed as an elective. 

5.   Magnet Science Courses with Heavy Math Emphasis

  • As opposed to the above “advanced elective”, this is an equivalent course for, say, earth science, bio, chem, phys, the basics
  • Still heavily demo and lab-based.  Bring in a seismometer, learn logs while learning the Richter Scale.  Do Brinell hardness testing with real MSE equipment while learning rock types.  Do DNA resequencing in Bio I with sweet equipment from a lab here.  Bring cutting edge to the class, and then present it well.
  • Still require top performing and application for admission 
  • Also GSI-supported
  • Pros:
    • Earth science was boring.  Bio was a lot of rote memorization.  Chemistry was a little better, physics better still.  I think not coincidentally, that’s also about the order of increasing hands-on learning in those classes. 
    • Including labs takes expertise, and that’s what UM can provide. 
  • Cons:
    • This is a stopgap measure to prevent losing the odds-beaters who are still engaged in 9th-10th grade before the college application process kicks in.  This is also a task that would likely require 2 PhD’s per class.
    • Teacher backlash, teacher backlash, teacher backlash.  See above, except this time it’s not an elective, it’s their turf. 
    • You’re going to need a full-time teacher present, to address legal liability and trained-teacher-present requirements.  You’re also going to need to make this teacher the backup, secondary to primary instruction by the PhD’s with demos.  Depending on the teacher, this may be a very tough (pride issues) or very easy (less teaching stress) sell.
    • You’re also going to need that teacher doing the grading and administrative work to keep the PhD’s under whatever half-appointment tuition time commitment level you’re looking at.
    • The quality of the PhD is again critical here.  It’s better not to do it at all than to do it poorly. 

Other ideas:  This one doesn’t fit into an age bracket, because it’s not an in-school idea.  It’s one that can be done safely from anywhere you’re in front of a computer, though it would likely benefit from insights gained from the TF program. 

  1. Data Analysis for Student, Teacher Performance Evaluation Metrics
    • Fact is, UM involvement doesn’t necessarily mean we need to be on the front lines with the kids.  A heckuva lot of PhD and professorial types are terrible with kids, but great with numbers.  Leverage that – create better tools to keep track of the numbers.
    • Get a couple of PhD’s (maybe School of Ed. plus an EECS coding monkey) to partner on writing an open-source software total monitoring system that can keep a teacher abreast of student issues at a glance, and keep the student apprised of their performance as well.
    • Aggressively support and direct these students in applying for external fellowship funding so they don’t have to cut into your $40k
    • Pros
      • Schools pay good money to get all their grades, attendance, data taken care of.  Dead tree grade books are out.  But those systems kind of suck, and I’d bet a dime to a dollar that the data isn’t effectively crunched.  Build the system, give it away and you have built-in access to the (anonymized) data.  It’s the Google model.
      • Your product has immediate applicability anywhere, not just in Podunk School District.  Broad impact.
      • If you can monetize the product without subjecting your customer (the school districts) to high deployment costs, you have the potential to spin this off as an enterprise that can fund itself and expand.  The issue here is figuring out a revenue model that, if we follow the Google example and guess ads are involved, remains tasteful and refrains from projecting a crass commercialistic veneer over a fundamentally beneficial endeavor.
        • You don’t want text messages automatically sent out to failing students advertising fly-by-night overpriced tutoring services.  This is the nightmare scenario.
    • Cons
      • This requires substantial creativity, vision and focus, and it does not offer any of the warm, fuzzy feel-good aspect that direct involvement with the students does.
      • The district can always tell you to get lost, they don’t want your system
      • You need really good intel on the school’s desires to gear your tools to be appealing to them.  You also need good insight into what they ACTUALLY need, not their own perceptions of same, to add value to the product.

For some good info on why better performance metrics are a really good idea, see these links:

The two reports share an identical intro section, but they are different once you get about 7-8 pages in. More on this in a subsequent post. 

Thursday, February 11, 2010

Outlining the Design Space (or, what can $40,000 buy you?)

Your budget is $40,000 / year.  You have access to a wealth of scientific personnel from a major research university – undergraduates, graduate students and faculty.  You’d like to turn that money and access into a reinvigorated mathematics, science and engineering education at the K-12 level.  Let’s treat this as a design problem, starting by making a concrete list of our resources, constraints and objectives:

Resources:

  • $40,000 / yr
  • Access to Tier I research university students and faculty
  • Access to cutting-edge technical research
  • Access to current education “best-practices”
  • Access to a local K-12 school district

Constraints:

  • Quantifiable results must be achieved within 2 years to retain funding
  • School district is failing, student achievement is poor, and district resources are nil.
  • Student population is predominantly minority.  University population is predominantly white. 

This is the lay of the land.  Like many design tasks, here we have a poorly posed objective.  Knowing how designs without clear goals tend to wander, let’s try to tighten up that objective statement a bit:

Objective:

  • Leverage program access to university technical resources and personnel to increase K-12 student STEM interest and exposure.
  • Measure student STEM exposure as number of students exposed to hands-on work with university research-related topics for some minimum rate of exposure and minimum duration, i.e., number of hours/week and weeks/year.  Select a minimum criterion and justify it.
  • Demonstrate retention of participating students year over year.
  • Develop other metrics of success which can be quantified and evaluated.  Justify the relevance of these metrics. 

This is still somewhat poorly posed, but it’s a little better.  For example, the measure of exposure is somewhat arbitrarily chosen by me.  Basically, you need to cut down your design space somehow, and since I think the chance of raising interest correlates with increased exposure, I linked them in that first objective.  Then I said that I want a program that grabs kids for a certain amount of time, does it consistently every week and over a long span of weeks, and potentially keeps them involved for multiple years.

That last objective is a catch-all to recognize that these objective statements still need some improvement.  Luckily, design optimization is an iterative process, so we’ll be revisiting these objectives.  But first, in my next post I’ll start brainstorming solutions and see what I can come up with. 

Wednesday, February 10, 2010

Algorithms: The first stage in learning or the final stage in understanding?

Consider the above as a prompt, a thought experiment.  Is it better to use algorithms as the first step in a learning process, or as the culmination?  This really hinges on that word better, as in, better for what purpose?  Better for learning what?  As we all know, the question is rarely asked, “Is our students learning?”  -- but I don’t know that we’re all on the same page about what we’re trying to teach!

First things first, let’s have a definition of terms for the lay reader.  An algorithm is a series of explicit steps to accomplish some task.  Here’s an example taken from my class’ algebra textbook of an algorithm to add two numbers:

Rules of Addition:

  • To add two numbers of the same sign,
    1. Add their absolute values
    2. Attach the common sign
  • To add two numbers with opposite signs
    1. Subtract the smaller absolute value from the larger one
    2. Attach the sign of the number with larger absolute value

I don’t want to be polemic, but reading that I get the mental image of that screeching sound of a sudden halt that you hear on TV.  It hits me right around the point where I get to “Subtract the smaller absolute value from the larger one”… what?

At the risk of sounding arrogant, I feel like if I read something in an introductory math book and I don’t get it the first time around, despite just about having my PhD in Applied Physics, something is amiss.  It’s not a foolproof test, but it does have a good success rate.  Still, I value the input of you, my dear (imaginary?) readers, to tell me if I’m off-base here.
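For what it’s worth, here are the textbook’s Rules of Addition transcribed literally into code (my transcription, not the book’s), just to show how much machinery it takes to avoid writing a + b:

```python
def add(a, b):
    # The textbook's Rules of Addition, followed step by step.
    if (a >= 0) == (b >= 0):
        # Same sign: add the absolute values, attach the common sign.
        total = abs(a) + abs(b)
        return total if a >= 0 else -total
    # Opposite signs: subtract the smaller absolute value from the larger,
    # then attach the sign of the number with the larger absolute value.
    total = max(abs(a), abs(b)) - min(abs(a), abs(b))
    sign_source = a if abs(a) >= abs(b) else b
    return total if sign_source >= 0 else -total

add(7, -10)   # -3, after four lines of sign bookkeeping for what is just a + b
```

As a computer program it’s fine; as a thing to hand a fourteen-year-old as their first encounter with signed numbers, less so.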

Frankly, I think math is one of those cases where you have to be very careful about the balance between the general and the specific.  Normally, I’m all about teaching things in the context of a general framework.  My calculus class, for example – they are great at applying things like the product or quotient rule, provided you give them two functions named f and g and tell them to find the derivative of fg or f/g.  But, give the problem in the book a different spin so that it doesn’t match up verbatim with what they’ve seen before, and… disaster.  They’ve just memorized specific ways to do specific problems, without having a framework to put it all in.

Nevertheless, I want to draw a distinction between a conceptual framework and an algorithm.  Let’s take an example from algebra: finding the equation for a line.  As it happens, there are really two fundamental ways to do this, and students are required to learn both.  They are:

  1. Slope-intercept form, y = m*x+b
  2. Point-slope form, (y-y1) = m*(x-x1)

Now, an algorithm for slope-intercept form would say:

  • If you are given two points (x1,y1) and (x2,y2):
    1. Find the slope of the line (m) between the two points as m = rise / run = (y2-y1) / (x2-x1)
    2. Choose one of the two points; we’ll assume you chose (x1,y1).
    3. Find the y-intercept (b) by solving the equation y1 = m*x1 + b for the value of b
    4. Write your final answer in the form y = m*x + b, where x and y are variables and not the specific values for either point
  • If you have a point (x1,y1) and a slope m:
    1. You already have the slope m, so just do steps 3 and 4 from above.
  • If you have a point (x1,y1) and another line in the form y = m*x + b that you are told is parallel to the line through the point (x1,y1)
    1. Parallel lines must have the same slope, so you know that the slope of your line is the same as the slope m of the parallel line, so that’s your m too.  Do steps 3 and 4 from above.
  • If you have a point (x1,y1) and another line in the form y = m*x + b that you are told is perpendicular to the line through the point (x1,y1)
    1. Perpendicular lines have negative reciprocal slopes, so you know that your slope is –1/m where m is the slope of the other line, i.e., if it was y = 4*x+5 then your slope would be –1/4.  Do steps 3 and 4 from above.
    2. Don’t get confused by using m’s in two places here.  The point is that both m’s are slopes of different lines, but (your slope) = –1 / (their slope).

I don’t think that’s even a complete algorithm!  They could give the other line in point-slope form instead of slope-intercept, or they could specify your point as the intersection of two other, completely different lines that your line has to pass through… the point is that the problem can be arbitrarily complicated, and thus so can your algorithm.  We haven’t even touched on point-slope yet!
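To make the contrast concrete, here is a sketch of that algorithm in Python (again, my own illustration; the function names are mine).  Notice that every case funnels into the same two steps: find a slope somehow, then solve for b:

```python
def slope_from_points(p1, p2):
    """Step 1: the slope between two points is rise over run."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

def slope_intercept(point, m):
    """Steps 3 and 4: solve y1 = m*x1 + b for b; return (m, b) for y = m*x + b."""
    x1, y1 = point
    return m, y1 - m * x1

# Given two points:
m, b = slope_intercept((1, 3), slope_from_points((1, 3), (3, 7)))  # y = 2x + 1

# Given a point and a parallel line y = 4x + 5: same slope.
m, b = slope_intercept((2, 5), 4)                                  # y = 4x - 3

# Given a point and a perpendicular line y = 4x + 5: negative reciprocal slope.
m, b = slope_intercept((2, 5), -1 / 4)                             # y = -x/4 + 5.5
```

Each new way of disguising the slope means bolting another case onto the front end, which is exactly the arbitrarily-complicated problem I mean.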

Trying to extend an addition algorithm including absolute values may be good computer programming practice, but it’s not good pedagogy.  Here’s my counterexample: a decent conceptual framework for finding the equation of a line:

  • Start by looking at a line on a graph.  Ask yourself, how can we distinguish this line from any other line we might draw?  What makes it unique, one of a kind?  How could I make any line look like any other if I could stretch it and push it and pull it and move it around?
  • It turns out there are only two things we can change, two properties that make a line the line it is.  One is how steep it is, which we call the slope.  A hill is really steep if it goes up a long way without going very far horizontally: a handicap ramp is not very steep, so it has a small slope; a staircase has a large slope; an elevator has a super-enormous slope.  So, to decide how steep a line is, I need two pieces of information: how much it rises, and how far it goes horizontally, which we call how far the graph runs from left to right.  The slope is just the rise divided by the run, so slope = rise/run.
  • Remember that there’s another thing we can change – I could shift a line up and down or left and right.  For example, a handicap ramp has the same slope whether it’s on the first or second story, or in this room or the next, but those are all different ramps.  In math, we just talk about it moving up and down, because our lines go on forever so a shift up and down can look the same as a shift left or right (Something about pictures and kilo-words comes to mind here, alas).
  • So, we need two pieces of information to talk about lines: a slope, and how far up or down it should be shifted.  If we have a slope and a point the line has to go through, we can pin that line down and know exactly where to draw it.  But sometimes, we can be tricky about how we give our two pieces of information.  For example,
    • we could give two points instead of a point and a slope – then we could figure out from the second point how much the line would rise and how much it would run from the first point, so the second point would hold the key to finding the slope.
    • we could give one point, and then tell you the slope of a different line that we said was parallel to the first.  Parallel lines are like handicap ramps on another story or in another room – they have exactly the same slope, just a different shift up, down, left or right.  So you’d have a point and you’d know your slope was really the same as the other line’s slope!
    • kind of like above, we could give one point and then tell you the slope of another line that was perpendicular -- (discussion of the –1/m bit would require a picture).
  • Of course, the upshot is, you always need two bits of information: a point to pin down your line in one spot and a slope to decide how steep to draw it.  It just happens to turn out that there are a lot of ways to represent that second bit of information, the slope, with other facts like the location of a second point, or the slopes of parallel and perpendicular lines.

If you actually read all that, bless your heart.  Brevity, I ain’t got it.  I could go on to talk about how we could pick a special spot, the place where the line crosses the vertical (y) axis as our point that we’ll always use, but I think the dead horse, she is beaten. 

To return from our super-long and in-depth example, I feel like having a discussion about that framework is crucial – I don’t know that any of my kids really understood that the difference between slope-intercept and point-slope form of a line is just that point-slope is a generalization to a fixed point that doesn’t have to be on the y-axis.  They couldn’t conceive of it as a generalization, because they didn’t know what made us pick that specific point in the first place!  It was as if God came down and said, “Thou shalt use the y-intercept”, not like we discussed it and talked about why it made things simple.
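For the record, the generalization works like this: take point-slope form, plug in the y-intercept point (0, b) as your fixed point, and slope-intercept form falls right out:

```latex
y - y_1 = m(x - x_1)
  \;\xrightarrow{\,(x_1,\,y_1) = (0,\,b)\,}\;
y - b = m(x - 0)
  \;\Longrightarrow\;
y = mx + b
```

One line of algebra, but only if you know the y-intercept was a choice in the first place.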

So, now let me see if I had a point in all my long-windedness here… ah yes, found it: the balance between generality and specificity in math teaching.  I distinguish between generality as an overarching conceptual framework, which gives you a roadmap when you’re learning something new so you can figure out how it fits with the other stuff you know, and generality as an algorithm that lets you handle any arbitrary set of inputs correctly.

I think the right order to teach those things is something like this: start with a discussion of the framework, so students get a preview of what we’re going to do and why; then do all the specific cases and drill the hell out of them, constantly referring back to where we are in the framework at every step; then, at the end, make your students write the algorithm themselves, so you expose any remaining flaws or gaps in their thinking with devious, pathological, algorithm-breaking test cases.  Discuss.