Patrick Flynn
Computational Math, Scientific Computing, High Performance Computing,
Machine Learning, and Software Development
Education
M.S. Applied Math, CSULB (Cal State University, Long Beach)
M.S. Computer Science, UCI (University of California, Irvine)
B.S. Computer Science, UCI
B.S. Mathematics, UCI
B.S. Economics, Cal Poly Pomona (California State Polytechnic University, Pomona)
Jobs/Internships
Research Scientist, Jan. 2019 - Present
Mesoscale Modeling Section, Marine Meteorology Division
Naval Research Lab, Monterey, CA
Project: Neptune, the Navy's next-generation atmospheric prediction system
C/C++, Fortran, HDF5, Bash, Make, CMake, Git/GitHub/Bitbucket, NWP (numerical weather prediction)
- Designed, created, and maintain a custom Fortran library that simplifies and standardizes use of the canonical HDF5 library
- Designed, created, and maintain a library that reads Neptune's YAML output-specification files, using the canonical LibYAML C library to scan the YAML
- Designed, created, and maintain a nearest-neighbor search library built on the KDTREE2 Fortran library
- Designed, created, and maintain Neptune's Make-based build system
- Designed, created, and maintain Neptune's GitHub Actions continuous integration (CI) system
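The nearest-neighbor library above is Fortran built on KDTREE2; as a minimal sketch of the idea (a small, consistent query interface wrapping a spatial search), here is a hypothetical Python version. Brute force stands in for the kd-tree, and all names here are illustrative, not the actual library's API.

```python
import math

class NearestNeighbors:
    """Hypothetical nearest-neighbor interface; a kd-tree build would
    replace the brute-force search in a production implementation."""

    def __init__(self, points):
        # points: sequence of coordinate tuples, e.g. grid-cell centers
        self.points = list(points)

    def query(self, target, k=1):
        """Return the k points nearest to `target`, with distances."""
        ranked = sorted((math.dist(p, target), p) for p in self.points)
        return [(p, d) for d, p in ranked[:k]]

nn = NearestNeighbors([(0, 0), (1, 0), (0, 2), (3, 3)])
nearest_two = nn.query((0.9, 0.1), k=2)  # closest points to (0.9, 0.1)
```

In the Fortran original, the payoff of such a wrapper is that callers never touch the underlying tree structures directly.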
Student Intern, Summer 2018
US DOD High Performance Computing Internship Program (HIP),
Naval Research Lab, Washington, D.C.
Evaluated the suitability of Google's TensorFlow machine learning framework for
implementing traditional physics-based algorithms. Done well, this lets such
algorithms exploit TensorFlow's automatic use of NVIDIA accelerators without the
developer writing CUDA code. Wrote Python/C/C++ code in sequential, OpenACC, CUDA,
and TensorFlow versions to compare feasibility and speed. Also learned to use
TensorFlow for its principal machine learning and deep learning purposes, and
reviewed and extended my machine learning, deep learning, and neural network skills.
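The core idea can be sketched with a 1D heat-equation step written two ways: an explicit loop (the traditional physics-code form) and a whole-array expression. The array form is what maps onto TensorFlow ops and hence onto GPUs. NumPy stands in for TensorFlow here to keep the sketch dependency-light; the function names and grid are illustrative, not from the internship codebase.

```python
import numpy as np

def heat_step_loop(u, alpha):
    """One explicit finite-difference step of u_t = alpha * u_xx, loop form."""
    v = u.copy()
    for i in range(1, len(u) - 1):
        v[i] = u[i] + alpha * (u[i - 1] - 2.0 * u[i] + u[i + 1])
    return v

def heat_step_array(u, alpha):
    """Same step as a whole-array expression; TensorFlow ops look analogous."""
    v = u.copy()
    v[1:-1] = u[1:-1] + alpha * (u[:-2] - 2.0 * u[1:-1] + u[2:])
    return v

u0 = np.zeros(64)
u0[32] = 1.0                      # a spike of heat in the middle
a = heat_step_loop(u0, alpha=0.25)
b = heat_step_array(u0, alpha=0.25)
assert np.allclose(a, b)          # both forms compute the same step
```

The porting question is whether translations like loop-to-array can be done faithfully and fast enough across a real physics code.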
Student Intern, Summer 2017
US DOD High Performance Computing Internship Program (HIP),
Naval Research Lab, Washington, D.C.
Assessed the Virginia Tech CU2CL CUDA-to-OpenCL translator. Wrote sequential C implementations
of several numerical analysis routines, manually translated them into CUDA, then translated the
CUDA codes into OpenCL both manually and with CU2CL. Also wrote OpenACC versions of the
sequential codes. Compared performance across the sequential, CUDA, OpenCL, and OpenACC
versions on multiple hardware platforms, including an HPC multi-GPU system.
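The specific routines aren't named above, so as an illustrative stand-in (shown in Python rather than the original C), here is the kind of sequential numerical kernel that ports cleanly to CUDA/OpenCL/OpenACC: each loop iteration is independent, so each GPU thread can take one or more subintervals.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule for the integral of f over [a, b].
    Each term of the sum is independent of the others, which is why
    this loop maps directly onto a GPU kernel in CUDA or OpenCL."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):          # this loop becomes the parallel kernel
        total += f(a + i * h)
    return h * total

approx = trapezoid(math.sin, 0.0, math.pi, 100_000)  # exact value is 2
```

Comparing hand-written ports of kernels like this against CU2CL's automatic output is what the assessment amounted to.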
Visiting Graduate Researcher, Summer 2016
UCLA Applied Math REU, Particle Slurry group
Researched constant-flux viscous particle-laden fluid flow on an incline:
physical experimentation, math modeling, and numerical simulation (MATLAB).
Student Intern, Summer 2014
National Ignition Facility (nuclear fusion),
Lawrence Livermore National Laboratory (LLNL), Livermore, CA
Developed an automated system (Perl/MATLAB/Oracle SQL) to track and assess
neutron radiation damage to scientific cameras in the NIF target bay.
Student Intern, Summer 2012
Physics and Life Sciences,
Lawrence Livermore National Laboratory (LLNL), Livermore, CA
Developed the HPBETA high-precision beta radiation simulation
in C/C++ (standalone and CERN/ROOT components).
Programming
Languages:  C/C++, Fortran, Python, SQL, MATLAB, Java, Perl, PHP, C#, Mathematica, SAS, HTML, JavaScript, CSS
Frameworks: TensorFlow, CERN/ROOT
HPC:        pthreads, OpenMP, MPI, CUDA, Xeon Phi, OpenACC, OpenCL, Amazon AWS
Dev Tools:  Git/GitHub/Bitbucket, GDB, DDD, JUnit, PBS Pro
OSs/Other:  Linux/Bash/Scripting, Windows, Macintosh, MS Office
Mathematics
Courses:
ODEs, PDEs, Stochastic DEs, Nonlinear ODEs/Dynamical Systems, Numerical Analysis, Finite Element Methods,
Finite Difference Methods, Scientific Computing, Calculus of Variations (including Optimal Control),
Applied Analysis (including Distribution Theory: weak solutions of ODEs/PDEs),
Math Modeling (taken twice: once with a classical ODE/PDE/SDE approach and
once with a data-analysis approach), Matrix Methods in Data Analysis and Pattern Recognition,
Metric Spaces, Real Analysis (undergrad and graduate), Complex Variables, Linear Programming,
Nonlinear Optimization, Convex Optimization, Group Theory, Discrete Math, Probability Models,
Math Statistics, Regression Analysis, and Random Processes.
Self study:
11/2018 - Discovering Knowledge in Data: An Introduction to Data Mining, Daniel T. Larose
08/2018 - Learning From Data: A Short Course (Abu-Mostafa, Magdon-Ismail, Lin), including e-Chapters on Similarity-Based Methods, Neural Networks, and Learning Aides
04/2018 - Introduction to Probability and Statistics, Mendenhall/Beaver/Beaver (review)
12/2015 - Elementary Differential Equations and Boundary Value Problems, Boyce/DiPrima (review for a graduate PDE class)
12/2015 - Calculus, Stewart (review all material and prepare for use of vector calculus in a graduate PDE class)
High Performance Computing
HPC Programming languages/environments:
pthreads, OpenMP, MPI, CUDA, Xeon Phi, OpenACC, OpenCL, Amazon AWS
Courses:
Concurrent Parallel Programming, Fall 2016, CSULB
Theory/Applied, pthreads, OpenMP, MPI, CUDA, Xeon Phi, Amazon AWS
High Performance Architectures and Their Compilers, Spring 2011, UCI
Principles of Operating Systems, Winter 2008 (student), Spring 2011 (TA), UCI
Covered aspects of concurrency: processes, threads, scheduling,
critical sections, mutual exclusion, deadlocks, synchronization schemes, etc.
Training:
Intel Xeon Phi Seminar, Santa Clara, March 30-31, 2015
High Performance Computing Workshop, LLNL, June 9-13, 2014
LLNL supercomputers, Linux clusters, batch systems, MPI, OpenMP,
POSIX Threads, parallel debugging
XSEDE Workshops, Pittsburgh Supercomputing Center, most via satellite telecast
MPI, UCLA, June 17-18, 2013
OpenACC, Harvey Mudd College, April 1, 2013
Multicore Programming Summer School, UIUC, June 22-26, 2009
Java Parallel, OpenMP, MPI, CUDA, Intel TBB
Self study:
Fall 2017 - Parallel Programming and Optimization with Intel Xeon Phi Coprocessors (2nd edition), Vladimirov, Asai, and Karpusenko
Spring/Summer 2017 - review and extend CUDA knowledge in preparation for and during 2017 summer internship
Spring/Summer 2017 - Heterogeneous Computing with OpenCL (revised OpenCL 1.2 edition), Gaster et al.
Computer Science
Courses:
Algorithm Design and Analysis, Advanced Data Structures, Computational Geometry,
Artificial Intelligence, Machine Learning, Database Theory, Oracle, Distributed Computing,
Theory of Computation, Computer Architecture (included assembly language),
Digital Logic, Software Engineering, Operating Systems, Programming Languages,
Compilers, Computer Simulation, Multitasking Operating Systems (Unix),
Statistics/Probability in Computer Science, Internet, Bioinformatics,
Java, C/C++, Visual Basic, SQL, HTML/JavaScript, CSS.
Project courses:
Database - Video rental application (WWW, JavaServer Faces, SQL)
AI - ASL fingerspelling identification (neural network, Visual C#)
3D Computer Vision - Converting 360 degrees worth of 2D pictures of an object to a 3D model (MATLAB)
Self study:
05/2019 - Review/extension of C/C++, Fortran, HDF5, Bash scripting, and Git/GitHub knowledge while waiting for the NRL security clearance process to finish
11/2018 - Discovering Knowledge in Data: An Introduction to Data Mining, Daniel T. Larose
08/2018 - Learning From Data: A Short Course (Abu-Mostafa, Magdon-Ismail, Lin), including e-Chapters on Similarity-Based Methods, Neural Networks, and Learning Aides
05/2018 - Guido van Rossum's online Python 3 tutorial (update Python 3 skills and Python cheatsheet)
05/2018 - Introduction to Python for Computational Science and Engineering (update Python 3 skills and Python cheatsheet), Hans Fangohr
12/2015 - Core Java: Volume 1 - Fundamentals, Horstmann and Cornell (review Java and clean up Java cheatsheet)
OS/Software skills:
Linux/Bash, Windows, Macintosh, MS Office
Programming languages:
Through my M.S. in Computer Science, my B.S. in Information and Computer Science, my internships,
and programming on my own, I have been exposed to many programming languages.
When I need a language I don't know well, or at all, I can gain a good
working understanding of it fairly quickly.
Links to my language cheatsheets are below. They help me remember the sometimes subtle
differences between languages that strongly affect how code is written.
C, C++, Fortran 77, Fortran, Java, and Python
Programming philosophy:
I strive to efficiently write code that is easily understood, and readily modified or extended,
by me or by others, whether tomorrow or a year from now. To that end I take great care with
variable naming, white space, and documentation: enough care to help, but not so much that it
slows development. I don't necessarily use the cleverest algorithm, but the one that others can
most easily understand while still meeting the required level of performance. I feel a strong
responsibility to develop software to this ideal efficiently, so that my code is of higher
value to whomever I am creating it for.
Even when taking a software development class or learning a language on my own, I code to this
ideal as if it were professional work. This builds a strong habit of writing good code as I
create it, rather than relying on extensive later revision and comments added as an afterthought.