
James Williams

Category: Technology

Pulling a Smartsheet table into Microsoft Excel using Power Query

Well, if you thought my first post in eight months would be exotic, go ahead and smash that back button.

I use this technique when we have one-off assignments at work where I need a quick and dirty web-based data store that several people can collaborate on, and that can be easily queried in Excel without any intermediate infrastructure or processing. This would be quite trivial if not for Smartsheet’s intractable API format.

Assuming you have a Smartsheet grid you want to mirror in Excel and refresh on the fly, you’ll need the sheet’s ID and an API bearer token for a user with viewer permissions.
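If you want to sanity-check those two values before touching Excel, you can hit the same endpoint the query below uses with curl; a valid ID and token should return the sheet as JSON:

curl -s "https://api.smartsheet.com/2.0/sheets/$SHEET_ID" \
  -H "Authorization: Bearer $BEARER_TOKEN"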

In Excel, open Power Query and create a new query using the advanced editor. Make sure to replace $SHEET_ID and $BEARER_TOKEN. The query will bring in both your data and column headers.

let
    Source = Json.Document(
        Web.Contents("https://api.smartsheet.com/2.0/sheets/$SHEET_ID", [
            Headers=[
                #"Content-Type"="application/json",
                Authorization="Bearer $BEARER_TOKEN"
            ]
        ])
    ),

    // Process rows
    RowsData = Source[rows],
    RowsTable = Table.FromList(RowsData, Splitter.SplitByNothing()),
    ExpandedRows = Table.ExpandRecordColumn(
        RowsTable,
        "Column1",
        {"id", "rowNumber", "expanded", "createdAt", "modifiedAt", "cells", "siblingId"},
        {"ID", "RowNumber", "Expanded", "CreatedAt", "ModifiedAt", "Cells", "SiblingId"}
    ),
    ExpandCells = Table.ExpandListColumn(ExpandedRows, "Cells"),
    ExpandedCellsDetails = Table.ExpandRecordColumn(
        ExpandCells,
        "Cells",
        {"columnId", "value", "displayValue"},
        {"ColumnID", "CellValue", "CellDisplayValue"}
    ),
    RemovedCellsMetaColumns = Table.RemoveColumns(
        ExpandedCellsDetails,
        {"ID", "Expanded", "CreatedAt", "ModifiedAt", "CellDisplayValue", "SiblingId"}
    ),
    PivotedCellsByColumnId = Table.Pivot(
        Table.TransformColumnTypes(RemovedCellsMetaColumns, {{"ColumnID", type text}}),
        List.Distinct(Table.TransformColumnTypes(RemovedCellsMetaColumns, {{"ColumnID", type text}})[ColumnID]),
        "ColumnID",
        "CellValue"
    ),
    CleanRowData = Table.RemoveColumns(PivotedCellsByColumnId, {"RowNumber"}),

    // Process columns
    ColumnsData = Source[columns],
    ColumnsTable = Table.FromList(ColumnsData, Splitter.SplitByNothing()),
    ExpandedColumns = Table.ExpandRecordColumn(
        ColumnsTable,
        "Column1",
        {"id", "title"},
        {"ColumnID", "ColumnTitle"}
    ),
    ColumnTitlesMapped = Table.Pivot(
        Table.TransformColumnTypes(ExpandedColumns, {{"ColumnID", type text}}),
        List.Distinct(Table.TransformColumnTypes(ExpandedColumns, {{"ColumnID", type text}})[ColumnID]),
        "ColumnID",
        "ColumnTitle"
    ),

    // Add headers
    CombinedDataTable = Table.Combine({ColumnTitlesMapped, CleanRowData}),
    FinalData = Table.PromoteHeaders(CombinedDataTable)

in
    FinalData

How to check out and edit a pull request locally

Let’s say you have a Dependabot pull request, and Charlie Marsh has added a new check to Ruff that causes your lint check to fail. You can fix the lint error and push the changes back to the pull request branch!

First, checkout the pull request locally:

# In this case, I'm updating ruff to v0.0.278
git fetch origin dependabot/pip/ruff-0.0.278
git switch --track origin/dependabot/pip/ruff-0.0.278

We’ve now checked out the PR branch and set it to track the remote. We can use this pattern to keep tabs on long-running PRs or, as in this case, simply push an additional patch before merging. If you’d like a friendlier local branch name, you can append :my-branch-name to the refspec in the git fetch call and then run git switch my-branch-name to check it out; just keep in mind that this won’t set the local branch to track the remote.
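For example, with the same Dependabot branch (my-branch-name is whatever you want to call it locally):

git fetch origin dependabot/pip/ruff-0.0.278:my-branch-name
git switch my-branch-name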

In my case, this ruff release does not provide any new rule categories and my lints still pass; however, I’d like to update the ruff version in my .pre-commit-config.yaml file so that it’s consistent with my requirements.txt. I’ll make that change, commit, and push back to the remote.

git add .pre-commit-config.yaml
git commit -m "Update pre-commit config."
git push

At this point, your checks should fire again and you can merge into your trunk using your preferred merge method. Check out the real pull request to see how this looks server-side.

Running a local Kubernetes cluster with Kind: A step-by-step guide

It has happened. I thought I could avoid it, but here we are. As if getting your program to run on one computer wasn’t hard enough, now we have to run it on multiple computers at the same time? They have played us for absolute fools.

Anyway, assuming we have some shared experience with Docker, let’s introduce some terminology:

  • A Pod is the smallest deployable unit in Kubernetes, often a single instance of an application. As I understand it, a pod is roughly the logical equivalent of a container, though it can hold more than one.
  • Nodes are the machines that host these pods. More nodes allow for more redundancy.
  • A Cluster is a set of nodes working together. A cluster can run multiple nodes, and a node can run multiple pods, and a pod typically consists of between two and fifteen orca whales.
  • A Service is an abstraction which provides a single network entry point to distribute traffic across the cluster.

For local development I am using Kind, a tool which allows you to run Kubernetes clusters in Docker containers. It is a lightweight way to run Docker containers inside Kubernetes inside a Docker container (pause for effect).

The command to create a cluster is: kind create cluster
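Once that finishes, you can confirm the cluster is reachable (kind is the default cluster name, so the kubectl context is kind-kind):

kubectl cluster-info --context kind-kind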

To deploy the application, it needs to be packaged as a Docker image. After creating the Dockerfile, the image is built and loaded into the Kind cluster with the following commands:

docker build -t my-image-name .

kind load docker-image my-image-name

I should note that in addition to Kind, there is a similar tool called minikube, though it seemed to require setting up a container registry to get local images into the cluster.

The next step is to define a deployment and a service for the application in Kubernetes manifest files in your project directory. The simplest possible configuration is something like this:

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-image-name-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-image-name
  template:
    metadata:
      labels:
        app: my-image-name
    spec:
      containers:
        - name: my-image-name
          image: my-image-name
          imagePullPolicy: Never # Use for local image
          ports:
            - containerPort: 8000 # Use the port your application runs on

Note that the imagePullPolicy is set to Never because we are using a local image with the implied tag latest. Pinning a specific tag should make this unnecessary; otherwise the default behavior is to try to pull the image from Docker Hub, which will fail every time (or worse, deploy something unexpected).

In addition to matching the exposed port of the container, your application should be configured to bind to all interfaces (0.0.0.0), not just localhost.
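For example, if the application were a Python web service run with uvicorn (purely illustrative; substitute your framework’s equivalent), the container’s start command might be:

uvicorn app:app --host 0.0.0.0 --port 8000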

# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-image-name-service
spec:
  type: NodePort
  ports:
    - port: 8000
      nodePort: 30080
  selector:
    app: my-image-name

With these files in place, create the deployment and the service by running kubectl apply -f <file-name> for each. Verify them using kubectl get deployments and kubectl get services.

If there are any issues, logs can be checked using kubectl logs <pod-name>, and the pod name can be found using kubectl get pods.

Remember to specify environment variables in the deployment.yaml file under env in the containers specification if your application requires them.
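As a sketch, with a made-up variable name, that addition to the container entry looks like:

# in deployment.yaml, under the container entry
          env:
            - name: MY_SETTING # hypothetical variable
              value: "example"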

If you’re running Docker inside a Linux virtual machine, port 30080 should already be exposed. If you’re using Docker Desktop, there’s one more step: forwarding a local port to the service port. This can be done using:

kubectl port-forward service/my-image-name-service 30080:8000

This will map the service to localhost:30080 on your local machine. Launch it in tmux or append an ampersand to the command, as it will otherwise block the terminal.

Fin. Now deploy to prod on a Friday afternoon and you’re done!

Notes from Stephen Wolfram's ChatGPT primer

Source Material: Stephen Wolfram, 2023-02-14

ChatGPT is a large-scale transformer-based language model that is designed to predict the next word in a sentence given the context of what has been said. It is a neural network with 175 billion parameters that has been trained on a vast corpus of text, enabling it to form and apply a semantic structure to human language.

The GPT in ChatGPT stands for Generative Pre-trained Transformer. Generative means that the model is capable of generating new text rather than just recognizing patterns in existing text. Pre-trained means that the model has been trained on a large corpus of text before being fine-tuned for a specific task. Transformer refers to the specific type of neural network architecture, which is designed to better handle long-term dependencies between words in a sentence.

To accomplish its task, ChatGPT uses a technique known as unsupervised learning, which allows it to learn patterns in the data without being explicitly taught. Instead of being trained on explicit examples of inputs and their associated outputs like in supervised learning, the model is given a large corpus of text and is trained to predict the next word in a sentence by masking the latter part of the sentence and having it predict what should come next. It then compares what it generated with the masked text, and iteratively adjusts its parameters to minimize the error.

To evaluate how well the model performs on each iteration, a loss function is used. The loss function calculates how far away the model’s predictions are from the desired outcome, and the neural net weights are adjusted in a way that minimizes the result of the loss function.

Training the model both optimizes the neural net weights and produces embeddings, which are a way of representing the meaning of words as arrays of numbers (in the vague, undefinable sense of ‘meaning’). Words that are similar in meaning are represented by nearby arrays. ChatGPT takes this concept further by generating embeddings not just for individual words, but for entire sequences of words.

These embeddings are then used to predict the probabilities of different words that might come next in a sentence. This is accomplished using a transformer architecture, which is designed to handle dependencies between tokens even when they are far apart in the input sequence. One of the defining features of the transformer is its attention mechanism, in which certain neurons focus more on relevant parts of the sequence than others. This allows ChatGPT to take the context of the conversation into account when generating the next token, even when that context is nowhere near the token being generated, and it is the main reason ChatGPT comes across as a coherent entity.

Finally, ChatGPT uses a temperature setting to introduce a degree of randomness into its predictions, which can make the output more diverse and interesting.
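As a toy illustration (a generic sampling sketch, not ChatGPT’s actual code): the model’s raw scores are divided by the temperature before being turned into probabilities, so values below 1 sharpen the distribution toward the likeliest token, while values above 1 flatten it toward randomness.

import numpy as np

def sample_token(logits, temperature=1.0):
    # Scale the raw scores: low temperature sharpens, high temperature flattens
    scaled = np.array(logits, dtype=float) / temperature
    # Softmax, shifted by the max for numerical stability
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Draw one token index from the resulting distribution
    return np.random.choice(len(probs), p=probs)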


What strikes me as Stephen’s most profound point is that ChatGPT’s success is itself a scientific discovery: it suggests there may be simple rules describing how the semantics of human language can be arranged that we ourselves don’t yet understand. Studying the pathways and structures ChatGPT uses could help deepen our own understanding of human language.

LLM Round-up

When I asked ChatGPT what type of AI it is, this is what it said:

ChatGPT is a type of language model, specifically a variant of a transformer model called the Generative Pretrained Transformer (GPT). It is an example of artificial general intelligence (AGI), which means that it is capable of performing a wide range of natural language processing tasks, such as text generation, summarization, translation, and question answering, among others.

I know some of these words 🤦‍♂️.

I’ll never be an AI researcher, but I do want to have a layman’s understanding of what the tool is capable of and how it works. I’ve been searching for educational resources that lie somewhere in the sweet spot between accessibility and depth. I want to understand the basic terminology, like what a transformer is, and also understand a large language model’s shortcomings. And I want to understand where a language model fits into the broader landscape of AI.

To that end, I’m compiling a few resources that seem to be well regarded. I’ll treat this post as a digital bookshelf, and will update it with my thoughts and notes as I work through the material.

We’ll start with 35,000 words from Stephen Wolfram as a primer, then move into Andrej Karpathy’s Zero to Hero lecture series.

Three managerial principles I wrote down in 2015

In my mid-20s—okay, mid- to late-20s—I worked as the Assistant Manager of a restaurant. It was a formative time for me, though I didn’t fully realize it then.

In 2015 I jotted down three managerial principles on Medium that had been percolating in my mind. (Medium was quite in vogue at the time.) Having forgotten all about this account, I stumbled upon it, and the post struck me as uncharacteristically good advice from my otherwise uninformed young self.

The following is dated May 10, 2015.

I keep three things in mind on a day-to-day basis that help me decide on a particular course of action, how best to conduct myself in a given situation, and where to focus my energy when demand outstrips supply.

  1. As a leader, your most critical function is teaching and supporting your people. You are not your business — your employees are, and your job is to develop them into strong ambassadors of the brand. The quality of your product, the happiness of your customers, and the volume of your sales all follow from the attitude and competence of your staff. Interpersonal skills and the ability to make improvements and criticisms without being hurtful or discouraging are perhaps the most critical attributes of any manager. Positive feedback is important.
  2. Organization and prioritization. Simply find a way to ensure that no details are missed and nothing slips through the cracks. Do what you can to minimize surprises; be flexible and adaptable when they do occur.
  3. Managing your own attitude. You set the tone and energy. I work in a fast-paced, high-stress business where this is particularly important. Letting small incidents send you into an outwardly visible tailspin is a quick and sure way of crashing a complex or time-sensitive operation. Your staff feed off your energy and positivity, and being mindful of that is critically important. A friend once told me to “head to work each day as if you were going to war”: expect and anticipate problems in a manner such that they won’t ruin your day when they occur. Keep it light and save the overreactions for incidents that warrant it.

Along with hard work and volition, these three points seem to work well for me. I’m still young and stupid, so take with a dash of salt.

Will update as I make more mistakes and learn from them.


Bonus: I also wrote this, titled reviews_cometh.txt:

The customers grow restless in the evenings. They arrive in numbers by moonlight and street lamp, hungry for bread and wine. It is in these hours, as dusk comes ever sooner with each revolution, they wait.

At the door, they wait.

For drinks, they wait.

For their meat and their honey…

They wait.

In these desperate moments — moments of hope then anger, denial then acceptance, they reach upon their mobile devices and tap.

For the negative reviews cometh.

How to convert a subdirectory to its own git repository/submodule

# Make a clone of the repository
git clone <your_project> <new_submodule>

# CD into the new repository
cd <new_submodule>

# Use filter-branch to isolate the subdirectory
git filter-branch --subdirectory-filter 'path/to/subdirectory' --prune-empty -- --all

# Remove the old remote
git remote rm <remote_name>

git filter-branch lets you rewrite Git revision history and apply custom filters on each revision. It should be used with caution! From the Git Manual (emphasis mine):

git filter-branch has a plethora of pitfalls that can produce non-obvious manglings of the intended history rewrite (and can leave you with little time to investigate such problems since it has such abysmal performance). These safety and performance issues cannot be backward compatibly fixed and as such, its use is not recommended.

Nevertheless, since we live dangerously around these parts, here’s what’s happening with the filter-branch command should you choose to use it:

  • --subdirectory-filter is the key option. It takes a path to a subdirectory and rewrites the repository to include only that subdirectory.
  • --prune-empty removes commits that don’t change anything.
  • -- --all is a way to pass arguments to the internal git rev-list command. In this case, it’s telling git to run the command on all branches.

Once the operation is done, you’ll have a new repository with only the subdirectory you specified. You can then push it to a new remote and add it as a submodule to your original repository.
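A sketch of those last steps, with placeholder names (your remote URL and paths will differ):

# Point the new repository at its own remote and push everything
git remote add origin <new_remote_url>
git push -u origin --all

# Back in the original repository, swap the subdirectory for the submodule
git rm -r path/to/subdirectory
git commit -m "Remove subdirectory before adding it as a submodule"
git submodule add <new_remote_url> path/to/subdirectory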

git filter-branch is a destructive operation. While I’ve used the above command with success on my own repositories, your mileage may vary and I probably can’t help you if something goes wrong.

GPG Field Guide

Jürgen Gmach on his personal blog:

While I need to use GPG pretty regularly, I always have to look up the commands - they just don’t stick :-)

Over the last couple of months I collected every command I had to use. Enjoy!

🔖 One for the bookmarks: GPG - All I Need to Know

How to configure a python script as the default build task in VS Code

This snippet goes in your .vscode/tasks.json file. It will run the current python file as the default build task.

  • command will invoke the python interpreter that is configured in your workspace settings.
  • group will make this task the default build task.
  • presentation:focus will focus the terminal window when the task is run.
  • presentation:reveal will reveal the terminal window when the task is run.

Change args to a specific python file’s path if you’d like to run the same file each time.

{
  // See https://go.microsoft.com/fwlink/?LinkId=733558
  // for the documentation about the tasks.json format
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Python: current file",
      "type": "shell",
      "command": "${command:python.interpreterPath}",
      "args": ["${file}"],
      "problemMatcher": [],
      "group": {
        "kind": "build",
        "isDefault": true
      },
      "presentation": {
        "focus": true,
        "reveal": "always"
      },
      "options": {
        "cwd": "${workspaceFolder}"
      }
    }
  ]
}

Introducing bulletproof-python

I owe a debt of gratitude to Claudio Jolowicz for introducing me to modern python tooling. His Hypermodern Python template and associated blog series are a great resource.

The Hypermodern system is built around nox and poetry.

  • nox is similar in spirit to tox but uses python scripts instead of configuration files.
  • The hypermodern stack is very powerful, but since I’m not particularly clever, I find it difficult to understand.

I decided to throw together a simple template built around tox and pip-tools instead.

It strives to be clean, simple and hard to break, albeit incomplete and likely suitable only for small libraries without broad distribution.

🌱 The template is available here: bulletproof-python.

TIL about AlDente by AppHouseKitchen

The ‘optimize battery charging’ feature in macOS has never worked well for me. It tends to keep the battery at 100%, which is not great for its longevity.

I recently discovered AlDente by AppHouseKitchen. It’s a paid app that allows you to set a maximum charge level for your battery. There is a free tier, I believe, but I almost always buy the paid versions of apps that I like and use.

I’ve been using it for a few days now and it’s been working great. If you’re similarly annoyed by macOS’s battery charging behavior, I highly recommend it. Not a sponsored post, just a (so far) happy user.